Sunday, May 7, 2017

A Romp Through Information Security Economics and Society With A Cambridge Professor

Romp may not be the exact word I was looking for, but this piece is surprisingly breezy considering the depth of the ideas explored.
The sign of subject mastery and a sharp mind.
The author, Ross Anderson, is professor of security engineering at Cambridge University. He is one of the founders of the field of information security economics. This work started before 9/11, but was accelerated by the events of that day. Many security failures can be traced to wrong incentives rather than technical errors, and economic analysis has shed new light on many problems that were previously thought to be intractable. Security economics gives us deep insights into the safety and dependability of online systems, as well as into the more traditional security problems of interest to law enforcement and the insurance industry.

He is now bringing together security engineers with behavioral economists and psychologists to extend this work into the behavioral sciences. His research interests range from small-scale questions such as the nature of deception (why we lie, how we detect it, and how it's changing as we move online) to large-scale questions such as the misapprehension of risk and its manipulation (why our societies are so vulnerable to terrorism, and the evolution of the politics of fear)....
-from his Edge bio.

From Edge.org:

The Threat
Ross Anderson [5.8.17]
People who are able to live digitally enhanced lives, in the sense that they can use all the available tools to the fullest extent, are very much more productive and capable and powerful than those who are still stuck in meatspace. It’s as if you had a forest where all the animals could see only in black and white and, suddenly, along comes a mutation in one of the predators allowing it to see in color. All of a sudden it gets to eat all the other animals, at least those who can’t see in color, and the other animals have no idea what’s going on. They have no idea why their camouflage doesn’t work anymore. They have no idea where the new threat is coming from. That’s the kind of change that happens once people get access to really powerful online services.

So long as it was the case that everybody who could be bothered to learn had access to AltaVista, or Google, or Facebook, or whatever, then that was okay. The problem we’re facing now is that more and more capable systems are no longer open to all. They’re open to the government, to big business, and to powerful advertising networks....

The fascinating thing about doing research in information security is that over the past thirty years it has been a rapidly expanding field. We started off thirty years ago with only a couple of areas that we understood reasonably well—the mathematics around cryptography, and how you go about protecting operating systems by means of access controls and policy models—and the rest of it was a vast fog of wishful thinking, snake oil, and bad engineering.

There were only a few application areas that people really worried about thirty years ago: diplomatic and military communications at one end, and the security of things like cash machines at the other. As we’ve gone about putting computers and communications into just about everything that you can buy for more than ten bucks that you don't eat or drink, the field has grown. In addition to cash machines, people try and fiddle taximeters, tachographs, electricity meters, all sorts of devices around us. This has been growing over the past twenty years, and it brings all sorts of fascinating problems along with it.

As we have joined everything up together, we find that security is no longer something that you can do by fiat. Back in the old days, thirty years ago for example, I was working for Barclays Bank looking after the security of things like cash machines, and if you had a problem it could be resolved by going to the lowest common manager. In a bureaucratic way, things could be sorted by policy. But by the late 1990s this wasn’t the case anymore. All of a sudden you had everything being joined up through the World Wide Web and other Internet protocols, and suddenly the level of security that you got in a system was a function of the self-interested behavior of thousands or even millions of individuals.

This is something that I find truly fascinating. We’ve got artifacts such as the world payment system to study, where you've got billions of cards in issue, millions of merchants, tens of thousands of banks, and a whole bunch of different protocols. Plus, you’ve got a lot of greedy people who, even if they aren’t downright criminal, are trying to maximize their own welfare at the expense of everybody else. Realizing this in the late ’90s, we saw that we had to get economics on board. One of the phase changes, if you like, was that we started embracing social science. We did that not because it was a trendy thing to do to get grants to do multidisciplinary stuff, but because it was absolutely necessary. It became clear that to build decent systems, you had to understand game theory in addition to the cryptography, algorithms, and protocols that you used.

That came out of a collaboration with Hal Varian at Berkeley, who is now the chief economist at Google. In fact, across the tech industry you see that an understanding of network economics is now seen as a prerequisite for business. We now teach it to our undergraduates as well. If they’re going to have any idea about whether their start-up has got any chance whatsoever, or whether the firm that they’re thinking of joining might be around in five years’ time, then it’s useful to know these things.
It’s also important from the point of view of figuring out how you protect stuff. Although a security failure may be due to someone using the wrong type of access control mechanism or a weak cipher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms it’s simple and straightforward, but it’s often much more complicated when we start looking at how things fail in real life.

In the payment system, for example, you’ve got banks that issue cards to customers—issuing banks—and you’ve got acquiring banks, which are banks that buy in transactions from merchants and give them merchant terminals. If an acquiring bank gives its merchants cheaper terminals to save money, there may be more fraud, but that fraud falls on the card-issuing banks. So you can end up with some quite unsatisfactory outcomes where there’s not much option but for a government to step in and regulate. Otherwise, you end up getting levels of fraud that are way higher than would be economically ideal.
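To make the incentive problem concrete, here is a toy calculation in Python with invented numbers (mine, not Anderson's). The acquiring bank only sees its hardware bill, while fraud losses land on the issuers, so the privately rational choice and the cheapest choice for the system as a whole come apart:

    # Toy illustration (made-up numbers) of the incentive problem described above:
    # the acquiring bank chooses terminal quality, but fraud losses fall on issuers.

    def acquirer_cost(terminal_cost: float) -> float:
        """What the acquiring bank pays: just the hardware it buys for merchants."""
        return terminal_cost

    def system_cost(terminal_cost: float, fraud_loss: float) -> float:
        """What the whole payment system pays: hardware plus fraud borne by issuers."""
        return terminal_cost + fraud_loss

    # Hypothetical options: cheap terminals leak more fraud, secure ones less.
    options = {
        "cheap terminals":  {"terminal_cost": 100, "fraud_loss": 500},
        "secure terminals": {"terminal_cost": 300, "fraud_loss": 50},
    }

    best_for_acquirer = min(options, key=lambda k: acquirer_cost(options[k]["terminal_cost"]))
    best_for_system = min(options, key=lambda k: system_cost(**options[k]))

    print(best_for_acquirer)  # "cheap terminals"  -- rational for the acquirer alone
    print(best_for_system)    # "secure terminals" -- cheaper for the system as a whole

With these numbers the acquirer picks the cheap terminals every time, which is exactly the gap a regulator ends up having to close.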

The next thing that’s happened is that over the past ten years or so, we’ve begun to realize that as systems became tougher and more difficult to penetrate technically, the bad guys have been turning to the users. The people who use systems tend to have relatively little say in them because they are a dispersed interest. And in the case of modern systems funded by advertising, they’re not even the customer, they’re the product.

When you look at systems like Facebook, all the hints and nudges that the website gives you are towards sharing your data so it can be sold to the advertisers. They’re all towards making you feel that you’re in a much safer and warmer place than you actually are. Under those circumstances, it’s entirely understandable that people end up sharing information in ways that they later regret and which end up being exploited. People learn over time, and you end up with a tussle between Facebook and its users whereby Facebook changes the privacy settings every few years to opt everybody back into advertising, people protest, and they opt out again. This doesn’t seem to have any stable equilibrium.

Meanwhile, in society at large, what we have seen over the past fifteen years is that crime has gone online. This has been particularly controversial in the UK. Back in 2005, the then Labour government struck a deal with the banks and the police to the effect that fraud would be reported to the banks first and to the police afterwards. They did this quite cynically in order to massage down the fraud figures. The banks went along with it because they ended up getting control of the fraud investigations that were done, and the police were happy to have less work for their desk officers to do.

For a decade, chief constables and government ministers were claiming that “Crime is falling, we’re doing a great job”. Some dissident criminologists started to say, “Hang on a minute. Crime isn’t actually falling, it’s just going online like everything else.” A year and a half ago, the government started publishing honest statistics for the first time in a decade. They found, to their disquiet, that online and electronic crime is now several times the rate of the traditional variety. In fact, this year in Britain we expect about one million households will suffer a traditional property crime like burglary or car theft, and somewhere between three and four million—probably nearer four million—will suffer some kind of fraud, or scam, or abuse, almost all of which are now online or electronic.
From the point of view of the police force, we got policy wrong. The typical police force—our Cambridgeshire constabulary, for example—has one guy spending most of his time on cybercrime. That’s it. When we find that there’s an accommodation scam in Cambridge targeting new students, for example, it’s difficult to get anything done because the scammers are overseas, and those cases have to be referred to police units in London who have other things to do. Nothing joins up and, as a result, we end up with no enforcement on cybercrime, except for a few headline crimes that really annoy ministers.

We’ve got a big broken area of policy that’s tied to technology and also to old management structures that just don’t work. In a circumstance like this, there are two options for someone like me, a mathematician who became a computer scientist and an engineer. You can either retreat into a technical ghetto and say, “We will concentrate on developing better tools for X, Y, and Z,” or you can engage with the broader policy debate and start saying, “Let’s collect the evidence and show what’s being done wrong so we can figure out ways of fixing it.”

Over the years I found myself changing from a mathematician into a hardware engineer, into an economist, into a psychologist. Now, I'm becoming somebody involved with criminology, policy, and law enforcement. That is something that I find refreshing. Before I became an academic, in the first dozen years of my working life, I would change jobs every year or three just so I kept moving and didn’t get bored. Since I’ve become an academic, I’ve been doing a different job every two or three years as the subject itself has changed. The things that we’re worried about, the kind of systems that are being hacked, have themselves also changed. And there’s no sign of this letting up anytime soon.
How did I end up becoming involved in advocacy? First of all, there’s the cultural background in that Cambridge has long been a haven for dissidents and heretics. The Puritans came out of Cambridge after our Erasmus translated the New Testament and “laid the egg that Luther hatched”. Then there was Newton. More recently, there have been people such as James Clerk Maxwell, of course, and Charles Darwin. So we are proud of our ability to shake things up, to destroy whole scientific disciplines and whole religions and replace them with something that’s better.

In my particular case, the spur was the crypto wars of the 1990s. Shortly after he took office in 1993, Bill Clinton was pitched by the National Security Agency on the idea of key escrow. The idea was that America should use its legislative and other might to see to it that all the cryptographic keys in the world were available to the NSA and its fellow agencies so that everything encrypted could be spied on. This drew absolute outrage from researchers in cryptography and security and also from the whole tech industry. At the time, people were starting to gear up for what became the dot-com boom. We were starting to get more and more people coming online. If you don’t have cryptography to protect people’s privacy and to protect their financial transactions, then how can you build the platform of trust on which the world we now live in ends up being built?

A whole bunch of us who were doing research in cryptography got engaged in giving talks, lobbying the government, and pointing out that proposals to seize all our cryptographic keys would have very bad effects on business. This worked its way out in different ways in different countries. Here in Britain we had tussles with the Blair government, which started off being against key escrow, but was then rapidly persuaded by Al Gore to get on board the American bandwagon. We had to push back on that.
Eventually, we got what’s now the Regulation of Investigatory Powers Act. In the process, I was involved in starting an NGO called the Foundation for Information Policy Research and, later on, when it became clear that this was a European-scale issue as well, European Digital Rights, which was set up in 2002 or 2003.

Europe’s contribution to ending the crypto wars came in the late 1990s, when the European Union adopted the Electronic Signature Directive, which said that you could get a presumption of validity for electronic signatures, provided that the signing key wasn’t known to a third party. If you shared your key with the NSA or with GCHQ, as these agencies wanted, you wouldn’t get this special legal seal of approval for the transactions that you made, whether it was to buy your lunch or to sell your house. That was one of the things that ended the first crypto war.
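The legal point rests on a basic property of digital signatures: a signature only proves who approved a document if nobody else could have produced it. Here is a minimal sketch, using the Ed25519 API from the third-party Python "cryptography" package purely as an illustration (the Directive mandates no particular algorithm, and this is my example, not Anderson's):

    # Illustrative only: a signature binds the signer precisely because the
    # private key has never been shared with anyone else.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # stays with the signer alone
    public_key = private_key.public_key()        # anyone may hold this

    contract = b"I agree to sell my house for 300,000 GBP"
    signature = private_key.sign(contract)

    try:
        public_key.verify(signature, contract)   # raises InvalidSignature on failure
        print("Signature verifies: only a holder of the private key could have made it.")
    except InvalidSignature:
        print("Signature does not verify.")

    # If a copy of the private key were escrowed with an agency, a valid signature
    # would no longer prove that the named signer, rather than the key's other
    # holder, approved the contract -- which is the point the Directive leaned on.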

Following on from that, other issues came along: issues concerning copyright, privacy, and data protection. I got particularly involved in issues around medical records—whether they can be kept confidential in an age where everything becomes electronic, where records eventually migrate to cloud services, and where you also have pervasive genomics. This is something I’ve worked on, off and on, for twenty years.

In my case, working with real problems with real customers—and in the case of medicine, I was advising the BMA on safety and privacy for a while—puts things in perspective in a way that is sometimes hard if you’re just looking at the maths in front of a blackboard. It became clear looking at medical privacy that it’s not just the encryption of the content that matters, it’s also the metadata—who spoke to whom when. Obviously, if someone is exchanging encrypted emails with a psychiatrist, or with an HIV doctor, or with a physiotherapist, then that says something about them even if those emails themselves cannot be read.

So we started looking at the bigger picture. We started looking at things like anonymity and plausible deniability. And that, of course, is something that people in many walks of life actually want. They want to give advice without it being relied on by third parties.

Out of these political collisions and related engineering assignments, we began to get a much richer and more nuanced view of what information security is actually about. That was hugely valuable. Becoming involved in activism was something that paid off big time. Even though people like my dad will say, “No, don’t do that. You’ll make enemies,” it turned out in the end to have been not just the right thing to do, but also the right thing from the point of view of doing the research.
~ ~ ~ ~
Computing is different from physics in that physics is about studying the world because it’s there; computer science is about studying artifacts of technology, things that have been made by the computer industry and the software industry. If you work in computing, it’s not prudent to ignore the industry or to pretend that it doesn’t exist.

There’s a long Cambridge tradition of working with leading firms. The late Sir Maurice Wilkes, who restarted the lab after the war, consulted for Lyons and then eventually for IBM. My own thesis advisor, Roger Needham, set up Microsoft Research in Europe after he retired. I’ve worked for companies as diverse as IBM and Google, and I’ve consulted for the likes of Microsoft, and Intel, and Samsung, and National Panasonic.

This is good stuff because it keeps you up-to-date with what people’s real concerns are. It gets you involved in making real products. And as an engineer, I feel a glow of pride when I see my stuff out there in the street being used. Six years ago, I took some sabbatical time and worked at Google, where the bulk of my effort went into what’s now Android Pay. That’s the mechanism whereby you can pay using your Android phone to get a ride on the tube or to buy a coffee in a coffee bar.
Twenty-five years ago, in fact, I worked on a project where we were designing a specification for prepayment electricity meters. That may be the thing I’ve done that’s had the most impact, because there are now over 400 million meters worldwide using this specification. We enabled, for example, Nelson Mandela to make good on his election promise to electrify two million homes in South Africa after he got elected in 1994.

More recently, when I went to Nairobi a few months ago, I found that they’re just installing meters of our type. And now that they’re all out of patent, the Chinese manufacturers are stamping these out at ridiculously low prices. Everybody’s using them. That’s an example of how cryptographic technology can be a real enabler for development. If you’ve got people who don’t even have addresses, let alone credit ratings, how do you sell them energy? Well, that’s easy. You design a meter which will dispense electricity when you type in a twenty-digit magic number. The cryptography that makes that work is what I worked on. You can get your twenty-digit magic number if you’re in downtown Johannesburg by going up to a cash machine and getting it printed out on a slip and your account debited. If you’re in rural Kenya, you use mobile money and you get your twenty-digit number on your mobile phone. It really is a flexible and transportable technology, which is an example of the good that you can do with cryptographic mechanisms.
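As a rough sketch of the idea, and emphatically not the actual specification Anderson's team wrote, a vending system and a meter that share a secret key can turn a purchase into a twenty-digit number using a message authentication code, which the meter recomputes before crediting the energy. All names and parameters here are illustrative:

    # Minimal sketch of the *idea* behind a prepayment token -- not the real
    # prepayment-meter specification. Assumes each meter shares a secret key with
    # the vending system; the token authenticates "credit this meter with N kWh".

    import hmac, hashlib

    def make_token(meter_key: bytes, meter_id: str, kwh: int, serial: int) -> str:
        """Derive a 20-digit number from a MAC over the purchase details."""
        msg = f"{meter_id}|{kwh}|{serial}".encode()
        mac = hmac.new(meter_key, msg, hashlib.sha256).digest()
        return str(int.from_bytes(mac[:9], "big") % 10**20).zfill(20)

    def meter_accepts(meter_key: bytes, meter_id: str, kwh: int, serial: int, token: str) -> bool:
        """The meter recomputes the token; the serial number stops a token being replayed."""
        return hmac.compare_digest(make_token(meter_key, meter_id, kwh, serial), token)

    key = b"per-meter secret installed at manufacture"
    t = make_token(key, meter_id="METER-0042", kwh=50, serial=1)
    print(t)                                           # the 20-digit "magic number"
    print(meter_accepts(key, "METER-0042", 50, 1, t))  # True

The point of the construction is that the buyer never needs an address, an account, or even a data connection at the meter: the twenty digits themselves carry the authorization.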
~ ~ ~ ~
If computer science is about anything at its core, it’s about complexity. It’s relatively straightforward to write short programs that do simple things, but when you start writing long, complex programs that do dozens of things for hundreds of people, then the things you’re trying to do start interacting, and the people that you’re serving start interacting; the whole thing becomes less predictable, less manageable, and more troublesome. Even when you start developing software projects that involve more than, say, a half-dozen people for a month or so, then the complexity of interaction between the engineers who are building the thing starts becoming a limiting factor.

We've made enormous strides in the past forty years in learning how to cope with complexity of various kinds at various levels. But there’s a feedback loop happening here. You see, forty years ago it was the case that perhaps 30 percent of all big software projects failed. What we considered a big software project then would nowadays be considered a term project for a half-dozen students. But we’re still having about 30 percent of big projects failing. It’s just that we build much bigger, better disasters now because we have much more sophisticated management tools....MUCH MORE