Chapter 10

Biometrics and The Future

Abstract

This chapter discusses biometrics, and how fingerprint scanners, eye scanners and facial recognition technology are adding to the ways in which citizens can be identified and monitored without their consent. It also summarises what the book has covered, draws conclusions and makes recommendations to help alleviate some of the privacy concerns written about in both this and previous chapters.

Keywords

Biometrics
Law
Privacy
EU
ICO
Google
Facebook
Twitter
Internet
Security
Government
Acxiom
Data

Imagine your personal data as a chest full of treasure: gems, jewellery, gold coins and whatever else floats your boat. Now imagine that you keep this treasure in a well-fortified castle, high up on a hill surrounded by thick walls. So far so good. Now imagine the door. Tall and sturdy, made of solid oak strengthened by thick iron supports. The lock, however, is a flimsy little thing made of plastic that breaks when you give the door a decent shove. That’s about how effective passwords are at protecting your data.
In several chapters in this book we’ve discussed the inadequacies of the password as a means of authentication and of protecting sensitive information. One of the main reasons for this is that humans are just so very bad at coming up with hard-to-guess passwords. Security applications vendor SplashData compiles a list each year of the most common passwords in use, and the result is fairly depressing reading for anyone with an interest in security (and indeed anyone who values imagination). Here are the top five from this particular hall of shame. If you see a password in this list that you use yourself for anything you actually care about, then you’d be well advised to change it immediately!
1. 123456
2. password
3. 12345
4. 12345678
5. qwerty
If this list were your sole piece of insight into the human race, you’d assume the species comprises a bunch of unimaginative, witless morons who probably deserve to have their data stolen.
And it’s not just consumers who are guilty of truly world-class uninspired password selection either. In June 2014 an ATM belonging to the Bank of Montreal was successfully hacked by two ninth-grade boys. That’s two fourteen-year-olds outwitting the combined minds of a major bank’s security staff.
Matthew Hewlett and Caleb Turon discovered an old ATM operator’s manual online that showed how to get into the cash machine’s admin mode. During their school lunch hour one day, they went to an ATM belonging to the Bank of Montreal to try it out.
“We thought it would be fun to try it, but we were not expecting it to work,” Hewlett told local newspaper the Winnipeg Sun at the time. “When it did, it asked for a password.”
Expecting little, Hewlett and Turon tried a random guess at the six-digit password, using what they knew to be a common choice. Although the newspaper doesn’t report which password they tried, from what we know, it was likely to have been ‘123456’, since the more popular option ‘password’ has eight letters and is therefore too long. It worked, and they were in. The boys then immediately went into the bank itself to let staff know that their ATM security was less sophisticated than what many people employ on their laptops.
But the bank’s staff didn’t believe them, telling the boys that what they had done “wasn’t really possible.” So Hewlett and Turon went back out to the ATM, changed its surcharge setting and amended its welcome screen to read ‘Go away. This ATM has been hacked.’
This bought them an interview with the branch manager, and a note for their school explaining that they were late back from lunch because they had been assisting the bank with its security.
At this point, most people will be simultaneously pleased that the vulnerability was discovered by two enterprising and ethical teens, rather than a hacker out to make a quick profit, and also hopeful that their own bank doesn’t require two fourteen-year-olds to tell it how to secure itself properly.
One method of measuring password strength is by using the concept of ‘information entropy’. This scale is measured in ‘bits’, and doubles for each bit you add to it. So a password with six bits of strength would need two to the power of six (that’s 64, for those of you without easy access to a calculator) guesses before every possible combination of characters was exhausted – cracking a password by attempting all possible sequences is known as a ‘brute force’ attack.
So for each additional bit of entropy, a password’s strength doubles, and every extra character adds several bits, multiplying the number of possible combinations by the size of the character set. However, probability dictates that on average a brute force attacker will only need to try half of the possible combinations in order to find the correct one. And in fact your password is likely to require even fewer guesses, since humans are so bad at generating truly random passwords. Even if you manage to avoid the options in the hall of shame, a study by Microsoft in 2007 which analyzed over three million eight-character passwords revealed that the letter ‘e’ was used over 1.5 million times, whilst ‘f’ was used only 250,000 times. If the passwords had been truly random, each letter would have been selected about 900,000 times. The study also found that people were far more likely to use the number ‘1’ than other numbers, and favoured the letters ‘a’, ‘e’, ‘o’ and ‘r’.
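To make that arithmetic concrete, here is a minimal sketch in Python (the character-set sizes are illustrative assumptions, not figures from any study) showing how entropy grows with password length and alphabet size, and what that means for a brute force attacker.

import math

# Entropy of a truly random password: length * log2(size of the character set).
def entropy_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

# Illustrative combinations: digits only, lowercase letters, mixed case plus digits,
# and the full set of printable ASCII characters.
for length, alphabet in [(6, 10), (8, 26), (8, 62), (12, 94)]:
    bits = entropy_bits(length, alphabet)
    combinations = 2 ** bits              # every possible password of that shape
    print(f"{length} characters from a {alphabet}-symbol set: "
          f"{bits:.1f} bits, roughly {combinations:.1e} combinations "
          f"(an attacker needs about half that many guesses on average)")

Note that these figures assume genuinely random choices; as the Microsoft study above shows, real passwords are far more predictable, so the effective entropy is lower still.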
And an examination of the results of a phishing attack on users of early social network MySpace in 2006, which revealed 34,000 passwords, showed that only 8.3 per cent of people bothered to come up with mixed case passwords, or to use numbers or symbols.
And as if all of that doesn’t make it easy enough for hackers, anyone with access to your Facebook page and other social media probably knows your date and place of birth, where you live, your hobbies and the corresponding information relating to your immediate family too. Those details, coupled with a small measure of insight into human psychology, will usually offer up your password in short order. And if not, there’s always the brute force option, or you can just google ‘password hacks’ and find something in the region of 13.9 million results, including thousands of how-to guides.

Your Body as Your Password

We can agree, then, that passwords aren’t great. So what are the alternatives? One which has been gaining popularity in recent years is biometric technology, where you literally are the password – or more specifically, part of you. It could be a retinal scanner at an airport or fingerprint recognition on your smartphone; in either case it’s considerably more secure than ‘123456’.
However, whilst the development undoubtedly has much to offer in terms of security, like all new technologies it is not without its controversies and risks.
The police in the US have been using biometric data for many years, in the form of ID cards and DNA fingerprinting for instance. On top of those techniques, since 2011 they’ve had the Mobile Offender Recognition and Information System (MORIS), a device which can plug into an iPhone, and is able to verify fingerprints and even irises. Meanwhile, the FBI announced that its ‘Next Generation Identification System’ had reached “full operational capability” in late 2014. This system contains data on fingerprints, palm prints, iris scans, voice data and photographs of faces.
And researchers at Carnegie Mellon University are in the final stages of developing a new camera that’s able to make rapid, high resolution iris scans of every individual within a crowd at a distance of ten meters.
Similar things have been happening in the UK. The British police have begun using facial recognition software to quickly scan through their databases of criminals and suspects. But not everyone in these databases is a criminal; some are people who have been brought into a police station for questioning, but never actually charged with a crime.
Alan Miller MP, chair of the UK Parliament’s science and technology committee, warned the government of the risks.
“As we struggle to remember ever more passwords and pin numbers in everyday life, the potential benefits of using biometric technologies to verify identity are obvious. However, biometrics also introduces risks and raises important ethical and legal questions relating to privacy and autonomy.
“We are not against the police using biometric technologies like facial recognition software to combat crime and terrorism. But we were alarmed to discover that the police have begun uploading custody photographs of people to the Police National Database and using facial recognition software without any regulatory oversight—some of the people had not even been charged.”
The Information Commissioner’s Office, the UK’s privacy regulator, was asked to adjudicate, but ruled that no current legislation covers the use of photographs in this way. In effect, it’s a legal black hole and therefore the police’s activities are entirely within the law.
The UK even has a ‘Biometrics Commissioner’, who is charged with reviewing police retention and use of DNA samples, DNA profiles and fingerprints. The government website which describes his activities says the Biometrics Commissioner is independent, which one assumes means he doesn’t answer to the government. However, it also admits that he works “with the Home Office”, the central government department which administers affairs relating to the UK, so quite how independent he is we will only discover when he is called upon to rule in a matter relating to government handling of biometric data.
The fact that he has no remit over police use of photographs didn’t escape the notice of the Parliamentary science and technology committee, who recommended in December 2014 that the statutory responsibilities of the Biometrics Commissioner be extended to cover the police use of “facial images”.
Miller expressed his disappointment that the UK government has still failed to provide any leadership or guidance around biometrics, despite pledging to do so over two years ago.
“Management of both the risks and benefits of biometrics should have been at the core of the Government’s joint forensics and biometrics strategy. In 2013, my Committee was told by the Government to expect the publication of a strategy by the end of the year. We were therefore dismayed to find that, in 2015, there is still no Government strategy, no consensus on what it should include, and no expectation that it will be published in this Parliament.”

So What are the Risks?

This book has dealt extensively with government surveillance. It’s pervasive, invasive, and it’s not going away any time soon. But couple this obsession with surveillance together with biometrics, and the potential for privacy violations increases exponentially.
For example, the city of New York operates a network of around 3,000 cameras called the ‘Domain Awareness System’, essentially a CCTV network much the same as that used in London and many other cities. If a crime is committed and the police know roughly where and when it happened, they can scan through the relevant recording histories to find it.
But what if systems such as this were equipped with facial recognition technology? Anyone sufficiently motivated would be able very simply to track you throughout your daily routine.
“A person who lives and works in lower Manhattan would be under constant surveillance,” Jennifer Lynch, an attorney at the Electronic Frontier Foundation has been widely quoted as saying.
And this threat is fast becoming a reality. The Department of Homeland Security is working on a five billion dollar project to develop what it calls the Biometric Optical Surveillance System (which becomes no less disturbing when you refer to it by its acronym: BOSS). This system aims to be able to recognize people (and it is able to do so because organizations like the NSA have been harvesting people’s photographs for years and building vast databases) with 90 per cent certainty at a range of 100 meters, and it has been predicted to be operational by 2018.
Things become more worrying still once your DNA profile gets digitized. For one thing, various commercial bodies including insurers will want to get hold of the data to scan your profile for risks and revenue-generating opportunities (‘Hi there, we’ve noticed that you have a genetic predisposition towards colon trouble, why not try our new herbal range of teas, proven to ease such complaints in over 80 per cent of cases’ is a fake, yet disturbingly believable, example of what could happen). Worse still, what if some government agency one day purports to have found a genetic sequence indicating a propensity towards crime?
Alternatively, what happens when a malicious party appropriates your genetic code? You can change your password, or the locks on your front door, but your DNA sequence?

The Future of Biometrics

Miller’s committee released a report which identified three future trends in the ways biometrics will be used. First was the expansion of what it termed “unsupervised” biometric systems, like fingerprint authentication on your smartphone; then the proliferation of biometric technologies which could be used to identify individuals without their consent or knowledge; and finally the linking of biometric data with other types of big data as part of the massive individual profiling efforts discussed at length in earlier chapters.

Mobile Biometrics

The first trend – mobile biometrics – is arguably where the area is currently seeing the most explosive growth. Submitting evidence to the committee, Dr Richard Guest, from the University of Kent, rated consumer-level biometrics, like the fingerprint scanner on Apple’s iPhone 6, as having “something of a gimmick value”. But actually it’s rather more than that. Sticking with the iPhone 6 (though other models, such as the Samsung Galaxy S6, also include biometric authentication), it enables financial transactions via its ‘Apple Pay’ system, which is authenticated via its biometric scanner. Similarly, Barclays Bank has announced that it will be rolling out a biometric reader in 2015 to replace the current PIN system for its corporate clients, to allow them to access their accounts online.
Biometric technology on mobile devices is still a relatively new concept, with only the latest smartphone models from the major manufacturers being fitted with biometric readers, and the practical applications at the time of writing are few. However, when Apple first released the iPhone in June 2007 there were just a handful of apps available to users (admittedly largely because Apple was initially reluctant to allow third parties to develop software for the platform). Seven years later, by June 2014, there were over 1.2 million apps, and that figure is growing daily. The “unsupervised” possibilities of biometric technology on mobile devices are likely to be quickly exploited in ways its inventors could barely have imagined.
It has also, as of April 2015, already been hacked. Fortunately for users of the Samsung Galaxy S5 it was hacked by security researchers rather than a group with more nefarious aims (some would argue that some security researchers, with their habit of selling any flaws they discover to the highest bidder, whether that’s law enforcement, a hacking group or a shady government department, are nefarious enough by themselves). What they discovered was that a flaw in Android, the phone’s operating system, allows hackers to take copies of fingerprints used to unlock the device. They concluded that other Android-based phones could be similarly vulnerable.
This is especially alarming as fingerprints are set to become increasingly popular over the next few years as a way of authenticating financial transactions in particular, via the Apple Pay system and a similar offering from PayPal.
This isn’t even the first time that a phone’s fingerprint scanner has been beaten by hackers, although it is the first way that has been found so far to steal biometric data from a mobile device. In a rather more prosaic hack in 2013, a German group, the ‘Chaos Computer Club’, used a photograph of a fingerprint left on a glass surface to fabricate a fake finger that was successfully used to unlock a phone.
Some could be forgiven at this point for thinking that perhaps passwords aren’t so bad after all, but what this is really an argument for is two-factor authentication. Once hackers have to build a prosthetic finger AND guess your password, they’re going to have to really want to get at your data before they go to those lengths.
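For readers wondering what that second factor usually looks like under the hood, the sketch below is a minimal, illustrative Python implementation of a time-based one-time password (TOTP) of the sort produced by authenticator apps; the shared secret shown is a well-known documentation example, not a real credential.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # current 30-second window
    message = struct.pack(">Q", counter)                # counter as 8 big-endian bytes
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (last nibble)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The service computes the same code from its own copy of the secret and compares;
# a stolen password alone is useless without the device that holds the secret.
print(totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and depends on a secret that never travels over the network, an attacker needs both the password and the device holding the secret: exactly the property described above.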

Clandestine Identification of Individuals

Biometric technologies like facial recognition systems are able to identify individuals without their knowledge. In November 2013 supermarket giant Tesco announced that it would be installing screens positioned by payment tills which scan its customers’ faces, then display targeted advertising to them. The cameras are able to work out customers’ age and gender, and the algorithm behind it all also takes into account time and date, and monitors customer purchases.
Simon Sugar, chief executive of Amscreen, the firm behind the technology, said: “It is time for a step-change in advertising – brands deserve to know not just an estimation of how many eyeballs are viewing their adverts, but who they are too.”
His use of language is interesting: brands “deserve” to know who’s viewing adverts. Quite why they’re entitled to this information isn’t explained.
“Yes it’s like something out of [2002 film] Minority Report, but this could change the face of British retail and our plans are to expand the screens into as many supermarkets as possible,” Sugar added.
Privacy campaign group Big Brother Watch described the potential for abuse as “chilling”.
“Should we really be increasing the amount of surveillance we’re under so some companies can sell more advertising?” it asked on its blog. “Secondly, the technology isn’t going to stay the same and be used in the same way,” it continued.
This is yet another step in the evolution of supermarkets’ desire to know everything about their customers. But how can they work out anything useful from a quick scan of our faces? Surely there must be a master database showing what we look like and who we are so these till scanners have something to find a match with – where does that information come from? The answer is, we give it away ourselves in our social media accounts. It’s trivial technologically to scrape a database together from Facebook, Twitter and LinkedIn, take people’s mugshots, names, and whatever else they make easily accessible (usually almost everything), then use facial recognition software to search for a match to the data streaming back from the in-shop scanner.
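As a purely conceptual illustration of that matching step, the sketch below compares a face ‘embedding’ from a till camera against a small gallery; everything here (the names, the four-dimensional vectors, the threshold) is invented for illustration, since real systems use much larger embeddings produced by trained neural networks.

import numpy as np

# Hypothetical gallery scraped from public profiles: name -> face embedding.
gallery = {
    "A. Example": np.array([0.12, 0.80, 0.31, 0.44]),
    "B. Example": np.array([0.70, 0.05, 0.62, 0.22]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.9):
    # Return the best-scoring name, or None if nothing is similar enough.
    name, score = max(((n, cosine(probe, v)) for n, v in gallery.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None

probe = np.array([0.13, 0.79, 0.30, 0.45])   # embedding computed from the till camera frame
print(identify(probe))                        # matches "A. Example"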
“Given the number of CCTV cameras across Britain [and many other parts of the world] that could be adapted to use this technology, the potential to track people in real-time is huge,” argues Big Brother Watch.
Whilst there was a brief media outcry after Tesco made its announcement, and whilst Facebook removed its own facial recognition data (used to automatically identify its users on photographs uploaded to the network) under pressure from regulators in 2012, most consumers remain relatively unconcerned.
The pattern that’s repeated itself in the last couple of decades is one of privacy being eroded in increments. What seemed outrageous a few years ago – like Facebook posts defaulting to publicly visible where they had before been private – is now just expected.
“The fact shops feel they can scan you without your permission is a shocking indictment of how privacy is under attack in an unprecedented way,” continued Big Brother Watch. “Those who argue you have nothing to fear if you have nothing to hide may well think twice about the shops they visit, whether they seek sensitive medical and legal advice or what streets they walk down.
“People accept a degree of surveillance for law enforcement purposes, but these systems are solely motivated to watch us to collect marketing data. People would never accept the police keeping a real-time log of which shops we go in, but this technology could do just that. It is only a few steps short of a surveillance state by the shop door,” it concluded.
This covert identification of individuals, as mentioned earlier, is also used by police forces. A pilot project known as the ‘Neoface system’ being run by Leicestershire Constabulary uses a database of 92,000 facial images, which largely come from CCTV and police cameras. Commenting on the project in its evidence to the Parliamentary committee, the ICO explained that police biometric identification goes well beyond facial recognition. “The surreptitious collection of information about individuals that they would not necessarily expect” could also come from “a fingerprint or genetic material left behind”, and not just from “facial recognition in live or recorded images,” it stated.

Linking Biometric Data

An earlier report from the science and technology committee entitled ‘Responsible Use of Data’ covers the UK government’s work with the Economic and Social Research Council’s Administrative Data Research Network to “facilitate access to, and linkage of, de-identified administrative data routinely collected by government departments and other public sector organizations.”
The UK government is especially keen on “joining the dots”, as a civil service blog from late 2014 calls it: linking disparate datasets together and coming up with a consensus on common identifiers, so that if you want to tell people that your data relates to the Empire State Building, you can do it in a commonly understood way that easily links to other data on the same subject.
“Our vision is that anyone should be able to discover and link together related sources over the web,” writes the Cabinet Office, effectively the UK government’s corporate HQ, in a blog post. “For example, DCLG [Department for Communities and Local Government] wants to develop smarter ways of joining-up disconnected data on housing, schools, parks, and retail facilities – empowering people to make more informed choices about where they want to live. We are doing this by publishing our data as Linked Data. These sources could be open data, linked over the public web, or could equally be private information shared in a more secure and protected environment,” it states.
All of which sounds perfectly noble and reasonable. But Professor Louise Amoore from Durham University, giving evidence to the committee, gave her view that the likely future trajectory was moving towards “the integration of biometric data” into a “much larger and rapidly growing array of digital big data” in ways that were “capable of producing profiles or behavioral maps of individuals and groups”. Amoore’s views were echoed by the British Standards Institution which predicted that the identification of individuals would “be possible using a wider range of non-traditional biometric data sets and… by combining data sets using ‘big data’ approaches”.
This is possible because there are currently no meaningful regulations in place to limit the collection and sharing of certain biometric data, including facial recognition.
Amoore went so far as to suggest that analytics could even use the linkages between biometric and other easily accessible types of data to understand and predict a person’s behaviour.
“[There are] analytics engines that can mine biometric data that is available on the internet, and link that to other forms of data,” stated Amoore. “That moves us more in the direction of indicating not just who someone is but suggesting that one might be able to infer someone’s intent from some of the biometric data.”
Dr Richard Guest from the University of Kent stated that the ‘Super-Identity Project’ (a trans-Atlantic project funded by the Engineering and Physical Sciences Research Council examining the concepts of identity in both the physical and cyber world) had proved that biometric data could be linked with “cyber activity and personality assessment” data in such a way that made it possible to obtain “unknown elements of identity from known elements”.
In other words you start with a photo of someone’s face, and quickly end up with their name, address, television viewing habits and favourite brand of cereal (and much, much more).
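To see how quickly ‘known elements’ lead to ‘unknown elements’ once datasets are joined, here is a minimal, entirely hypothetical Python sketch (all identifiers and values are invented) linking two separately collected datasets on a shared identifier.

import pandas as pd

# Dataset 1: output of a facial recognition match against scraped social media profiles.
faces = pd.DataFrame({
    "person_id": [101, 102],
    "name": ["A. Example", "B. Example"],
})

# Dataset 2: loyalty-card and viewing records collected by entirely different organisations.
habits = pd.DataFrame({
    "person_id": [101, 102],
    "favourite_cereal": ["Oat rings", "Bran flakes"],
    "tv_hours_per_week": [12, 31],
})

# One join on the common identifier turns a photograph match into a behavioural profile.
profile = faces.merge(habits, on="person_id")
print(profile)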

The Solution

So that’s all very well, but what can be done about it? We’ve explored the issues of big data in earlier chapters, but given that reams of our personal data have already been harvested, catalogued, packaged up and sold on, it’s very unlikely at this point that we’re going to convince data brokers and marketers to stop using it or delete it entirely. And given that all of this is already out there, how can we govern how it interacts with the biometric data which is now increasingly flooding into servers all over the world from our smartphones, street cameras and even shopping tills?
Big Brother Watch suggests that biometric data should fall under the same guidelines as the UK National DNA database (a system set up in 1995 which carries the genetic profiles of over six million people, with samples recovered from crime scenes and taken from suspects). Broadly, these guidelines dictate that DNA profiles of anyone convicted of an offence can be stored permanently, but those taken where no conviction follows can only be stored for up to six months.
Until recently, innocents’ DNA profiles could be legally stored for six years, but the Protection of Freedoms Act 2012, which came into force in the UK on 31st October 2013, dialed that back significantly. Since then, the National DNA Database Strategy Board stated in its annual report for 2014 that almost 8 million DNA samples had been destroyed in its efforts to comply with the new legislation.
Big Brother Watch compares the rules for the DNA database with the current system around biometric data. Currently biometric data stored by the government for ‘national security determinations’ can be kept for two years, but with the potential for indefinite renewal (which renders the initial two year time limit utterly meaningless).
“This is grossly excessive and judging from past cases of how anti-terrorism legislation has been applied it is far from certain that it will be limited to cases of credible threat to national security,” the group states on its blog.
Its other proposal is rather more general, and would help to safeguard all personal information.

Safeguarding Privacy in The Age of Biometrics and Big Data, and Other Impossible Tasks

Big Brother Watch recommended that the UK introduce custodial sentences for serious breaches of the Data Protection Act 1998. This would even be quite simple to achieve; under Section 77 of the Criminal Justice and Immigration Act 2008 a Secretary of State can implement a custodial sentence of up to two years for a serious breach of the Data Protection Act.
“No new primary legislation would be required and it would send a clear message that the government takes the security of personal information seriously,” the group argued.
The current law is defined by Section 55 of the Data Protection Act, which makes it generally unlawful for a person to “knowingly or recklessly without the consent of the data controller obtain or disclose personal data or the information contained in personal data, or procure the disclosure to another person of the information contained in personal data”.
But rather than jail time, the current penalty for committing an offence under Section 55 is a maximum £5,000 fine if the case is heard in a Magistrates Court and an unlimited fine for cases tried in a Crown Court.
The introduction of custodial sentences for this type of offence has reared its head many times in the UK. The last proposal came from Lord Marks of the UK’s House of Lords in late 2014.
“To put it bluntly, the threat of fines is frequently insufficient as a punishment,” Lord Marks said during the Lords debate. “There is a risk that payment of fines may be regarded and treated as no more than a necessary expense by unscrupulous publishers who act with intent to circumvent the Data Protection Act.”
However, the proposal failed. In fact, it was never even put to a vote in order to progress through to the House of Commons. Rather oddly, Lord Marks withdrew the proposal when other members of the House of Lords criticized his timing.

Conclusions

So what have we learnt so far? The key takeaway, which summarizes in just a few words what this book is attempting to say, is that the very concept of privacy is under threat from recent (and some not so recent) developments in technology. Several of the world’s largest and richest governments have been proven to have been snooping on their citizens, and indeed on anyone unfortunate enough to have their data pass through various breached servers, cables and entire networks. What Edward Snowden’s leaked trove of documents showed us was the dizzying scale of the espionage, and the almost total absence of governance and oversight around it. Where is the judicial review of security agency requests to examine data belonging to groups and individuals? Where are the rigorous governing bodies holding these agencies to account for breaching their own rules? Where are the high profile casualties – the senior heads rolling for their lack of respect for the privacy of their citizens? They are all lacking.
And the most frustrating part (at least for this commentator)? The fact that the news breaking out and becoming common knowledge has changed almost nothing. It’s still all going on, right under our noses.
But that is far from the full picture. Private firms are playing very much the same game, snooping, harvesting and sucking up as much data about us as they possibly can. Often this is with the intention of turning an indirect profit. Supermarkets want to know more about us than we know ourselves so they can convince us to spend more money in their stores by targeting us with very specific advertising. Diaper offers before we’ve even told our families that we might be expecting a new arrival, to cite one famous example. Media outfits want to profile us to within an inch of our lives so they can show their own advertisers the types of people visiting their pages, because sponsors will pay more for a certain number of the ‘right’ viewers, than a larger number of anonymous unknowns. And then there are other firms turning our data into a direct profit. Data brokers like Acxiom, who gather and store detailed profiles on almost every adult in the UK and US alone, then sell that data on in a business model worth hundreds of millions of dollars (and growing), absolutely none of which goes back to the people whose privacy has suffered in the collection of the data in the first place.
Most of us are also guilty of giving highly personal data away freely, with little or no care for the risks or consequences. Most people are aware that Google, and operators of other search engines, are not charitable organizations, and that since their services like web search are largely free, somewhere along the line there must be a catch. But that’s usually as far as the thinking goes, and so they are deemed to consent to the subsequent information pilfering. However, when directly shown that everything they type into a search engine, every website they visit and how long they spend there, and everything they write or even receive in an email is stored and analysed and used for profit, most people express shock and even outrage.
Personal data is given up with, if anything, even greater abandon to firms like Facebook, Twitter and other social media outfits. And this is far from low-value information; it comprises intimate details of our lives: where we live, work and shop, our daily routines, where, when and how we travel, what we buy, and even, in the form of photographs, what we and our children look like.
Even more intimate information is gathered up and stored by our hospitals and other care providers, but not exclusively to increase the quality of our healthcare. Both the US and UK are guilty of serious breaches of patient trust, with a profiteering approach to patient data that has quite rightly been met with scorn and disbelief on both sides of the Atlantic.
The narrative follows a familiar theme in the world of mobile, where free apps like Angry Birds are downloaded by hundreds of millions of people, the vast majority of whom will not read the terms and conditions, and will therefore be completely unaware that their mobile devices are scanned, and their contacts and sometimes even personal messages read and sold on to advertisers.
And even our cities themselves are spying on us, with an unimaginable number of sensors, cameras and other devices pinging our phones, and monitoring and tracking us as we go about our lives. The simple act of visiting your local store to buy a pint of milk could result in dozens of new additions to your data doubles, residing in cyber space and ever-evolving without your knowledge or consent. Here’s a brief list of some of the privacy violations this act could instigate:
1. Logged on CCTV on the street. Image scanned and matched with photograph scraped from social media.
2. Phone logged by smart sensor embedded in lamp post.
3. Travel card scanned accessing public transport.
4. Phone logged again by sensors in shop. MAC address matched with customer profile from previous visits. Route around shop monitored.
5. Purchase logged and added to customer history. Profile updated.
6. Face scanned at till. Identity matched with photo from database.
7. Profile updates and transaction information packaged and sold on to advertisers.
None of these actions improve our lives or enrich us to any great degree. It’s also worth noting that the above list is what might happen without the person in question deliberately interacting with a connected device in any way. Perform a web search to check on the store’s opening hours, or update your social network on the way and the list grows significantly.

So We Should All Go Live in a Cave?

So what can we actually do to improve things? The good news for Europeans is that the new data protection Regulation and Directive, which should hopefully creak into force some time in 2018 if the lawmakers manage to remember their purpose, will go some way towards helping the situation. Although final negotiations are ongoing at the time of writing, the new rules will force firms to treat data security more seriously, in part by increasing the financial penalties that regulators are able to impose upon them. However, this increase is tempered by the fact that those same regulators may find it harder to enforce data protection legislation given that their governance responsibilities (basically checking up on potentially dodgy firms) could see a huge increase in workload without a corresponding increase in their budgets.
So we can’t rely on the law to resolve the situation. And the industry is unlikely effectively to police itself. Although Acxiom has made a positive gesture by making some of the information it holds on US citizens accessible to them, in truth it’s such a small subset of the data it holds as to be meaningless, and it’s hard to escape from the cynical view that it’s more an attempt to deter regulators from imposing tighter controls than it is a genuine revolution in commercial responsibility.
Where does that leave us? The answer is, it leaves us in part needing to fend for ourselves. But maybe that’s not such a bad solution. We have shown alarming disregard for our online privacy and security, and no amount of legislation, nor even goodwill from corporations, is going to protect us when we can barely lift a finger to protect ourselves. There needs to be a cultural shift towards personal responsibility for data; only then will we see some progress in the battle to preserve privacy in the big data age. And that means people must stop using things like ‘password’ and ‘12345’ for their passwords, and instead start using technologies such as two-factor authentication for access to anything remotely important, which includes personal email, not just internet banking.
And this notion of responsibility extends to social media use too. Social networks need to evolve to provide clear indications of their default privacy settings, including notices about the potential dangers of uploading geo-tagged photos, and of telling everyone in the world that you’re off on holiday for a fortnight and your house is going to be unoccupied. This isn’t a plea for governments to become nannies, but rather a call to help consumers understand what’s really happening with their data. If people choose to share their every intimate detail with the wide world, then that should be permitted, but it should be done with a full understanding of the consequences. And that rule applies to everything described in this book. Consumers need to fully understand what they’re getting into; only then can they provide informed consent.
But these changes won’t be sufficient in isolation; governments and legislators aren’t off the hook. If individuals are solely responsible for protecting themselves, then privacy will become something only for the privileged few, those who know better and know how the system works.
Pressure groups like Liberty and Privacy International have shown that it is possible to incite change, with their February 2015 victory in the case to prove that some of GCHQ’s mass surveillance activities were unlawful. With privacy-first services like Ello being formed, some firms at least are now viewing privacy as a selling point, and that will help privacy to start being built into systems by default. These are small, but important steps.
But there is more that commercial and public bodies should be doing. Whilst we can never expect profiteering corporations to put individual privacy first, there are some basic rules which should be followed:
Privacy must be built into new systems and tools, not added as an afterthought. That means clearly identified options with reasonable explanations (no more Hamlet-length terms and conditions), and the possibility to opt out of all or parts of a service based on a thorough understanding of what will happen to private data at each step. Consumers should also have the means to check what has happened to their data later, together with a simple means to hold the organization accountable should they subsequently learn that promises have not been kept.
Organizations must be legally obligated to collect only the minimum data necessary for each specific task. That data must be held for the minimum duration necessary for that task, and the transfer of that data between different systems must also be kept to a minimum. Furthermore, access to that data should be restricted to as few parties as possible.
There needs to be a widespread understanding that individuals can very often be identified from anonymized and pseudonymized data. This means that such data should be treated as identifiable data, with the same safeguards. Aggregated data is the only truly anonymous type of data we have available today.
A system of ethics should be accepted and agreed as the baseline expectation for the ways in which private data will be treated. This includes thinking through what will happen in future to data, and what could possibly go wrong. “Monitor for unintended consequences and be prepared to act to set them right,” as Gartner’s Buytendijk said.
Finally, as Paul Sieghart concluded at the end of ‘Privacy and Computers’, we must accept that “No system of safeguards will ever be perfect.”
And that seems a fitting place to end. No system is ever going to be perfect. Data breaches and leaks will continue. Privacy intrusions will persist. But if we can change the accepted societal norms back to where they arguably used to be, where minimal data sharing is the default position and anything else comes with a big red flag attached to it explaining the situation, then we’ll be in a better, safer position in future.