Covid contact tracing apps are a complicated mess

[ Original @ Privacy International ]

As governments, Apple and Google compete in proximity tracing, Privacy International (PI) wrote a long read on building tech in times of crisis.

Key findings:

  • Contact tracing is an important emergency healthcare initiative, and is necessarily invasive.
  • Coronavirus apps do much more and much less than manual contact tracing.
  • There are dangerous tracing initiatives that regularly report on your activities.
  • There are interesting initiatives that may or may not work. These are somewhat privacy-aware, and can be either centralised or decentralised.
  • Apple and Google support decentralised proximity tracing, in privacy- and security-aware ways.
  • Those who build these systems need to quickly build and engender trust. So far they’ve been failing at this by working in secrecy.

Covid Apps are on their way to a phone near you. Is it another case of tech-solutionism or a key tool in our healthcare response to the pandemic? It’s fair to say that nobody quite knows just yet.

Privacy International has been tracking these apps since the early days. They have been monitoring Apple and Google closely, have been involved in the UK’s app process, their partners in Chile and Peru have been tracking their governments’ apps, and more.

Of course privacy concerns arise. But only a simplistic analysis would position this as privacy versus pandemic response. Any attempt to say that rights are in the way of an effective response to covid is hopelessly naive. As with the trillions spent on tech to combat terrorism, the reality is that the tech response to covid is a royal mess.

This is no surprise – it’s symptomatic of governments’ poor understanding of technology, and their hopes for an easy fix. Better technological systems can emerge when there’s careful scrutiny, but governments’ responses are being hamstrung by their pre-existing tendencies towards secrecy, tech-enabled authoritarianism, and austerity. Covid was petrol poured on those fires. And we’ve ended up with the tech we have.

But not everything is a disaster. Some tech solutions are horrible, some are pointless; often the implementation and context are everything. And to be very clear: we don’t know what will work.

This is a long read, because the topic of apps deserves nuance rather than knee-jerk reactions.

Here are the things this long read explores:

  • Testing and tracing are the foundations of an effective response.
  • It’s unclear if any of the data generated by mobile phones is of use, and it won’t be as reliable as manual contact tracing.
  • Making apps mandatory will increase inequality.
  • Centralised Bluetooth-based systems may not work as well on Apple and Android devices, i.e. the entire relevant smartphone market.
  • Apple and Google chose a decentralised architecture, a decision that makes it difficult to turn their apps into quarantine enforcement tools.

Our core warning is this: we must build crisis-era tech with the presumption that it will be used in a country with weak rights protections, and that it will ultimately be used against the people it’s designed to protect.

Any tech that relies on promises of good intent will be, in the end, as broken as those promises if the crisis continues unabated.

Why contact tracing is concerning

What’s being attempted with these apps is the accumulation of data on every person you’ve interacted with and possibly every location you’ve been to. This has never before been done on such a mass basis: no government in history has recorded it centrally, because it’s never been possible.

And yes, data brokers, social networks and telco operators know much of your business. But that’s why organisations like PI exist: to hold those actors to account and ensure they don’t exploit this data. It also helps that their data is often unreliable, and that laws seek to restrain their ability to mine data for profit.

No government has ever known as much about you as they may be about to.

Traditional contact tracing is necessary, hard, and human

When there’s an outbreak, manual contact tracing is invaluable at some stages of response. It’s necessarily invasive, and labour intensive. Trained professionals would try to understand and address your concerns and do some analysis of your locations and your interactions over the period of contagiousness.

It’s a tried-and-true methodology. The captured data is about identifiable people, but it’s limited to what you know and, generally, to what you’re willing to share. And its purpose is limited to responding to a specific health emergency.

Not every country is resorting to an app to complement manual tracing. New Zealand, for instance, is currently emphasising manual contact tracing as it emerges from lockdown. We’ve also seen analyses of China indicating that traditional, local action was integral to its response.

Finally, easy access to virus testing appears to be a key and necessary component of a coherent response. Otherwise false positives – people who believe they have the virus but do not – could generate a lot of unnecessary contact tracing and, in turn, people going into lockdown unnecessarily.

So now there’s an app for that: contact tracing apps

Could contact-tracing apps replace traditional manual contact tracing? No. Absolutely not, says nearly every public health expert.

In fact, we must start all discussions about apps from this foundation point: we don’t know if these apps will work or if they’re worth the effort. Any attempt to argue that they’re necessary invasions of privacy is far too premature. The results could be too noisy to be a useful surveillance tool or part of a reasonable health response.

But they may provide useful insight. Manual contact tracing is limited to who you know, remember, and are willing to share. These apps could potentially log every person – or rather, their device… but only if that device meets a specific set of criteria, as yours must too. And only if you choose to download the app, or are forced to. But wait, we’re getting ahead of ourselves.

Early solutions: try everything

Contact tracing apps come in many shapes and sizes. Early versions came in the form of apps that enforced quarantine.

China was an early mover on tech solutions. The various tools included an app that monitored locations, but there was also a lot of analysis of mobile telecoms data, with varying levels of success. Some apps used an algorithm that gave you a score you couldn’t question, and reported location data to the police.

South Korea also stretched the boundaries, with claims that it matched CCTV footage with other data sources to identify people for tracing. We look forward to seeing more analysis of how this was done across the board, as synchronising such vast data sets would be an extraordinarily complex exercise.

Ultimately, however, both countries have advanced testing capabilities.

Planned authoritarianism: grab everything

When you’re a government that lacks imagination for inventive solutions, the first resort is to grab as much data as you can, just like you’ve dreamed of doing for years.

The governments of Israel, Kenya, and Turkey, among others, have grabbed telco data to look at people’s interactions, without limiting collection to people who are unwell. These are governments that covet surveillance, so of course they’d use a crisis as an opportunity to expand their powers.

Israel’s government handed all this data to its security services to do the analysis. Israel’s courts and parliament have responded and made it absolutely clear this is an unconstitutional power grab – but they shouldn’t have had to.

Panicked authoritarianism: mandatory

Beyond the bulk data grabs that are hidden from the sight of individuals, there are apps. These require individuals to take action, and for some governments this was also an opportunity for authority and enforcement – raising serious concerns about inequality.

India is a key example. The Indian government made its app mandatory for everyone returning to work; some reports claim it’s also required in order to use public transport.

India’s response is surprising. The aspiration for a mandatory smartphone app in a country with immense inequality is startling. How can you make something mandatory if not everyone has a smartphone or can afford to use it in accordance with government requirements?

The bluntness of this mandatory approach only makes it more perplexing given the diversity of phones in India. Does the app work in the background, or does it burn through your battery? Does it even record interactions, or just make people think it works? And it’s unclear how you enforce a mandatory requirement, unless we endorse searching people’s phones.

Like China’s, India’s app uses location data through GPS; it also uses Bluetooth. But this raises the question of accuracy: which data source do you trust to automatically report on people’s interactions? GPS and telephone-network data may not be as useful as you imagine. And so we arrive at the apps in question today: Bluetooth Low Energy (BLE) proximity tracking.
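
To see why accuracy is such an open question, consider how BLE proximity is actually estimated: from received signal strength (RSSI), which varies with walls, bodies, pockets and phone models. A minimal sketch in Python, with illustrative constants of our own rather than any real app’s calibration:

    # Distance estimation from Bluetooth received signal strength (RSSI).
    # The constants are illustrative assumptions: real deployments calibrate
    # per phone model, and the estimates remain noisy.

    def estimate_distance_m(rssi_dbm: float,
                            rssi_at_1m_dbm: float = -59.0,
                            path_loss_exponent: float = 2.0) -> float:
        """Log-distance path-loss model: signal strength falls off with distance.

        rssi_at_1m_dbm is a per-device calibration value; the path-loss
        exponent is roughly 2.0 in free space and higher indoors.
        """
        return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

    def is_risky_contact(rssi_dbm: float, duration_s: float) -> bool:
        # A common epidemiological heuristic: within ~2 metres for 15+ minutes.
        return estimate_distance_m(rssi_dbm) <= 2.0 and duration_s >= 15 * 60

    print(estimate_distance_m(-59.0))   # ~1.0 m
    print(estimate_distance_m(-75.0))   # ~6.3 m: weaker signal, farther away

A 16 dB swing – which can be caused by nothing more than a phone moving from hand to pocket – is the difference between ‘next to you’ and ‘across the room’. That is why nobody can yet promise these measurements are reliable.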

Centralised vs decentralised voluntary Bluetooth systems

If your government has chosen to use only Bluetooth Low Energy proximity tracing, then things are starting from a relatively less problematic place. While we still don’t know whether it’s going to work, an authoritarian surveillance dystopia is harder to build on this technology; but we must still keep close watch.

These apps still give rise to larger societal issues for which we’re unprepared. For instance, will your employer allow you to quarantine because an app on your phone says so?

So much focus has gone instead into whether these apps are part of a centralised or decentralised system.

Good people disagree about whether centralised systems or decentralised systems are better.

  • Centralised systems generally keep proximity data locally on the phone until the person indicates that they are unwell, at which point the data is uploaded centrally. They are often built on the assumption that testing isn’t easily available, so central analysis is used to discern whether, and whom, to notify; this allows for more adaptation and learning, but the central store could become a data mine for exploitation.
  • Decentralised systems generally keep proximity data on devices, both before and after diagnosis and notification. They presume that access to tests isn’t a problem, so contact notifications are only sent out on a positive test result. (They could still feed data centrally if you’re required to enter data into the app manually.)

There are significant efficacy and equity differences that may arise depending on which system is chosen. The contrast is easiest to see in the sketch below.
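
Here is a minimal sketch of the two flows, using hypothetical names of our own rather than any real app’s architecture:

    # Hypothetical sketch of the two data flows; real systems differ in detail.

    class CentralisedApp:
        """Contact log stays local until you report symptoms; then it all goes
        to the server, which scores risk and decides whom to notify."""

        def __init__(self, server):
            self.server = server
            self.contact_log = []                  # identifiers seen nearby

        def on_contact(self, identifier: bytes):
            self.contact_log.append(identifier)

        def report_unwell(self):
            # Self-reported symptoms are enough to trigger the upload.
            self.server.upload_and_score(self.contact_log)

    class DecentralisedApp:
        """Contact log never leaves the phone; only an infected person's own
        broadcast keys are published, and matching happens locally."""

        def __init__(self, server):
            self.server = server
            self.contact_log = []                  # identifiers seen nearby

        def on_contact(self, identifier: bytes):
            self.contact_log.append(identifier)

        def report_positive_test(self, my_daily_keys):
            # Requires a confirmed test, and uploads keys, not the contact log.
            self.server.publish(my_daily_keys)

        def check_exposure(self) -> bool:
            # Real schemes publish compact daily keys and each phone re-derives
            # the rotating identifiers (see the rotation sketch further down);
            # for brevity we pretend the identifiers themselves are downloaded.
            published = set(self.server.fetch_published_identifiers())
            return any(c in published for c in self.contact_log)

The upload trigger is the crux: self-reported symptoms move the whole contact log in the centralised design, while the decentralised one moves only the patient’s own keys, and only after a verified test.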

Bluetooth LE is promiscuous – it can find and connect to many other Bluetooth devices. Generally phone operating systems limit its usage, because it creates risks for users – including being tracked by other people. In normal times no-one wants their local supermarket tracking their phone around the shop.
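
To make that concrete: anyone with a laptop can passively log these beacons. Below is a sketch using the Python bleak library. The service UUID is the one the Bluetooth spec assigns to Exposure Notification beacons, and we’re assuming the rolling identifier sits in the advertisement’s service data:

    # Passively log nearby exposure-notification beacons with `bleak`
    # (pip install bleak). One line per beacon heard: this is all a shop,
    # or a government, needs in order to log devices passing by.
    import asyncio
    from bleak import BleakScanner

    EN_SERVICE_UUID = "0000fd6f-0000-1000-8000-00805f9b34fb"

    def on_advertisement(device, advertisement_data):
        payload = advertisement_data.service_data.get(EN_SERVICE_UUID)
        if payload is not None:
            print(f"{device.address}  rolling identifier: {payload.hex()}")

    async def main():
        scanner = BleakScanner(on_advertisement)
        await scanner.start()
        await asyncio.sleep(30)            # listen for 30 seconds
        await scanner.stop()

    asyncio.run(main())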

Unfortunately for governments building these apps, tracking is precisely what they want to use Bluetooth for, so they spent March and April trying to hack away at those protections. Where they failed, their apps had to run in the foreground: your phone had to be unlocked, with the app open, for them to work.

The cuddlier side of the UK’s intelligence and global spying agency thinks it’s found a slight hack into iOS’s way of handling Bluetooth connections, so the UK app appears to work slightly better than many others – but it’s still alarmingly unclear. Our own tests, for instance, couldn’t replicate what others had found.

It’s also unclear what effect these apps have on battery life, and whether they work on older phones. It’s a version of the India problem, and it truly rears its ugly head if the app becomes necessary for access to testing, government services, or work.

In April, Apple and Google entered the fray, saying they would fix this problem in their operating systems by including a new interface (an ‘API’) for these apps to access Bluetooth LE more efficiently. But in an unprecedented move, they said they would only do this for decentralised infrastructure: data would remain on the device, both for proximity tracing and when you test positive and need to notify others. The central government authority doesn’t get the automatically generated data of your interactions with others.

Recognising the value of having more effective apps, Germany switched over to a decentralised solution. There are rumours that Australia will too.

The UK app, designed while the government was flailing and failing badly at testing, stores connection data on the device unless you report being unwell; it then uploads the information to a central repository, which decides whether to notify the devices you’ve interacted with.

The core decision is about whether or not to notify others. Centralised apps use some automated decision making to determine whether you’re truly at risk and then decide whether to notify the people, or phones, you’ve interacted with. The decentralised apps require an actual test result before sending those notifications.

Put another way: decentralised apps presume a country is doing widespread testing for covid; centralised apps presume there’s still testing chaos.

Another core decision was how to treat identifiers. These apps all have some way of identifying people’s phones – essentially giving them names, so that each phone they connect with has a log of who they met.

The Apple/Google system requires apps to change these identifiers every fifteen minutes. This limits function creep, making it harder to repurpose the data to track you for other reasons. For example, an app that didn’t change its identifiers frequently could help notify authorities whether you’ve interacted with other devices at all.

The UK app changes these identifiers on a daily basis, whereas the Indian app doesn’t use dynamic identifiers at all yet. This matters: beyond making it easier to link an identifier back to a device, long-lived identifiers let the app serve multiple purposes. The UK app could identify that you’ve interacted with fifteen people in a given day, for example, and thus inform you that you’re not sufficiently locked down or quarantining; under Apple/Google, it would gather over 80 identifiers from a single other device in just one day, rendering this type of tracking pointless.
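
To see what rotation buys you, here’s a sketch. The real Apple/Google derivation uses AES over an HKDF-derived key; we use a plain HMAC purely for brevity, so treat this as an illustration rather than the specification:

    # Illustrative identifier rotation; not the real Apple/Google derivation.
    import hashlib
    import hmac
    import os

    ROTATION_SECONDS = 15 * 60   # Apple/Google: a fresh identifier every ~15 min

    daily_key = os.urandom(16)   # regenerated each day, never leaves the phone

    def identifier_at(timestamp: float) -> bytes:
        interval = int(timestamp // ROTATION_SECONDS)
        mac = hmac.new(daily_key, interval.to_bytes(8, "big"), hashlib.sha256)
        return mac.digest()[:16]  # 16-byte rolling identifier, broadcast over BLE

    # Over one day a phone broadcasts 24 * 60 / 15 = 96 distinct identifiers,
    # so counting identifiers tells an observer nothing about people met.
    # With one identifier per day (UK) or a static one (India), the same
    # count becomes a ready-made quarantine-compliance sensor.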

Though they don’t say this, there are good reasons for Apple and Google’s decisions around decentralisation, privacy and identifiers:

  • First, their solutions have to work in many contexts and countries.
  • Second, false positives would degrade people’s willingness to use the apps.
  • Third, it’s an unprecedented capability, and handing it to just any government would enable horrific, authoritarian implementations at a new scale.

But what about friendly democratic governments? Can’t the US, France and the UK get special access for their apps? We’ve heard these claims from many a policy-maker.

The answer must be no. This is the crypto wars all over again: a special key for some governments will be sought, and used, by others. And France and the UK are significant markets, but not as important, lucrative, and influential as India, Turkey and China… and many other countries with less-than-admirable surveillance regimes.

And let’s not pretend that France, the US and the UK are somehow good actors in the surveillance business. Their agencies have been prone to abuses in collection and exploitation, and PI is currently involved in litigation over all of their over-reaches. We might wish they could be trusted in this key health emergency, but in recent years they’ve behaved in profoundly undemocratic and unlawful ways, with no reconciliation or learning since.

It’s about trust: building crisis tech within crises

In late April, to the surprise of many, Germany shifted from a centralised to a decentralised system.

For Germany the shift may have been about efficacy; but last week computer scientists and epidemiologists alike issued a statement saying that these apps are a huge exercise in trust. And to engender trust, safeguards are required. Decentralisation is one such safeguard, but by no means the only one.

Decentralisation isn’t a sufficient protection against data exploitation. After all, it’s possible to build a decentralised app that still requires people to upload a lot of intimate data for exploitation. And testing and manual tracing will still occur, hopefully, and will involve a huge amount of data. Protecting against exploitation of that data is just as important, even as we accept that it needs processing.

Generally our take is that we must approach these app initiatives with great care and deploy new exceptional rules.

These would include:

  • Limit, by law and by design, the number of purposes and uses for an app; e.g. an app we need to trust shouldn’t be used for quarantine enforcement.
  • Ask users to upload data sparingly and make it clear to them why each piece is important.
  • Delete the data once this pandemic is over.
  • Go further and delete the capability too: apps and APIs alike should be removed when this is done, rather than trying to find another purpose for them to exist.

Otherwise function and mission creep will follow, as naturally as the night follows the day. Quarantining apps today will enforce house arrest tomorrow. Monitoring interactions and proximity today will be used on protestors and social movements tomorrow. Bluetooth data will be used for advertising and social graphs.

Let’s rewrite the book on emergency power enshrined in tech. ‘New rules for new times’, we are often told as governments seek their new powers and capabilities. We are saying it right back at them. Have the courage to make this the one time you live up to your promise to use extraordinary powers with restraint, and to destroy those powers when the emergency is over.

These tools mustn’t become part of our general response to challenges. We must commit to not using these tools, or the data, again unless another global emergency arises. Yes, these are extraordinary measures for extraordinary times. We need to make a commitment that this is not the new normal.