Apple and Google’s Bluetooth contact tracing API: an impressive example of privacy-preserving features

  • The Apple & Google privacy-preserving contact tracing has no additional (privacy) cost.
  • It may have limited to no benefit, because of low adoption and other issues.
  • It may have a big benefit, by helping keep the curve flat while the physical distancing requirements are relaxed.
  • It is the only approach that has a chance to work, because of its likely adoption and its practical handling of Bluetooth Low Energy (BTLE) limitations.

In case you are coming to this without previous context: this is an informal write-up about the Apple & Google privacy-preserving approach, and why I wholeheartedly support it.

Compared to my normal bio-hacking, ancient “woo-woo” practices with technology, and dorking around with a high-speed camera, this is the closest I’ve come to putting my work background on this blog.

I’ve put it here because I’d like to show how a professional paranoid like me analyses this (in the hope you can learn that trick too), as well as to counter some of the Fear, Uncertainty and Doubt (FUD) that comes up around tracing technology.
After all, I do think this is an excellent piece of engineering and a sound policy decision from Apple and Google.

TL;DR version

The way I see it:

No costs but potential (big) benefits = I vote we do this.

How professional paranoids think

It is relatively easy to get into a paranoid mindset: just assume everyone is out to get you. I know; I’ve been doing that for decades, mostly professionally.
What distinguishes amateur paranoids from professional ones like me is that professionals know when to stop worrying and doubting.

One of the things we do for this in my professionally paranoid world is to think about the goals and capabilities of the attacker.
We phrase the things the attacker could do as: “An attacker wanting <intent of the attacker>, with <capability of the attacker>, would <what the attacker achieves>”.
We call this the “threat model”, or in Common Criteria terms the “security problem definition”.
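
To make that template concrete, here is a minimal sketch of how such threat statements can be captured as data, so a list of them can be collected and reviewed systematically. This is purely my own illustration; the names and structure are not from any specification or standard:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One threat statement in the 'wanting X, with Y, would Z' form."""
    intent: str      # what the attacker wants
    capability: str  # what the attacker can do
    achieves: str    # what the attacker would actually get

# Threat #1 from the list below, written in this form:
threat_1 = Threat(
    intent="to map my social interaction graph",
    capability="access to the central server",
    achieves="see who I have met, when and where",
)

print(f"An attacker wanting {threat_1.intent}, "
      f"with {threat_1.capability}, "
      f"would {threat_1.achieves}.")
```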

When this idea of phone-based contact tracing was first floated (and to be overly clear: the Apple & Google privacy-preserving approach does not have these problems, which is why I am supporting it), these were the kinds of threats I thought of:

  1. An attacker with access to the central server would see who I have met, when and where. I.e. Apple/Google/the government could use this to map my social interaction graph.
    (Not the case, because the phone only sends the daily tracing keys to a semi-centralised server after I’m declared sick. And even then, these keys and the derived 10-minute pseudonyms are not linked to an identity. And even in that case, your phone determines whether it was near one of those pseudonyms, not the central server. Your phone doesn’t gain any information about me either: it just holds a bunch of pseudonyms without attached identities and determines that it was near one of them at a certain time. See the sketch after this list for how these pseudonyms are derived and matched.)
  2. An attacker able to eavesdrop on the entire internet would see who I have met, when and where. I.e. the above one at NSA scale.
    (Not the case, because the phones just don’t transmit that information. An almighty eavesdropper might know a bit more about me, and would be able to couple those facts to me being declared sick and uploading my daily tracing keys to the semi-central server. But that isn’t an added risk of this system; it is the existing risk of worldwide surveillance by both government agencies and commercial companies. They would already know you called a doctor…)
  3. An attacker able to eavesdrop on an area would know, once I declare I am infected, that I was in that area, including when and ‘how far’ from the eavesdropping station.
    (Not really a case, as this is the same as, say, a shopkeeper’s phone doing this. Arguably this is even a good thing: one could know where in the area extra cleaning might have been applicable.)
  4. An attacker able to eavesdrop on all Bluetooth transmissions over the whole world would see all connections. I.e. the above one at Illuminati scale.
    (Not the case, because all they would see is some blips of ±10-minute ‘identities’ moving around. Really not useful, but in any case this is no extra information: any phone already sends a unique Bluetooth and WLAN MAC address, and modern phones already randomise these MACs every ±10 minutes for exactly this anti-tracking reason. One of the things I found clever is that the tracing pseudonym and the Bluetooth MAC are rotated at the same time, so one cannot be used to link to the other.
    Of course, if you disclose the daily tracing keys, the rolling proximity identifiers can now be grouped per day. So this all-powerful eavesdropper would know where a pseudonymous infected person walked. Which is exactly what contact tracers are already doing manually without this process, so I consider this a feature.
    In the end this would be an attack for ‘last mile’ location tracking: just to function at the cellular network level, every mobile phone is still sending its unique identifiers (IMSI and related values) to the mobile network, so the operators already know where that phone is, at least at the granularity of network cells.)
  5. An attacker with physical possession of my phone would be able to force me to show who I met. I.e. evil secret police forces me to show my co-conspiring cuddling group.
    (Not the case, because my phone does not know this. It only knows those random pseudonyms. This actually reduces to the eavesdropping cases above.)
  6. An attacker could force me and you to show that we were close to each other. I.e. a police investigation into me and an already-suspected other, like you, for unauthorised cuddling, or worse.
    (Somewhat possible, but with major limitations: they would have to force my phone to declare me sick (which is usually illegal), they would then have to wait at least a day (because the phones only disclose tracing keys that are at least one day old), they could only go back 15 days (again: a property of the phones), and then at best your phone would say that you were potentially exposed at a certain time when it saw an ‘infected’ pseudonym. But still no confirmation that it was me, just a suspicion that it was me, which they can’t use in court. And then we’re back to their original suspicion anyway.)
  7. An attacker with significant legal or informal power forces me to declare I’m sick, and then my app transmits an “I am a leper” code for everyone to shun me.
    (Not possible, because since version 1.1 of the specification the phone will not disclose the current day’s tracing key, only yesterday’s and older, with a maximum of 15 days back (see the disclosure window in the sketch after this list). This is one of the improvements I found very clever.)
  8. An attacker forces me to show I am not infected/infectious, i.e. the green/yellow/red QR code apps being deployed.
    (This is independent of this proposal. The Google & Apple API does not help with this; it actually seems to go out of its way to hinder it.)
  9. An attacker generates false ‘infections’ and causes many to be pseudo-infected. I.e. a cyber-bio-terror attack: I put my ‘phone’ with a strong Bluetooth amplifier near any place of gathering, then declare myself sick, and everyone who was in that place of gathering shows up as being ‘near’ to an infected person.
    (This is possible with any approach. I suspect this is one of the reasons why sane contact tracing apps will require some medical confirmation that one actually is infected. And that part then falls under medical safety and privacy as usual.)
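
To make the mechanics behind threats #1, #4 and #7 concrete, here is a minimal sketch based on my reading of the v1.0/1.1 cryptography specification. The “CT-DTK”/“CT-RPI” labels are from the published spec, but the function names and structure are my own simplification, not Apple’s or Google’s actual code. It shows the three properties the analysis above relies on: the ±10-minute pseudonyms are derived one-way from a daily key, the daily keys are derived one-way from a secret that never leaves the phone, and on diagnosis only daily keys between 1 and 15 days old are disclosed, with matching done locally on the receiving phone:

```python
import hashlib
import hmac
import os


def hkdf_sha256(key: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256 and an all-zero salt."""
    prk = hmac.new(b"\x00" * 32, key, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


# The 32-byte tracing key: generated once, never leaves the phone.
tracing_key = os.urandom(32)


def daily_tracing_key(tk: bytes, day_number: int) -> bytes:
    """Derive the 16-byte daily tracing key for a given day number."""
    return hkdf_sha256(tk, b"CT-DTK" + day_number.to_bytes(4, "little"), 16)


def rolling_proximity_identifier(dtk: bytes, interval: int) -> bytes:
    """Derive the pseudonym broadcast over Bluetooth LE during one of the
    144 ten-minute intervals of the day. Per threat #4: the Bluetooth MAC
    is randomised at the same moment, so neither can link to the other."""
    digest = hmac.new(dtk, b"CT-RPI" + bytes([interval]),
                      hashlib.sha256).digest()
    return digest[:16]  # truncated to 16 bytes


def keys_to_disclose(tk: bytes, today: int) -> list[bytes]:
    """On a confirmed diagnosis, disclose only daily keys at least one
    day old (never today's, per threat #7) and at most 15 days old."""
    return [daily_tracing_key(tk, day) for day in range(today - 15, today)]


def was_exposed(disclosed_keys: list[bytes], overheard: set[bytes]) -> bool:
    """Exposure matching runs on *my* phone (threat #1): re-derive all
    pseudonyms from the disclosed daily keys and compare them against
    the pseudonyms my phone overheard. No identity appears anywhere."""
    return any(
        rolling_proximity_identifier(dtk, interval) in overheard
        for dtk in disclosed_keys
        for interval in range(144)
    )
```

Note what an eavesdropper is left with: until someone voluntarily discloses their daily keys, every pseudonym is an unlinkable 16-byte blob, and even after disclosure the pseudonyms only become linkable per day, with no identity attached.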

So… a long but hopefully insightful story on how someone like me looks at this mechanism and weighs the risks and benefits.

Some of the underlying tricks I use to stop worrying:

  • Already accepted risk: if we end up in a situation we have already accepted (here, for example: the mobile network could already track me, regardless of this proposal), I remind myself why I accepted the original situation, quickly check whether things have really changed, and if not, shrug and accept this too.
  • No additional risk: when we reach a point where we have to assume that the attacker already has what he is aiming for in order to pull off the attack, I stop. Obviously an attacker who can hack the whole phone OS and hardware can make it do more than it should.
    But that attacker then doesn’t need this mechanism.
    So… that attack probably isn’t useful to the attacker.
  • No additional information: I keep in mind what all parties could possibly know (this is an application of ‘belief logic’). If the attacker does something but gains no knowledge by it, the attack is unlikely to be useful.

I hope you find this helpful.

With kind regards,
Wouter

P.S. (2020-06-14) There are quite a few other analyses out there, some of which I really like, but dear me, a lot of them are unprofessional FUD.
For example, “Mind the GAP: Security & Privacy Risks of Contact Tracing Apps” claims that:

  • “We demonstrate that in real-world scenarios the current GAP design is vulnerable to (i) profiling and possibly de-anonymizing infected persons,”: well, no, not any more than you already can; see my #3 and #4,
  • “and (ii) relay-based wormhole attacks that principally can generate fake contacts with the potential of significantly affecting the accuracy of an app-based contact tracing system.”: yes, but any false claim of contacts would work; this has nothing to do with this particular technology, see my #9.

P.P.S. (2022-08-26) The protocols held up for privacy and there were only a few practical issues (the ACM had a good overview), but mostly the adoption by the public was too low, and the health organisations’ testing and contact tracing were established far too late to be of help before COVID went beyond containment.