Apple and Google’s Bluetooth contact tracing API: an impressive example of privacy-preserving features

In case you are coming to this without previous context: this is an informal write-up about the Apple & Google privacy-preserving approach, and why I wholeheartedly support it.

Compared to my normal bio-hacking of ancient “woo-woo” practices with technology and dorking around with a high-speed camera, this is the closest I’ve come to putting my work background on this blog.

I’ve put it here because I’d like to show how a professional paranoid like me analyses this (in the hope you can learn that trick too), as well as hopefully counter some of the Fear, Uncertainty and Doubt (FUD) that comes up around tracing technology.
After all, I do think this is an excellent piece of engineering and policy from Apple and Google.

TL;DR version

The way I see it:

  • The Apple & Google privacy-preserving contact tracing comes at no additional (privacy) cost.
  • It may have limited to no benefit, because of low adoption and other issues.
  • It may have a big benefit, by keeping the curve flat while relaxing the physical distancing requirements.
  • It is the only approach with a chance of working, because of its likely adoption and its practical handling of the BTLE details.

No costs but potential (big) benefits = I vote we do this.

How professional paranoids think

It is relatively easy to get into a paranoid mindset: just assume everyone is out to get you. I know, I’ve been doing that for decades, mostly professionally.
What distinguishes amateur paranoids from professional ones like me is that professionals know when to stop worrying and doubting.

One of the things we do for this in my professionally paranoid world is to think about the goals and capabilities of the attacker.
We phrase the things the attacker could do as: “An attacker wanting <intent of the attacker> with <capability of the attacker> would <what the attacker achieves>”.
We call this the “threat model” or “security problem definition” in Common Criteria.

When this idea of phone-based contact tracing first came up (and to be overly clear: the Apple & Google privacy-preserving approach does not have these problems, which is why I am supporting it), these were the kinds of threats I thought of:

  1. An attacker with access to the central server would see who I have met, when and where. I.e. this can be used to map my social interaction graph by Apple/Google/government.
    (Not the case, because the phone only sends the daily tracing keys to a semi-centralised server after I’m declared sick. And even then, these keys and the derived 10-minute pseudonym numbers are not linked to an identity. And even in that case, your phone determines whether you were near that pseudonym, not the central server. Your phone doesn’t gain any information about me either: it just has a bunch of pseudonyms without attached identities, and determines that it was near one at a certain time. I sketch this derivation and matching in code after this list.)
  2. An attacker able to eavesdrop on the entire internet would see who I have met, when and where. I.e. the above, on NSA scale.
    (Not the case, because the phones simply don’t transmit that information. An almighty eavesdropper might know a bit more about me, and would be able to couple those facts to me being declared sick and uploading my daily tracing keys to the semi-central server. But that isn’t an added risk of this system; it is the risk of worldwide surveillance by both government agencies and commercial companies. They would already know you called a doctor…)
  3. An attacker able to eavesdrop on an area would know, when I declare I am infected, that I was in that area, including when and ‘how far’ from the eavesdropping station.
    (Not really a case, as this is the same as, say, a shopkeeper’s phone doing this. Arguably this is a good thing: one could know where in the area extra cleaning might be applicable.)
  4. An attacker able to eavesdrop on all Bluetooth transmissions over the whole world would see all connections. I.e. the above, on illuminati scale.
    (Not the case, because what they would see is some blips of ±10-minute ‘identities’ moving around. Really not useful, and in any case not more information than before: any phone already broadcasts a unique Bluetooth and WLAN MAC address, and modern phones already randomise these MACs every ±10 minutes for exactly this tracking reason. One of the things I found clever is that the tracing pseudonym and the Bluetooth MAC are varied at the same time, so one cannot be used to link to the other.
    Of course, once you disclose your daily tracing keys, the rolling proximity identifiers of the same day become grouped. So this all-powerful eavesdropper would know that a pseudonymous infected person walked where. Which is exactly what contact tracers do manually without this process, so I consider this a feature.
    In the end this would be an attack for ‘the last mile’ of location tracking: just to function at cellular network level, every mobile phone is already sending its unique identifier (IMSI and related values) to the mobile network, so at least at the granularity of mobile network cells the networks know where that phone is.)
  5. An attacker with physical possession of my phone would be able to force me to show who I met. I.e. the evil secret police forces me to show my co-conspiring cuddling group.
    (Not the case, because my phone does not know this. It only knows those random pseudonyms. This actually reduces to the previous case.)
  6. An attacker would force me and you to show that we were close to each other. I.e. a police investigation into me and an already suspected other, like you, for doing some unauthorised cuddling, or worse.
    (Somewhat possible, but with major limitations: they would have to force my phone to declare me sick (which is usually illegal), they would then have to wait at least a day (because both phones only disclose tracing keys that are at least one day old), they would only be able to go back 15 days (again: enforced by the phones), and then at best they would learn that your phone says you were potentially exposed at a certain time because it saw an ‘infected’ pseudonym. Still no confirmation it was me, just a suspicion that they can’t use in court. And then we’re back to their original suspicion anyway.)
  7. An attacker with significant legal or informal power forces me to declare I’m sick, and then my app transmits an “I am a leper” code for everyone to shun me.
    (Not possible, because since version 1.1 of the specification the phone will not disclose the current day’s tracing key, only yesterday’s onwards, with a maximum of 15 days back. This is one of the improvements I found very clever.)
  8. An attacker forces me to show I am not infected/infectious, i.e. the green/yellow/red QR-code apps being deployed.
    (This is independent of this proposal. The Google & Apple API does not help with this; it actually seems to go out of its way to hinder it.)
  9. An attacker generates false ‘infections’ and causes many people to be pseudo-infected. I.e. a cyber-bio-terror attack: I put my ‘phone’ with a strong Bluetooth amplifier near any place of gathering, then declare myself sick, and everyone who was in that place shows up as having been ‘near’ an infected person.
    (This is possible with any approach. I suspect this is one of the reasons why sane contact tracing apps will require some medical confirmation that one actually is infected. And that part then falls under medical safety and privacy as usual.)

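To make threats 1 and 4 concrete, here is a minimal Python sketch of the key schedule as I read the v1.1 specification. The structure (one secret tracing key, a derived key per day, a fresh pseudonym every ±10 minutes, and matching done locally on the phone) is from the spec; the exact field encodings are my assumption, so treat this as an illustration rather than a reference implementation.

```python
# Minimal sketch of the v1.1 Apple/Google key schedule (encodings assumed).
import hmac, hashlib, os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 16) -> bytes:
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()   # extract, no salt
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]  # expand

tracing_key = os.urandom(32)   # generated once, never leaves the phone

def daily_tracing_key(day_number: int) -> bytes:
    return hkdf_sha256(tracing_key, b"CT-DTK" + day_number.to_bytes(4, "little"))

def rolling_proximity_identifier(dtk: bytes, interval: int) -> bytes:
    # interval is 0..143: which 10-minute slot of the day we are in.
    # The RPI is broadcast over BTLE and rotated together with the random
    # Bluetooth MAC, so the two cannot be linked to each other.
    mac = hmac.new(dtk, b"CT-RPI" + bytes([interval]), hashlib.sha256)
    return mac.digest()[:16]

# Matching happens on *your* phone: download the daily keys of diagnosed
# users, re-derive their 144 pseudonyms per day, and compare them against
# the pseudonyms your phone actually heard. No server involved.
def was_exposed(disclosed_dtk: bytes, heard_rpis: set) -> bool:
    return any(rolling_proximity_identifier(disclosed_dtk, j) in heard_rpis
               for j in range(144))
```

The privacy properties fall out of this structure: the server only ever sees the daily keys of diagnosed users, and only your own phone knows which pseudonyms it actually heard.
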
So… a long but hopefully insightful story on how someone like me looks at this mechanism and determines the risks/benefits.

Some of the underlying tricks I use to stop worrying:

  • Already accepted risk: if we end up in a situation we have already accepted (here, for example: the mobile phone could track me regardless of this proposal), I remind myself why I accepted the original situation, quickly check whether things really changed or not, and if not, shrug and accept this too.
  • No additional risk: when we reach a point where the attack assumes the attacker already has the capability to get what he is aiming for, I stop. Obviously an attacker who can hack the whole phone OS and hardware can make it do more than it should.
    But that attacker doesn’t need this mechanism.
    So… that attack probably isn’t useful to the attacker.
  • No additional information: I keep in mind what all parties could possibly know (this is an application of ‘belief logic’). If the attacker does something but gains no knowledge from it, it is unlikely to be useful.

I hope you find this helpful.

With kind regards,
Wouter

P.S. (2020-06-14) There are quite a few other analyses out there, some of which I really like, but dear me, a lot of them are unprofessional FUD.
For example, Mind the GAP: Security & Privacy Risks of Contact Tracing Apps claims that:

  • “We demonstrate that in real-world scenarios the current GAP design is vulnerable to (i) profiling and possibly de-anonymizing infected persons,”: well, no, not more than you already can; see my #3 and #4,
  • “and (ii) relay-based wormhole attacks that principally can generate fake contacts with the potential of significantly affecting the accuracy of an app-based contact tracing system.”: yes, but any false claim of contacts would work; this has nothing to do with the technology, see my #9.

P.P.S. (2022-08-26) The protocols held up for privacy and there were only a few practical issues (the ACM had a good overview), but mostly the adoption by the public was too low, and the health organisations’ testing and contact tracing were established way too late to be of help before COVID went beyond containment.

RFID blockers have very limited use

I keep seeing these “RFID blockers”.

Anti-RFID skimming device

RFID blockers improve your security only a little, because contactless skimming is a high-risk/low-reward attack for the attacker: the reading distance of contactless credit cards and electronic IDs is at most ±25 cm, so about ±10 inches, and that is in lab conditions with 500+ watt amplifiers (the kind of power that makes sparks fly!).
If you can reliably get the reading distance up to half that in real-world situations, you can make a lot of money selling your skills in the reader market.

For this attack to work, someone basically needs to be rubbing up against you to talk to your card.
In the case of electronic IDs (eIDs) like passports, driver’s licenses and ID cards that follow the international ICAO norms (i.e. any European one; American ones only since a few years ago), that still doesn’t get the attacker anything: talking to the chip requires information from the front side of the card, namely the 3-4 lines of computer-readable text at the bottom of your passport. (In case you want to know, this is called the Machine Readable Zone (MRZ).)
Basically: you need to optically read the eID before you can electronically read it, i.e. you are already handing it over to them (ID checks at airports, rentals, hotels).
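
To make the “optical before electronic” point concrete, here is a simplified Python sketch of the ICAO 9303 Basic Access Control (BAC) key derivation: the keys that unlock the chip are computed from fields printed in the MRZ. I’ve omitted the DES parity-bit adjustment and the newer PACE protocol for brevity, so consider this an illustration, not a conformant implementation.

```python
# Simplified ICAO 9303 BAC key derivation: the chip's access keys come
# straight from data printed on the document, so you must see it first.
import hashlib

def check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7,3,1; '<' counts as 0, A-Z as 10-35."""
    total = 0
    for i, c in enumerate(field):
        value = 0 if c == "<" else int(c) if c.isdigit() else ord(c) - ord("A") + 10
        total += value * (7, 3, 1)[i % 3]
    return str(total % 10)

def bac_keys(document_number: str, birth_date: str, expiry_date: str):
    """Derive the BAC 3DES key material from MRZ fields (dates are YYMMDD)."""
    doc = document_number.ljust(9, "<")            # pad short numbers with '<'
    mrz_information = (doc + check_digit(doc)
                       + birth_date + check_digit(birth_date)
                       + expiry_date + check_digit(expiry_date))
    k_seed = hashlib.sha1(mrz_information.encode()).digest()[:16]
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac   # (DES parity-bit adjustment omitted)
```

Without the MRZ there is no k_enc/k_mac and the chip simply refuses to talk: skimming the chip without first seeing the document buys the attacker nothing.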

In the case of contactless credit cards, the story is a bit more complex, as it depends on what your issuing bank has configured the card for (they have a dozen or so parameters to choose from).
In general, transactions up to $25-ish, to an overall total of $150-ish, don’t require a PIN (for the tap-and-go payment of coffees).
As the electronic transactions with these credit cards are one-time and only-with-that-shop, a pair of attackers would need to pull off the following to pay with your card, in what I call a “contactless extension cord” attack, often called “virtual pickpocketing” (see the sketch below):

  1. Attacker A dry-humps you to get his card reader within those ±5 inches of your card, and
  2. Attacker B, at that exact same time, is physically in a shop with a card emulator, about to pay up to the limits we are talking about (i.e. at most ±5 purchases of $25-ish each).

This exposes both Attacker A and Attacker B to being physically caught, for $25-150 worth of goods that still need to be fenced at a much lower return value.
There are lower-risk and higher-reward attacks you can do as a criminal :-).
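
For the structurally minded, the “contactless extension cord” is just a bidirectional relay of reader commands (APDUs) and card responses over a network link. The sketch below is purely illustrative: the sockets stand in for whatever NFC reader/emulator hardware the attackers use, and all names are hypothetical.

```python
# Structural sketch of the relay: Attacker B's card emulator at the shop
# forwards the terminal's APDUs to Attacker A's reader next to the
# victim's card, and relays the answers back.
import socket

def relay_apdus(card_side: socket.socket, shop_side: socket.socket) -> None:
    """Forward opaque APDU bytes both ways until either side hangs up."""
    while True:
        command = shop_side.recv(4096)    # payment terminal -> emulated card
        if not command:
            break
        card_side.sendall(command)        # replayed to the real card nearby
        response = card_side.recv(4096)   # the victim's card answers...
        if not response:
            break
        shop_side.sendall(response)       # ...and the terminal is none the wiser
```

Note what the sketch makes obvious: both ends must be live at the exact same moment, each within centimetres of their target, which is precisely the physical exposure that makes this attack so unattractive.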

That said, if you want protection anyway, consider adding a layer of aluminium foil to your existing wallet (this reduces the read distance to 1-2 inches), or combine that with the practicality of a compact wallet like Secrid.

With kind regards,
Wouter

Flip Feng Shui: Perturbation attacks made it to VMs

I’ve been reading up on the Flip Feng Shui: Hammering a Needle in the Software Stack paper, and I’m enjoying seeing common smartcard attack considerations come to more mainstream software.

From the paper:

We describe Flip Feng Shui (FFS), a new exploitation vector that allows an attacker to induce bit flips over arbitrary physical memory in a fully controlled way. FFS relies on two underlying primitives: (i) the ability to induce bit flips in controlled (but not predetermined) physical memory pages; (ii) the ability to control the physical memory layout to reverse-map a target physical page into a virtual memory address under attacker control.

The first item is what we call “perturbation attacks” in the smartcard domain. We do these attacks all the time, by giving our poor smartcards power spikes they really shouldn’t be exposed to, prodding them with probing needles too small for the human eye, shooting them with freaking lasers, … you know: standard Monday-morning stuff in the office*.

Because we’ve been doing this for ±20 years now in our domain, it took me a while to understand that a statement like the following is not a joke:

existing cryptographic software is wholly unequipped to counter it, given that “bit flipping is not part of their threat model”.

Because in my world, bit flips are a given, considering that there is an attacker playing with the smartcard. Monday morning, remember?

So how does this attack work?

The attack (mis)uses memory de-duplication, a feature where the host hypervisor notices that a memory page of one VM is identical to a page of another VM. When this is enabled, the host hypervisor maps both pages to the same physical page (reducing the actual physical memory used by 40-70%!). If the attacker was the one who originally created that page, he now owns the actual physical page. As long as the host software thinks the page’s content has not changed, the victim VM will read the attacker’s physical page.

So the attacker then performs a Rowhammer attack to flip a bit in “his” page. As Rowhammer is a physical side effect that ‘should not happen’, the host hypervisor does not see the page as changed, even though it is. The attacker has now caused a bit flip in his own and, more importantly, the victim’s memory.

Flipping a bit in, say, an RSA public key allows the attacker to factor that modified key and generate an appropriate secret key to match. If the attacker does this to the RSA key used to authenticate root access over SSH, or to the signature key for package updates of a Linux distribution, he now has full control over that machine. A toy sketch of this key-flipping step follows below.
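
Here is a toy sketch with tiny numbers (using sympy) of why a single bit flip breaks RSA: the flipped modulus almost certainly loses its “product of two big primes” structure, so the attacker can factor it and mint a matching private exponent. A real 2048-bit modulus works the same way, just with bigger numbers.

```python
# Toy illustration of forging a key for a bit-flipped RSA modulus.
from sympy import factorint, mod_inverse

p, q = 61, 53                    # the victim's primes (normally secret)
n, e = p * q, 17                 # the victim's public key: n = 3233

n_flipped = n ^ (1 << 5)         # Rowhammer flips one bit: 3201 = 3 * 11 * 97

phi = 1
for prime, k in factorint(n_flipped).items():    # easy to factor now
    phi *= prime ** (k - 1) * (prime - 1)        # Euler's totient of n_flipped

d = mod_inverse(e, phi)          # attacker's private exponent for the flipped key
                                 # (if gcd(e, phi) != 1, just target another bit)

signature = pow(42, d, n_flipped)
assert pow(signature, e, n_flipped) == 42        # verifies under the flipped key
```

So the moment the victim’s memory holds the flipped key, anything the attacker signs with d passes verification.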

Neat! (In the smartcard world we usually attack the secret key, because of how the protocols are used.)

Theory or practice?

Now, to successfully pull off this attack, several things have to be possible for the attacker:

  • predicting the memory content (this excludes attacks on confidential information such as secret keys),
  • memory de-duplication must be active (so disabling it, or setting it to “only zero pages”, seems prudent; see the sketch after this list),
  • the attacker must be running his VM on the same physical machine as the victim’s VM (I don’t know whether this is a realistic scenario; more on that below),
  • the memory must be sensitive to something like Rowhammer (so ECC memory is yet again a good idea; it will significantly reduce the chances of this).
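
As a concrete example of the de-duplication prerequisite: on Linux/KVM hosts the relevant feature is kernel same-page merging (KSM), controlled through a sysfs knob. A minimal sketch, assuming the standard sysfs path and root privileges; other hypervisors have their own equivalents (e.g. VMware’s transparent page sharing).

```python
# Check and disable kernel same-page merging (KSM) on a Linux/KVM host.
from pathlib import Path

KSM_RUN = Path("/sys/kernel/mm/ksm/run")   # 0 = stopped, 1 = running

def ksm_enabled() -> bool:
    return KSM_RUN.read_text().strip() == "1"

def disable_ksm() -> None:
    # Writing 2 unmerges all currently shared pages and stops KSM
    # (per the kernel's KSM documentation); requires root.
    KSM_RUN.write_text("2\n")

if __name__ == "__main__":
    print("KSM enabled:", ksm_enabled())
```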

Realistic to be the neighbour of your victim VM?

This attack depends on being able to run the attack VM on the same hardware as the victim VM. I have no well-founded grounds to guess whether this is a realistic assumption.

I can think of the following situations where that is possible:

  • The pool of actual hardware is pretty small compared to the number of VMs, because the hardware is very beefy or the VMs are small.
  • The number of instances of the victim VM is pretty big, because it is a standard VM replicated many, many times. I’m thinking of situations like massively parallel computing or streaming (Netflix?).
  • Or the targeted page is very common; here I’m thinking of, for example, the signature files for updates, or company-wide backup root accounts.

My conclusion: stay calm and …

Considering all the complexity of this attack, I don’t see it worming all over the Internet soon. It is, however, a cool warning that attacks can and do cross over between these various fields.

I wonder when they’ll realise they can also apply this attack to modify the running code of, say, the password check routine…

NSA Equation Group’s exploits for sale?

There is a persistent and fairly believable rumor going around that a significant number of the NSA’s exploits are for sale. To convince potential buyers that they have the goods, the sellers have apparently dumped a ±250MB package of 3-year-old exploits and implants for firewalls.
Looking at the descriptions Mustafa Al-Bassam extracted from the dump, it seems plausible that at least these firewall exploits are from a government outfit like the NSA or GCHQ, based on the terminology, the typical codenames like “WOBBLYLLAMA”, and the kinds of firewalls targeted.

Regardless of whether this is the real deal (and whether more than just these firewall exploits are up for grabs), this, in my view, highlights the main problems with seeing offensive hacking as the best defence:

  • Backdoors and exploits are often single-use: using them as an attacker is risky, as your target may be recording network traffic and recover the attack. This was also explicitly mentioned by Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group, i.e. the NSA hackers whose toys are apparently now for sale. Or you mishandle a spearphish and lose your three nation-state-quality iOS 0-days to Apple, an expensive mistake, as a fully weaponised remote rooting of iOS is easily worth $100,000+.
    As an aside, their 2007 hardware toy catalogue leaked some time ago; a fun read for people like me.
  • Amusingly, the NSA actually does this kind of eavesdropping itself to get other organisations’ offensive hacking tools. There are some convincing theories that this ‘leaked exploits sale’ is actually one of those other organisations (China and Russia have been mentioned) getting back at them.
  • Somewhere in these offensive organisations there is a weapons cache of these exploits, and it just takes one disgruntled employee with access to it and the desire to leak it. After all, it is these hackers’ full-time job and passion to break into and out of highly secure environments, and they have all the tools for this. (This is the most likely explanation.)
  • Once the attacks are out, others can also use them against you! The NSA has this story that “NObody But US” (NOBUS) can exploit these things, and uses that as an argument not to inform the American companies whose products are put at risk in this way. So now Cisco (and other) firewall vendors are scrambling to make a bug fix (which takes a few days to weeks), and that fix may actually impact other products too.
    Then the users of these firewalls have to actually deploy the bug fix (which they’ll be reluctant to do for reliability reasons, if they even know the bug fix exists, taking another few days to many months), and all the while savvy and assertive attackers can hack these firewalls at their leisure.

I still have this naive hope that these blowups will change the policy of these organisations to lean more toward the defensive. Realistically though, they’ll double down: both hide their exploit development better and come down even harder on leakers and whistleblowers.

And of course thus continues the arms race, making us all less secure in the long term for a short term gain. Sigh.

Your friendly professionally paranoid,

Wouter

P.S. This may of course just be a disinformation campaign or a fundraiser. One never knows… (until ±30-50 years later when the documents get declassified).

Voynich manuscript reproduction coming

Oh, in a world of infinite resources I would definitely lust for this, for the sheer joy of having it as a token of cryptography run rampant: the Voynich manuscript is going to be reproduced, at a staggering $10,000 price tag.
Much more reasonably priced reprints are available via Yale Books or Amazon, and the scans are available for free.

The Voynich manuscript has been intriguing cryptanalysts for ages, with its cryptic almost-sensible texts and its pictures of not-quite-real plants and animals. There are interesting attempts claiming to have partially decoded it here and here.

I, for one, think XKCD still has the best explanation of the origins of this intriguing manuscript:

XKCD's take on the Voynich manuscript

iPhone reset

I just had my iPhone 6s refuse to respond (black screen, no reaction to any of the individual buttons). Slight stress peak, as a working phone is pretty important to me 😉

The solution was to hold the home button (the round button at the bottom front) and the power button (the upper right side button) at the same time for about 10 seconds (just wait until the phone shows the Apple logo).

The iPhone then reboots, asks for your passphrase, and runs as usual. Interestingly, it apparently keeps the SIM card powered, as I did not need to enter the SIM card’s PIN.

Reference: Apple’s article on this.