Norway has employed contact tracing as one of the measures in its fight against the Covid-19 pandemic with its "Smittestopp" application. The application was hailed as safe to use and was even recommended by the Norwegian prime minister, who urged the public to download and use it (link in Norwegian). While urgent situations require urgent measures, and I personally consider the app a step in the right direction, there are serious technical/information-security objections to the way Norway has implemented it. Some of them concern the structure of tracing applications in general, whereas others are specific to how the Simula Lab and FHI have chosen to roll it out. I offer these opinions as an active infosec researcher and IT practitioner. I am employed by the University of Oslo and I consult for a private cybersecurity firm, but I declare openly that I have no conflict of interest with the authors of the "Smittestopp" app, nor do I express in this article the views of the University of Oslo or Steelcyber Scientific. Opinions are my own.
It is my assertion that people should think twice before downloading and using the "Smittestopp" application in its current form/implementation. This is especially true for people who use older Android (versions 8 and 9) mobile devices or older iPhones AND who perform important (business-critical) functions with them: e-banking, logging in to sensitive systems, etc.
Before I list the technical objections in support of my assertion, it is useful for the reader to consult the excellent general references on how contact tracing works in principle. The Norwegian implementation follows the same principle, yet with distinct choices that significantly degrade the quality of the solution.
My first objection has to do with the accuracy of Bluetooth as a means of estimating the proximity of other devices. This is not only a problem in the Norwegian implementation but a global issue. In particular, Bluetooth-based proximity sensing uses the Received Signal Strength Indicator (RSSI) to estimate the distance between devices. The principle is that the stronger the signal, the closer the devices are to each other. However, different Bluetooth chipset implementations measure RSSI in slightly different ways. In addition, the variant of Bluetooth used for proximity sensing, 'Bluetooth Low Energy' or 'Bluetooth LE', which is available in most mobile phones, is very noisy. Its transmission frequency often suffers interference from other devices in the 2.4 GHz range, such as older WiFi routers, unshielded USB cables and microwave ovens. The device does its best to keep its 'beacons' (the pulses used to advertise its presence and availability) going at constant intervals by regulating the transmission power to overcome other sources of interference. In such a frequency-congested environment, a real distance of 1.5 meters can be estimated as 2.5 meters (a false negative), or a real distance of 2.5 meters can be estimated as under 1.5 meters (a false positive). The reliability of the collected data will certainly have to be software-corrected by unproven heuristics. Bluetooth 5.1 will improve data reliability; however, as it came out in the second half of 2019, we will not see it adopted by mobile phone vendors until sometime in 2020/21. As I write this, most devices operate with the noisy and inaccurate Bluetooth LE.
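To make the false positive/negative problem concrete, here is a minimal Python sketch of the log-distance path-loss model that RSSI-based ranging typically relies on. The transmit power at 1 meter and the path-loss exponent are illustrative assumptions for a free-space-like environment, not calibration values from any real app:

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance (meters) from RSSI with the log-distance
    path-loss model. tx_power_dbm is the expected RSSI at 1 meter;
    both parameters vary per chipset and environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# With these parameters, a contact at ~1.5 m yields roughly -62.5 dBm.
clean_rssi = -62.5
# A few dB of attenuation from 2.4 GHz interference is entirely plausible.
noisy_rssi = clean_rssi - 5.0

print(estimate_distance(clean_rssi))   # ~1.5 m: flagged as a close contact
print(estimate_distance(noisy_rssi))   # ~2.7 m: the same contact is missed
```

Note how a mere 5 dB of noise pushes the same physical contact from one side of a 2-meter decision threshold to the other, which is exactly the false-negative scenario described above.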
My second objection concerns the cyber security aspects of having your Bluetooth LE broadcast device identifiers in the open at all times, exchanging data, and doing all this over an extended transmission range. Amongst the various things advertised in the open by a Covid-19 tracking app (the Norwegian "Smittestopp" is no exception) is a unique device identifier (or UUID). The idea here is to identify you to the other devices in proximity and have your phone say "Hi, I am here! Are you there?", without revealing your real-world identity (name, phone number) to the rest of the mobile phone users. This is an essential aspect of user privacy, because an adversary can use the unique identifiers of your phone (MAC address, IMEI) to get back to you. Your mobile phone provider, for example, logs the IMEI and relates IMEIs to phone numbers. The thing here is that even if the Simula/FHI app authors take all the precautions in the world to generate a good, anonymous UUID to broadcast your presence, they cannot control other vulnerabilities that exist in the protocol implementations. These vulnerabilities exist for a wide range of mobile phone Bluetooth chipsets and mobile operating systems. Various Android and Apple Bluetooth implementations have been found vulnerable, and historically the abuse of the Bluetooth protocol in what we call bluejacking/bluesnarfing attacks has caused problems. Remember, Bluetooth LE can sometimes transmit up to 100 meters (check the specs of the protocol): it can certainly do that when trying to overcome noisy environments by regulating transmission power. That is music to the ears of an adversary, who can exploit these weaknesses to execute arbitrary code on your vulnerable mobile phone. This can seriously jeopardize anonymity and mobile device integrity.
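To illustrate why even an "anonymous" identifier is dangerous if it stays static, consider this hypothetical Python sketch of what a network of cheap passive BLE sniffers could do. The locations and log format are invented for illustration; nothing here describes Smittestopp's actual behavior:

```python
import uuid
from collections import defaultdict

# A static identifier broadcast by one (hypothetical) phone.
device = str(uuid.uuid4())

# (location, advertised_uuid) pairs as logged by passive BLE listeners
# placed around a city -- no pairing or consent is required to hear them.
sniffer_logs = [
    ("Oslo S",     device),
    ("Majorstuen", device),
    ("Workplace",  device),
    ("Oslo S",     str(uuid.uuid4())),  # some other phone
]

# Grouping by UUID reconstructs a movement profile: no name, phone
# number or IMEI is needed to track the person carrying the device.
profiles = defaultdict(list)
for location, uid in sniffer_logs:
    profiles[uid].append(location)

print(profiles[device])  # ['Oslo S', 'Majorstuen', 'Workplace']
```

This is why the details of how and how often the broadcast identifier rotates matter so much for privacy.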
So far, I hope I have established a good basis that justifies why Bluetooth can provide unreliable data and open the door to attacks, let alone what it will do to the battery of a mobile phone. This is not specific to the Norwegian implementation of the app. The following paragraphs elaborate on the objections I have to the specific aspects of the Norwegian implementation.
First of all, I have to pick on the fact that Simula/FHI have cited the shortness of time as the reason for not releasing open source code for the purposes of transparency and critical system review. I regret to say that this is shockingly contrary to every good research practice. A public institution/research entity that is funded largely by taxpayers' money (even if not specifically for the "Smittestopp" project) should never go down that road. You are asking people to trust you with their personal data. We (experts and practitioners) have no way to inspect critical issues such as how you generate the UUID and what exactly you are doing to handle the Bluetooth inaccuracies. I also need to criticize their statements that open source does not contribute to privacy. The issue here is not to contest whether closed source or open source is more suitable to safeguard privacy. We can easily refute their argument by noting that the Linux kernel, whose source code is open at large, is used successfully by mission/life-critical systems. The issue is how one can enable a process for a suitable number of experts to comment and improve. I have no doubt that Simula and FHI have capable people. I doubt that they and the (IMHO) non-transparently appointed panel of external experts have enough expertise to secure, in such a short time, systems whose scope and scale match the needs of the task. Have these people approved the app as safe and reliable, and if yes, how did they miss the issues pointed out here as well as many others?
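To show the kind of detail that open source review would let experts check, here is one well-known way to generate identifiers that are anonymous to outside observers yet still linkable by the health authority: HMAC a coarse time counter with a device-local secret so the broadcast value rotates. This sketch is hypothetical and emphatically NOT how Smittestopp generates its UUID; the point is precisely that, without source code, nobody outside Simula/FHI can verify which scheme, if any, is used:

```python
import hashlib
import hmac
import time

def rolling_identifier(secret_key: bytes, counter: int) -> str:
    """Derive a short-lived broadcast identifier by HMAC-ing a coarse
    time counter with a device-local secret key. Once the counter
    advances, passive observers cannot link the new beacon to the old
    one without knowing the secret."""
    mac = hmac.new(secret_key, counter.to_bytes(8, "big"), hashlib.sha256)
    return mac.hexdigest()[:32]

# In this sketch the identifier rotates every 15 minutes.
counter = int(time.time()) // 900
beacon = rolling_identifier(b"device-local-secret", counter)
```

Whether the real app rotates identifiers at all, and on what schedule, is exactly the kind of question an open review process would answer in minutes.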
Finally, the transparency and expert review measures concern not only the source code but the entire infrastructure, including the central storage/processing activities. We are assured that all relevant measures have been taken to safeguard the data, yet no standards to which these procedures and infrastructure adhere are mentioned. I wonder why.