
Monday, April 20, 2020

Information security considerations for the Norwegian "Smittestopp" Covid-19 contact tracing application

Norway has employed contact tracing as one of the measures in the fight against the Covid-19 pandemic with its "Smittestopp" application. The application was hailed as safe to use and was even recommended by the Norwegian prime minister, who urged the public to download and use it (link in Norwegian). While urgent situations require urgent measures and I personally consider the app a step in the right direction, there are serious technical/information security objections to the way Norway has implemented it. Some of them concern the structure of tracing applications in general, whereas others are specific to how Simula and FHI have chosen to roll it out. I offer these opinions as an active infosec researcher and IT practitioner. I am employed by the University of Oslo and I consult for a private cybersecurity firm, but I declare openly that I have no conflict of interest with the authors of the "Smittestopp" app, nor do I express in this article the views of the University of Oslo or Steelcyber Scientific. Opinions are my own.

It is my assertion that people should think twice before downloading and using the "Smittestopp" application in its current form/implementation. This is especially true for people who use older Android devices (versions 8 and 9) or older iPhones AND perform important (business critical) functions with them: e-banking, logging in to sensitive systems, and so on.
 
Before I list the technical objections in support of my assertion, it's useful for the reader to read excellent general references on how contact tracing works in principle. The Norwegian implementation follows the same principle, yet with distinct choices that really degrade the quality of the solution. 

My first objection has to do with the accuracy of Bluetooth as a way to estimate the proximity of other devices. This is not only a problem in the Norwegian implementation but a global issue. In particular, the Bluetooth protocol uses the Received Signal Strength Indicator (RSSI) to estimate the distance between devices. The principle is that the stronger the signal, the closer the devices are to each other. However, different Bluetooth chipset implementations measure RSSI in slightly different ways. In addition, the particular variant of Bluetooth called 'Bluetooth Low Energy' or 'Bluetooth LE', which is available in most mobile phones and is used for proximity sensing, is very noisy. Its transmission frequency often interferes with other devices in the 2.4 GHz range, such as older WiFi routers, unshielded USB cables and microwave ovens. The device will do its best to keep its 'beacons' (the pulses used to advertise its presence and availability) going by keeping constant timing and regulating the transmission power to overcome other sources of interference. In such a congested frequency environment, a real distance of 1.5 meters can be estimated as 2.5 meters (a false negative), or a real distance of 2.5 meters can be estimated as 1.5 meters (a false positive). The reliability of the collected data will certainly have to be software corrected by unproven heuristics. Bluetooth 5.1 will improve the reliability of the data; however, as it only came out in 2019, we will not see it adopted by mobile phone vendors until sometime in 2020/21. Most devices operate with the noisy and inaccurate Bluetooth LE as I write this.
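To make the distance estimation problem concrete, here is a minimal sketch of the log-distance path-loss model that proximity apps typically rely on. The 'measured power' (the RSSI expected at one meter) and the environment exponent n are assumptions that vary per chipset and per room, which is exactly where the error creeps in; this is an illustration, not the formula Smittestopp uses.

rssi=-72             # dBm reported by the radio for a nearby device
measured_power=-59   # assumed RSSI at 1 meter for this particular chipset
n=2.4                # assumed path-loss exponent (~2 in free space, 2-4 indoors)
# distance ~ 10^((measured_power - rssi) / (10 * n))
awk -v r="$rssi" -v mp="$measured_power" -v n="$n" \
    'BEGIN { printf "estimated distance: %.1f m\n", 10^((mp - r)/(10*n)) }'

Change n from 2.4 to 2.0 and the very same -72 dBm reading maps to roughly 4.5 meters instead of 3.5. That sensitivity to assumed constants is precisely why the collected data needs heuristic correction.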

My second objection concerns the cyber security aspects of having Bluetooth LE broadcast device identifiers in the open all the time, exchange data, and do all of this over an extended transmission range. Among the various things advertised in the open by a Covid-19 tracking app (the Norwegian "Smittestopp" is no exception) is a unique device identifier (UUID). The idea here is to be able to identify you to the rest of the devices in proximity and have your phone say "Hi, I am here! Are you there?", without revealing your real world identity (name, phone number) to the rest of the mobile phone users. This is an essential aspect of user privacy, because an adversary can use the unique identifiers of your phone (MAC address, IMEI) to get back to you. Your mobile phone provider, for example, logs the IMEI and can relate IMEIs to phone numbers. The problem is that even if the Simula/FHI app authors take all the precautions in the world to generate a good, anonymous UUID to broadcast your presence, they cannot control other vulnerabilities that exist in the protocol implementations. Such vulnerabilities exist for a wide range of mobile phone Bluetooth chipsets and mobile operating systems. Various Android and Apple Bluetooth implementations have been found vulnerable, and historically the abuse of the Bluetooth protocol in what we call bluejacking/bluesnarfing attacks has caused problems. Remember that Bluetooth LE can sometimes transmit up to 100 meters (check the specs of the protocol); it can certainly do that when it regulates transmission power to try to overcome noisy environments. That is music to the ears of an adversary who can exploit these weaknesses to execute arbitrary code on your vulnerable mobile phone. This can seriously jeopardize anonymity and mobile device integrity.
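For readers who want a feel for what an anonymous, rotating identifier can look like, here is a purely illustrative sketch in the spirit of the decentralized designs circulating in the research community: a secret that never leaves the phone is combined with a coarse time bucket, so the broadcast value changes regularly and cannot be linked back to the owner without the secret. I stress that this is an assumption for illustration only; since the Smittestopp source is closed, we simply do not know how its identifier is actually generated, which is the whole point of the objection.

# Illustrative only: NOT how Smittestopp derives its identifier (its source is closed).
device_secret=$(openssl rand -hex 32)     # stays on the device
epoch=$(( $(date +%s) / 900 ))            # 15-minute time bucket
printf '%s' "$epoch" | openssl dgst -sha256 -hmac "$device_secret" \
    | awk '{ print substr($NF, 1, 32) }'  # 128-bit value to broadcast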

So far, I hope I have established a good basis for why Bluetooth can provide unreliable data and open the door to attacks, let alone what it will do to the battery of a mobile phone. None of this is specific to the Norwegian implementation of the app. The following paragraphs elaborate on my objections to the specific aspects of the Norwegian implementation.

First of all, I have to pick on the fact that Simula/FHI have cited shortness of time as the reason for not releasing the source code for the purposes of transparency and critical review. I regret to say that this is shockingly contrary to every good research practice. A public institution/research entity that is largely funded by taxpayers' money (even if not specifically for the "Smittestopp" project) should never go down that road. You are asking people to trust you with their personal data. We (experts and practitioners) have no way to see critical issues such as how you generate the UUID and what exactly you are doing to handle the Bluetooth inaccuracies. I also need to criticize their statements that open source does not contribute to privacy. The issue here is not to contest whether closed source or open source is more suitable for safeguarding privacy. We can easily refute their arguments by pointing out that the Linux kernel, whose source code is open at large, is used successfully by mission/life critical systems. The issue is how one enables a process that lets a suitable number of experts comment on and improve the system. I have no doubt that Simula and FHI have capable people. I doubt that they and the (IMHO) non-transparently appointed panel of external experts have enough expertise to secure, in such a short time, systems whose scope and scale match the needs of this task. Have these people approved the app as safe and reliable, and if yes, how did they miss the issues pointed out here as well as many others?

Finally, the transparency and expert review measures do not concern only the source code but the entire infrastructure, including the central storage/processing activities. We are assured that all relevant measures have been taken to safeguard the data, yet no standards that these procedures/infrastructure adhere to are mentioned. I wonder why.

Thursday, March 19, 2020

Steps to increase your online/Internet usage efficiency during the coronavirus outbreak

The world is in the process of adapting to remote work/home office solutions. This is something that is going to last throughout the coronavirus outbreak, and it is a practice/paradigm that will remain long after the world tackles the Covid-19 pandemic. The worldwide telecommunications infrastructure is as critical as the health system facilities and the transportation/supply chain. We need to keep the world going, and if we are not coordinated and able to communicate and exchange information, that is not going to be good for us.

As the world is correctly trying to flatten the curve of Covid-19 cases to ease the burden on national/regional health systems, it also needs to flatten the load on the telecommunications infrastructure for the same reasons. Regional, national and international data networks are already facing traffic capacity problems. This is because a large number of wired and wireless services (mobile telephony, home broadband) operate on a contention ratio principle. In simple terms, if an infrastructure has, for example, 10000 users, the data networks are designed to serve only 1000 of them simultaneously. The 1001st simultaneous user will either experience a drop of service or degraded service quality (slow, poorly functioning connections). While the contention ratio principle is not directly applicable to more modern networks (say Fiber to the Home/Premises), it applies to a large part of the world, where copper/telephone wire is still the medium for offering broadband services (ADSL/ADSL2+). Consequently, even if you are in a country that has very good broadband and telephony capacity (South Korea, Japan, the Scandinavian countries), your online actions still impact the infrastructure of countries that are less well equipped (sadly, most other countries, including much of Europe, the US, Africa, India and China).

If these problems grow and outpace the efforts of Internet Service and Telecommunication providers to gradually increase (where possible) the capacity, ISPs will start rationing/prioritizing traffic, and this will impact everyone in a negative way. As a network and devops engineer, I already see this problem, and I would like to suggest simple steps that will make a big impact on traffic numbers and will help everyone.

1. Avoid sending/forwarding those long 'funny' viral videos on social media/WhatsApp/Viber chat: If you are at home on an ADSL connection, which is asymmetric, or on a mobile data plan in a densely populated area, you are using scarce, valuable capacity (and possibly money, eating up your account credit). Is it really important that you send the video? Could you just send a text describing it, or talk about it in a voice call when you check in on your folks/friends instead? That might be preferable.

2. Use video calls only when absolutely necessary: That might sound harsh right now, when most of us are confined at home and need human contact. If, for example, you are a psychologist and you need visual contact with your patient, do use video by all means. However, if you want to call someone about a practical issue (shopping, arranging something), do you really have to video call? If something is short, practical and can be done by voice, please think before pressing the video call button and choose the voice only option instead. This is especially true for online work meetings with a large number of participants. If you only need to listen and watch a screencast from the presenter in an online meeting, why do you really need your camera on?

3. Please throttle down your torrent/P2P traffic: If you share large files via torrent from home/work connections, consider throttling down (limiting) the traffic, both in terms of speed and in the number of torrent connections. Most P2P torrent applications allow you to do that. I know it is tempting to use the full capacity of a good fiber connection you pay for with your hard earned money. However, be considerate to others and use the capacity you have in a responsible manner.
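As an illustration, most clients expose these limits both in their GUI and on the command line. The lines below use Transmission's remote control tool as an example; the exact flags and units are client and version specific, so treat this as a sketch and check your own client's documentation.

# Example with the Transmission CLI (flags and KB/s units may differ in your client or version):
transmission-remote -n 'user:password' --uplimit 50
transmission-remote -n 'user:password' --downlimit 400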

4. Use Netflix/YouTube and other content streaming providers responsibly: Watching a movie or listening to music is an important entertainment need. However, consider doing it in the following manner:
  • Try not to segregate your movie choices (your partner watches one movie, your kids another and you a third, just because you each have your own device). It's good for parents to watch kids' movies from time to time. Try to find content that you can watch together from one device. Streaming services account for a very large share of worldwide Internet traffic. Reducing that in a responsible manner will free up network capacity and reduce server energy bills (yes, believe it or not, backend servers do consume a lot of electricity).
  • If you find that you keep watching the same videos (music, other) on YouTube again and again, do consider using tools to download them and play them from your local hard drive whenever you want, offline (a quick example follows after this list). There might of course be legal issues with doing this. However, as long as you do not use your local copy for profit (it is unlikely that you are going to host a paid gig in your home), you should be OK. Doing that in times like these means you are a responsible person, not someone who violates copyright or tries to rob YouTube of advertisement revenue. This is my own opinion, of course.
  • Please do not stream movies while you are not watching them. 
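Regarding the second bullet above (downloading something you replay often so you do not re-stream it every time), one commonly used command line tool is youtube-dl. The URL and output path below are placeholders, and as noted, make sure your use stays within the law and the service's terms:

# Save a frequently replayed video once and play it locally from then on (placeholder URL):
youtube-dl -o "$HOME/Music/%(title)s.%(ext)s" "https://www.youtube.com/watch?v=VIDEO_ID"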

5. Please avoid queuing on call center telephone lines when possible: How many times have you been annoyed by that 'elevator' music while waiting to get through to a service desk, listening to the 'Your call is important to us, all of our reps are busy, please wait while we try to help you' kind of message? Well, many call centers do offer the option of calling you back at the earliest opportunity. If they do, please use that option rather than keeping the phone connection open playing that music for an hour. You are doing both yourself and the phone infrastructure a favor.

6. Use data compression to keep the size of your files down before sending/downloading them, improve network response times and (please) do not attach large files to emails:
  • Compression is generally not applicable to photos/images, videos and music files, as these are usually already compressed or may not compress well. However, if you have plenty of large text-based documents (Word, PowerPoint, spreadsheets, PDF documents, programming language source code) that you need to send to or download from work, consider using compression tools like these to reduce their size before the transfer (see the examples after this list). This will reduce both the burden on communication networks and the transfer time.
  • For more advanced users, compression is also a technology used to improve interactive response for latency sensitive traffic. A great example of this is the SSH compression option. When it is used in conjunction with X forwarding to access remote desktop environments, it reduces bandwidth consumption and improves the response time of the remote desktop (again, see the examples after this list).
  • Finally, compressed or not, and even if a file is within the few-megabyte size limit that mail servers accept, please avoid attaching large files to emails. This overloads mail servers, and as email is critical for many business functions, I recommend using dedicated file sharing services instead of email attachments. Examples of services that offer file sharing functionality are given here.
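To make the first two bullets above concrete, here are two minimal examples; the directory, archive and host names are placeholders for illustration:

# Bundle and compress a folder of documents before sharing it:
tar czf reports.tar.gz ~/Documents/reports/
# Enable compression (-C) together with X11 forwarding (-X) on an SSH session:
ssh -C -X user@remote.example.org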
Stay safe and use the Internet efficiently and in a responsible manner!

Sunday, December 2, 2018

First sysadmin/devops impression on RHEL 8 (article 1 -- initial impressions and installation overview)

If you are a Linux techie and a fan of the Red Hat ecosystem, you might have received word that the beta version of RHEL 8 is out. Years ago, I did a popular cover story for RHEL 7. It seems natural that I should continue the tradition and do the same with RHEL 8, even while it is still being polished. Chances are that by the time the final GA/production release is out, certain performance and versioning bits will be slightly different, so be warned that this blog post will change to reflect those changes.

Let's start with a visual: the first thing you are going to see if you boot into the graphical target, which Red Hat now calls the 'Workstation' environment group (more on that later, when I describe the installation bits). I bet it will look familiar to you (excluding the wallpaper), especially if you are a Fedora 28/29 user.



Yes, it is GNOME 3.28, in particular version 3.28.2, the same as in Fedora 28. No surprises there, as the Fedora project is used as the testbed for things that eventually end up in a RHEL release. Wayland is at play by default here, although breathe easy, as you can keep X.Org with your binary NVIDIA drivers and your multi-GPU setup (which will not work with Wayland; that is not a RHEL 8 thing).

Other important component versions that mark the RHEL 8 beta release are:
  • the Linux 4.18 kernel, 4.18.0-32.el8 in particular. This is a big and welcome step considering that RHEL 7 is based around the 3.10 kernel, which is really outdated in many respects (the latest at the time of writing was 3.10.0-957.1.3.el7). As I write this, both active Fedora versions (28 and 29) have moved to the 4.19 kernel, but it seems that RHEL 8 has settled on the 4.18 version and is likely to remain with that kernel. The likely reasons are system stability and a more conservative environment when it comes to backporting features and fixes (such as the Spectre and Meltdown patches that have a substantial negative performance impact on the 4.20 kernel).
  • The default gcc version is now 8.2.1 20180905, in line with the active Fedora distros. Compare that to RHEL 7's 4.8.5 20150623, which is also showing its age. Just so that I am not misunderstood: if you run RHEL 7, you can install more modern compilers by using Red Hat's Software Collections repos (rhel-server-rhscl-7-rpms, the devtoolset-6* and devtoolset-7* yum packages). I emphasize the word *default* here, meaning what comes with the basic installation and the simplest of entitlements. 4.8.5 is really out of date; it would make sense for Red Hat to set the default to at least 4.9.4 on RHEL 7.
  • Pythonistas should feel right at home, but they should note that only Python 3 is installed by default, version 3.6.6 in particular. Python developers need to explicitly install the available python2 packages. Python 2.7.15-15 is there, but with limited support. Again, that is not Red Hat's decision, as Python 2 reaches end of life on January 1, 2020. The sooner you migrate your apps to Python 3 the better, with or without RHEL 8.

  • Perl fans should find a system wide version of 5.26.2 on RHEL 8. In comparison, RHEL 7 has Perl version 5.16.3. IMHO, if you run something production grade with Perl, you should at least be on 5.24.x these days to get the best performance and functionality. 
  • What you used to do with yum can now be done with dnf. That should not be news to you, especially if you have been following the Fedora releases. The introduction of the dnf tool has to do with important changes in the way software packages are tagged, installed and used (keep reading).

A few words about installing RHEL 8 now, as there are some notable changes there. RHEL 8 seems to organize software content by means of two software repositories:
  • The BaseOS repo: This includes RPM based packages for the core functionality of the operating system that can be searched, installed/deployed with dnf in pretty much the same way one used to do it with yum in RHEL 7. 
  • The Appstream repo: This includes utilities to run real world workloads (for example databases, web servers, runtime environments) that can be organized either as RPM packages (like in the BaseOS repo) OR as multi-versioned collections (called streams) organized in modules. Modules are RPM extensions and their streams should allow you to choose among different versions of the package.
The concept of Application Streaming should give you the ability to have a module (say X) that offers you the Y and Z versions (streams) of a webserver. If Y is the production and Z the development version of that webserver, the Appstream repo should give you the ability to install X:Y on production systems and X:Z on your development cluster, all from one repo with a single command. You cannot install both versions in parallel on a system (unless you run your webservers in containers), but you should be able to install and run a specific version at a time.
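In practice, the module/stream machinery is driven through dnf. As a rough sketch (the module and stream names below are examples; what is actually available depends on your entitlements and may still change before GA), it looks like this:

dnf module list postgresql          # list the streams a module offers, e.g. 10 and 9.6
dnf module install postgresql:10    # install a specific stream on this system
dnf module list --enabled           # confirm which module streams are active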

If you are thinking that someone is trying to re-invent the wheel, you are probably right. You could previously achieve the same functionality on RHEL 7 and other platforms with the Software Collections and you could also deploy things like Environment Modules to achieve the same result, albeit at a slightly higher complexity. The idea is to perform everything here from specific repos and via your package manager. Software collections require more repos and they modify your Shell environment in ways that can create complex issues. Well, I am not trying to convince you to use one or the other here. You will be the judge of what works best for you.

There will be an additional article exploring Application Streaming in more depth. For now, this article concludes with an overview of the RHEL 8 installation. I am going to outline the steps of installing a virtual machine hosted guest instance. My host operating system is Fedora 28 with its stock KVM/QEMU components. I dedicated 4 vCPUs, 4 GB of RAM, a functioning NAT enabled virtual NIC (to ensure that I can reach Red Hat's subscription management infrastructure) and about 20 GB of a VirtIO disk for my qcow2 image.
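If you prefer defining that guest from the shell rather than through virt-manager, a rough virt-install equivalent is sketched below; the name, ISO path and os-variant are my own placeholders, so adjust them to your setup:

virt-install --name rhel8-beta --vcpus 4 --memory 4096 \
  --disk size=20,format=qcow2,bus=virtio \
  --network network=default \
  --cdrom /var/lib/libvirt/images/rhel-8.0-beta-1-x86_64-dvd.iso \
  --os-variant rhel7.5   # a close-enough profile if your osinfo database does not yet know RHEL 8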

There are many ways to install a RHEL 8 instance, and you should start with Red Hat's Customer Portal. The one I describe here is the Anaconda graphical installer from the Binary DVD images. You will need an account and an active subscription (which you can obtain by request if you have a portal account). This will let you download the beta distro in a number of ways, as shown below.



I chose to download the 8.0 Beta Binary DVD, although the KVM Guest Image would have worked equally well (I wanted a complete set on a DVD image).

After verifying the SHA-256 checksum of the downloaded image, I proceeded to install my guest and was greeted by the first installation screen, where I chose the installation language.
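For reference, verifying the image from the shell is a one-liner; the file name below is a placeholder for whatever image you actually downloaded, and the expected value comes from the download page:

# Compute the checksum and compare it with the value published on the Customer Portal:
sha256sum rhel-8.0-beta-1-x86_64-dvd.iso
# or, if you saved the published line ("<checksum>  <file name>") to a file:
sha256sum -c rhel-8.0-beta-1-x86_64-dvd.iso.CHECKSUM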


The main 'installation summary' screen will feel very familiar to those of you who have recently installed a Fedora distro, although a couple of options ('SECURITY POLICY' and 'System Purpose') seem new.


The next step was to choose and test my keyboard layouts. I chose Nordic (Norwegian), English and Greek layouts and they seemed to work OK.


I *would* suggest that you set your 'Time & Date' settings next, but this is not a good idea. This is additional feedback I would like to pass on to the Red Hat team. You see, if you go to the 'Time & Date' settings, choose your time zone and attempt to turn on the Network Time Protocol (NTP) by clicking the ON/OFF 'Network Time' button, the button will refuse to stay in the 'ON' state.


The seasoned sysadmin/developer might figure out that this is because the NTP server is not reachable: although I had a perfectly good virtual NIC standing by, it was not enabled by default. The correct order is therefore to jump first to the 'Network & Host Name' settings, enable the NIC and make sure you are online.


I could now navigate back to the 'Time & Date' settings and verify that NTP was on (the 'Network Time' button set to 'ON'). Ordering is important here. I feel that turning the configured NIC on by default, or alternatively displaying some kind of error message (like 'Cannot turn Network Time on because your NIC is inactive') when the NIC is turned off, would result in a smoother user experience for an enterprise operating system.


Moving on to the next item of interest, the 'Software Selection' settings allow you to customize what will be installed (you can always modify this after installation). The distinction between 'Server' and 'Workstation' in the Base Environment list is not new. If you want something customized that combines aspects of both, your mileage may vary. I would choose 'Server' if you do not want a graphical environment, or the 'Workstation' option (my choice for the demo described here) for a GNOME graphical environment. As explained, you can always add/remove things after the initial installation.


The 'Installation Destination' setting offers no surprises. Here, you can choose your installation drive and optionally encrypt your partitions. Nothing new here.


What's new in RHEL 8 are the next couple of screens. In the 'SECURITY POLICY' screen, one can choose between two policies to harden the system. These policies ensure that components that have to do with firewalls, audit data and other OS settings are configured in a way that adheres to strict standard rules, to maximize your security. You should always check with your resident Information Security Officer, but as a rule of thumb, if you run the system in a bank or your system is involved in processing credit card data, the PCI-DSS v3 baseline policy is a good one to choose. Alternatively, you can select the OSPP protection profile for general purpose operating systems.


Finally, the 'System Purpose' screen lets you categorize the role, SLA and usage of the system. I am not clear on how Red Hat will use these settings as part of their support and system inventory processes; suffice it to say that collecting these data can help them dedicate their resources more efficiently in a support case.


Hit the 'Begin Installation' button on the installation summary screen and, while the installer is progressing, you can set the root account password and create a user account. Eventually, when you reboot, you should be able to see the login screen of the graphical target.

We are not done yet. The system is installed, but it has not been registered with a subscription. To do that, you will need to become root, ensure you have Internet access and then type the following two commands in the shell:

subscription-manager register --username YOUR_USERNAME --password YOUR_PASSWORD

subscription-manager attach --auto

The first command will register the system to the Red Hat Subscription Management platform (you obviously need to replace YOUR_USERNAME and YOUR_PASSWORD with your own account credentials). The second command will ensure that your system will attach to the beta entitlement. When you are done, here's how it should look on the Subscription Management Portal (uuid, username and Serial Number removed):


That's it, the system is now ready for use. Stay tuned for more RHEL 8 tests and analysis!