Official government COVID-19 apps come with security threats

COVID-19 is one of the worst public health crises humanity has faced since the 1918 flu pandemic.

Governments around the world launched their own mobile apps to help citizens track symptoms and virus infections. However, security researchers at ZeroFOX Alpha Team uncovered privacy concerns and security vulnerabilities in these apps, including backdoors.

The Iranian government released an Android app called AC19 on the Iranian app store known as CafeBazaar. The app, released by the Ministry of Health, claimed it could detect whether people were infected with the virus. In practice, it appeared designed to exploit the confusion and fear gripping many parts of Iran over COVID-19 to boost Tehran's surveillance capabilities.

When Iranian users downloaded the app, they were prompted to verify their phone numbers, despite the fact that the government already has access to all phone numbers through its control of the country's cell providers. Once users provided their numbers, they were prompted to grant the app permission to send precise location data to government servers.

In addition, threat actors created a copycat app called CoronaApp that is distributed to Iranian citizens as a direct download rather than via the Google Play Store. As a result, the app is not subject to the normal vetting process that might protect users from malicious intent. Because many citizens in Iran cannot access the official Google Play store due to sanctions, they are more likely to download such unvetted apps.

Once installed, CoronaApp requests permission to access the user's location, camera, internet data and system information, and to write to external storage. It is this particular combination of permissions that points to the developer's intent to harvest sensitive user information.
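For illustration only, and not taken from CoronaApp itself, a runtime request for that kind of permission set on Android looks roughly like this:

```kotlin
import android.Manifest
import android.app.Activity
import androidx.core.app.ActivityCompat

// Illustrative only: roughly what requesting such a broad permission set looks
// like on Android. This is not CoronaApp's actual code; the INTERNET permission
// is granted at install time and is not part of a runtime request.
fun requestBroadPermissions(activity: Activity) {
    val permissions = arrayOf(
        Manifest.permission.ACCESS_FINE_LOCATION,   // precise location
        Manifest.permission.CAMERA,                 // camera
        Manifest.permission.READ_PHONE_STATE,       // device/system information
        Manifest.permission.WRITE_EXTERNAL_STORAGE  // write to external storage
    )
    ActivityCompat.requestPermissions(activity, permissions, 0)
}
```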

Separately, in March 2020 the Colombian government released a mobile app called CoronApp-Colombia on the Google Play store to help people track potential COVID-19 symptoms. However, ZeroFOX researchers discovered that the app contained vulnerabilities in how it communicates over HTTP, affecting the privacy of more than 100,000 users.

As of March 25, version 1.2.9 of the app communicated insecurely with its API server throughout the app workflow: it used HTTP instead of HTTPS or another, more secure protocol for server communication. By relaying users' personal data over these insecure calls, CoronApp-Colombia could put sensitive health and personal information at risk of compromise.
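The remedy is conceptually straightforward: serve the API over TLS and have the app call it via https://. A minimal sketch using OkHttp, with a hypothetical endpoint and payload:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

// Minimal sketch: the endpoint and payload are hypothetical. The essential fix
// is to serve the API over TLS and call it via https:// so user data is
// encrypted in transit instead of being sent in the clear.
val client = OkHttpClient()

fun submitSymptoms(json: String) {
    val body = json.toRequestBody("application/json".toMediaType())
    val request = Request.Builder()
        .url("https://api.example.gov.co/symptoms") // https, not http
        .post(body)
        .build()
    client.newCall(request).execute().use { response ->
        println("Server responded with HTTP ${response.code}")
    }
}
```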

But there is a shred of good news. The Colombian CERT fixed the vulnerability three days after ZeroFOX Alpha Team reported it, listed as CVE-2018-11504 on MITRE, to them on March 26.

Last but not least, the Italian government created region-specific apps for tracking coronavirus symptoms, as Italy is among the countries hit hardest by the COVID-19 pandemic.

With a growing number of government-sanctioned apps, users are less certain which COVID-19 mobile apps are legitimate.

Threat actors are taking advantage of this confusion, and of the inconsistency in how the apps are released and made available, to launch malicious copycats that contain backdoors.

ZeroFOX Alpha Team found 12 Android application packages related to the attack campaign; 11 of them used various methods of obfuscation.

The first app analysed by Alpha Team used a signing certificate whose signer was "Raven" with a location of Baltimore, likely a reference to the Baltimore Ravens NFL team. Every other app analysed by the team used the same signing certificate and issuer details.

The backdoor is activated when the app receives a BOOT_COMPLETED event as the device boots, or when the app is opened.
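As an illustration of the generic Android mechanism being described here, not the actual backdoor code, persistence at boot is typically achieved with a broadcast receiver along these lines:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// Illustrative sketch of the generic Android persistence mechanism described
// above, not the malware's actual code. An app that holds the
// RECEIVE_BOOT_COMPLETED permission and registers this receiver in its manifest
// gets code execution every time the device finishes booting.
class BootReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_BOOT_COMPLETED) {
            // A malicious app would start its background component here.
            println("BOOT_COMPLETED received; app code now running")
        }
    }
}
```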

The researchers advised governments that have released COVID-19 apps, or are considering new ones, to keep both the download location and the appearance of the apps consistent, to help stop the spread of malicious doppelgängers. Exercising due diligence during development will also help secure the apps and avoid putting citizens at further privacy risk.

Are you using Zoom? Your personal data may be leaked and you could be vulnerable to hacking

Zoom is dealing with one hot potato after another. The company only recently got out of a situation in which its iOS app was found to be secretly sharing data with Facebook, which it resolved by updating the app.

Now, they are dealing with another problem due to how the software’s Company Directory feature works.

Zoom groups users who sign up with the same company email domain to make it easier to search for and call colleagues. But when users sign up with a personal email address from a smaller provider, Zoom treats everyone on that domain as colleagues at the same organisation, so thousands of strangers can end up in a user's contact list. Through this, you can get insight into all subscribed users of that provider, including their full name, email address, profile picture and status.

However, there is a little bit of good news. Users of standard email providers such as Gmail, Hotmail and Yahoo are not affected, as Zoom has blacklisted those domains. For other, non-standard domains, the company requires users to submit a request to have them blacklisted.

But that is not the end of the bad news for the company.

It has also been found that Zoom converts any URL-like text in chat into a clickable hyperlink. Cybercriminals could abuse this by sending you a Universal Naming Convention (UNC) path instead of a web link.

UNC paths, which take the form \\server\share\file, are typically used for networking and file sharing. If an unsuspecting user clicks such a link sent via Zoom, Windows will try to connect to the remote host using the Server Message Block (SMB) network file-sharing protocol, and by default it sends the user's login name and NTLM password hash to that host. The NTLM hash can often be cracked, putting the user's account and computer at risk.

An opinion on improving voice user interfaces while ensuring privacy

Voice user interfaces are going to be one of the main ways we interact with our devices as we go about our daily lives. Voice is simply very intuitive for us because we communicate primarily by speaking, with text and images to complement.

But there are still various problems to be worked on to improve the overall experience. One of them is how the AI behind a voice user interface can interact with us more naturally, the way we interact with fellow human beings.

This article by Cheryl Platz got me thinking about that. It also touched on privacy and why it is one of the factors that make it difficult for the current generation of AIs to speak more naturally and understand context when we speak, unless, of course, companies don't give a shit about our privacy and start collecting even more data.

In this article, I am going to share what I think could help improve these AIs while ensuring user privacy.

Current Implementations and Limitations

To be better at understanding us and responding in ways most useful to us, an AI needs processing power, a good neural network that allows it to learn on its own, and a database to store and retrieve whatever it has learnt.

The cloud is the easiest way for an AI to gain access to a huge amount of processing power and a large enough database. Companies like Amazon and Microsoft offer cloud computing and storage services at very low cost via their AWS and Azure platforms respectively, and even Google offers such services via Compute Engine.

The problem with the cloud is a reduced level of confidence where privacy is involved. Anything you store up there is vulnerable to wholesale retrieval through security flaws or misconfigurations. Companies could encrypt that data end to end to help protect users' privacy, but the master keys are held by those same companies, so they could decrypt the data whenever they want.

Alternatively, you could do what Apple did with Siri: store data locally and use Differential Privacy to help ensure anonymity. But that reduces the AI's capabilities because it doesn't have access to a sufficient amount of personal data. Siri also runs on devices like the Apple Watch, iPhone and iPad, which is a problem when it comes to processing and compute capability, and to having enough information to understand the user.

Although those devices have more processing power than the room-sized mainframes of decades ago, it is still not enough, in terms of energy efficiency and capability, to handle the highly complex neural networks needed for a better voice user interface experience.

Apple did try to change that with its A11 Bionic SoC, which includes a neural engine. Companies like Qualcomm, Imagination Technologies and even NVIDIA are also contributing to energy-efficient local processing power for AI through their respective CPU and GPU products.

Possible Solution

The work on hardware by these companies should continue, so that there will be ever more powerful and energy-efficient processors for AI to use.

In addition to that, what we need is a standard wireless protocol (maybe Bluetooth-based) that lets the AI on our devices, regardless of vendor, talk to one another when they are near each other and on our home network. This way, the AI on each of those devices can share information and perform distributed computing, improving its accuracy and overall understanding of the user so it can respond accordingly.
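To make the idea a little more concrete, here is a purely hypothetical sketch of what a single message in such a protocol could carry; none of these names or fields exist in any real protocol today:

```kotlin
// Hypothetical sketch of a message two on-device assistants might exchange in
// the proposed protocol. All names and fields are made up for illustration.
data class AssistantMessage(
    val senderDeviceId: String,        // which device the AI is running on
    val encryptedPayload: ByteArray,   // shared context, encrypted with the user's keys
    val taskHint: String               // e.g. "share-context" or "distribute-inference"
)
```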

A common software kernel is also necessary to give different neural-network implementations a standardised way of doing distributed computing efficiently and effectively.

So now, imagine Siri talking to Alexa, Google Assistant or even Cortana via this protocol and vice versa.

Taking privacy into account, information exchanged via this protocol should be encrypted by default with keys owned only by the user. Any data created or stored should reside only on the device, also encrypted, and nowhere else. Taking a page out of Apple's playbook, the keys should be generated and kept in some kind of hardware-based "Secure Enclave".
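On Android, something close to this already exists: a key generated inside the Android Keystore can be hardware-backed and never leaves the device. A minimal sketch, with a placeholder key alias and plaintext:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Minimal sketch: generate an AES key inside the Android Keystore (hardware-backed
// on devices with a secure element) so the key material never leaves the device.
// The alias and the data being encrypted are placeholders.
fun generateDeviceKey(): SecretKey {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )
    keyGenerator.init(
        KeyGenParameterSpec.Builder(
            "assistant-local-key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    return keyGenerator.generateKey()
}

fun encryptLocally(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key)
    return cipher.iv to cipher.doFinal(plaintext) // keep the IV alongside the ciphertext
}
```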

To further improve the neural network, Differential Privacy should be applied to any query or information the AI sends to the cloud for processing.
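As a rough sketch of what that could mean in practice, the device can add Laplace noise, calibrated to the query's sensitivity and a privacy budget epsilon, to a statistic before it ever leaves the device. All numbers below are illustrative:

```kotlin
import kotlin.math.abs
import kotlin.math.ln
import kotlin.math.sign
import kotlin.random.Random

// Rough sketch of the differential-privacy idea: perturb a value with Laplace
// noise scaled to sensitivity / epsilon before sending it off-device.
fun laplaceNoise(scale: Double): Double {
    val u = Random.nextDouble() - 0.5              // uniform in (-0.5, 0.5)
    return -scale * sign(u) * ln(1 - 2 * abs(u))   // inverse-CDF sample of Laplace(0, scale)
}

fun privatize(value: Double, sensitivity: Double, epsilon: Double): Double =
    value + laplaceNoise(sensitivity / epsilon)

fun main() {
    // e.g. a local usage count of 42, sensitivity 1, privacy budget epsilon = 0.5
    println(privatize(42.0, sensitivity = 1.0, epsilon = 0.5))
}
```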

Conclusion

The above is really just my personal take on how the AIs powering today's voice user interfaces could be improved.

At the end, it’s really up to the companies to decide if they want to come together and improve all our lives taking into account our privacy and security.