This is the 5th part of our blog series "Things that security auditors will nag about and why you shouldn't ignore them". In these articles, Nixu's security consultants explain issues that often come up when assessing the security of web applications, platforms and other computer systems.
Easter eggs and salami attacks – what has your code eaten?
As Easter is approaching, this is the perfect time to remind you about Easter eggs, salami attacks, backdoors, and other unauthorized or malicious code that your code may have eaten. Easter eggs are meant to be entertaining extra features, but other unauthorized features may steal information or cause malfunctions. Ensuring that the application does not contain unwanted or malicious code is also required by the OWASP Application Security Verification Standard. Let's take a look at the different types of unauthorized code and at the options you have for finding it, blocking it, and monitoring for malicious behavior.
Different types of unauthorized code
Easter eggs: Easter eggs are hidden or extra features in a game or another application. An example of an Easter egg is the T-Rex game that you can play in Google Chrome when the internet connection is down. Easter eggs can be entertaining, but in essence, anything that your application does not require should not be there.
Salami attack: A salami attack is a series of smaller attacks that together result in a large-scale attack. For example, slicing fractions of cents from each transaction wouldn't show up in the books because the sums are rounded, but after billions of transactions, you can steal a considerable amount.
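To make the arithmetic concrete, here is a contrived Python sketch; the transaction amount and volume are made up, but they show how sub-cent slivers hidden by rounding add up:

```python
from decimal import Decimal, ROUND_FLOOR

# Contrived illustration of salami slicing: round each amount down to
# whole cents and divert the leftover fraction of a cent.
amount = Decimal("19.4375")   # hypothetical transaction amount
rounded = amount.quantize(Decimal("0.01"), rounding=ROUND_FLOOR)
sliver = amount - rounded     # 0.0075 -- invisible in any single transaction

print(sliver * 1_000_000)     # 7500.0000 after a million transactions
```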
In addition to the 'edible' extra code, there are a few other types of malicious additions that may make your application phone home:
Backdoor: A backdoor is a method that allows bypassing the usual authentication or security measures to get access to a network, computer, or application. A backdoor can be hidden in the program or firmware, or it can be a separate program that opens or allows remote connections. A backdoor can also be related to getting access to encrypted data.
Insecure debugging features: A debugging feature that allows developers or system operators to investigate application behavior can become a backdoor if it does not have strong enough authentication or contains a vulnerability. The feature design might be insecure, or the feature may be accidentally left enabled in production releases of the software. Another example of an insecure debugging feature is sending extensive or inadequately protected device analytics to the developers of the system.
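As a minimal sketch, assuming a Python web service built with Flask (the endpoint path and flag name are hypothetical), one way to keep a debug endpoint from becoming a backdoor is to gate it behind an explicit flag that production builds never set:

```python
import os
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical developer endpoint that dumps internal state.
@app.route("/debug/state")
def debug_state():
    # Refuse to exist unless debug endpoints were explicitly enabled;
    # production images should never set this flag.
    if os.environ.get("APP_DEBUG_ENDPOINTS") != "1":
        abort(404)
    return jsonify({"queue_depth": 0, "cache_entries": 0})  # placeholder data
```

A flag alone is not authentication, of course: even when the endpoint is enabled, it should still require strong authentication.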
Rootkit: A rootkit is a program that attempts to stay hidden on a computer. Its purpose can be to act as a backdoor, control your computer, or steal information, such as credit card numbers or passwords, and send it out of the system. Rootkits can hide in the firmware, bootloader, memory, or a kernel module of a computer, or in an application that seems legitimate.
Logic bomb: A logic bomb is an intentionally written piece of code that triggers malicious actions when certain conditions are met. The malicious activity can range from deleting files and wiping disks to causing device malfunctions.
Time bomb: When the malicious activity of a program is triggered by a specific date or time, it's called a time bomb.
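To show what reviewers should look for, here is a deliberately defanged Python sketch of a time bomb buried in routine-looking code; all the function names, paths, and the trigger date are made up:

```python
import datetime

def archive(path: str) -> None:
    print(f"archiving {path}")        # stand-in for real maintenance work

def wipe_all(paths: list[str]) -> None:
    print(f"would delete {paths}")    # defanged stand-in for the payload

def nightly_cleanup(paths: list[str]) -> None:
    for path in paths:
        archive(path)                 # looks like routine maintenance...
    # ...but this buried date check is the time bomb: once the trigger
    # date passes, the "cleanup" silently turns destructive.
    if datetime.date.today() >= datetime.date(2025, 4, 1):
        wipe_all(paths)

nightly_cleanup(["/var/app/logs"])    # hypothetical path
```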
Excessive permissions: A mobile application may ask for excessive permissions, such as access to the microphone or the camera, or collect an excessive amount of details about the user.
Open-source libraries may contain hidden malicious code
Malicious code is rare, but developers can still come across it. In recent years, so-called typosquatting, naming malicious libraries almost identically to legitimate ones, has been a problem, especially in the npm and PyPI repositories. These malicious packages, which rely on typing mistakes or on developers not double-checking the library name, may include a backdoor or steal credentials or other confidential information.
The repositories you use can also be breached, and there have been cases where Docker images contained malware.
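One mitigation, sketched below with pip's hash-checking mode, is to pin exact versions and their expected hashes so that a typosquatted or tampered package fails to install; the package version here is arbitrary and the hash is a placeholder, not a real digest:

```
# requirements.txt -- pin the exact version and its expected hash
# (the hash below is a placeholder, not a real digest)
requests==2.31.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with pip install --require-hashes -r requirements.txt then rejects any package whose digest does not match.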
How to find unauthorized code?
Unfortunately, finding unauthorized code can be tricky. You can try several of the following methods:
- Use static code analysis. Static code analysis tools may help you spot certain types of unauthorized code and support source code review, but the analyzers will probably miss many things.
- Review code commits and changes. Thoroughly reviewing source code with multiple pairs of eyes can help you spot oddities. Especially with a limited number of developers, you may be able to detect suspicious or unauthorized commits. With third-party libraries, it may be fishy if a particular part of the code has been modified for no apparent reason. If you didn't download the code from the original repository, you might be able to spot differences by comparing it to the original code. But of course, if the owner of the codebase is deliberately trying to include something malicious, it can be hidden so well that you won't notice.
- Search for dynamic loading of code. Malicious code may be loaded dynamically at runtime to avoid detection. The problem with dynamic code loading is that if the origin is not trusted, or if there's a data breach and you don't verify the code's authenticity, the code you are loading could be tampered with and contain malicious functionality. A sketch of verifying authenticity before loading follows this list.
- Search for obfuscated code. Obfuscation is a typical sign of an attempt to hide malicious code, especially in web server code. However, sometimes the unauthorized code snippet is hidden in plain sight. A simple heuristic scan for such patterns is also sketched after this list.
- Run the program in a sandbox. If you have an application with suspected malicious content in an executable form, you can try to run it in a sandbox. The sandbox observes the application's behavior: does it start other processes, attempt to connect to other networks, modify the registry, create files, and so on. Be aware that clever malware can detect that it is running in a sandbox and do nothing suspicious.
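For the dynamic loading case, here is a minimal Python sketch of verifying code authenticity before it is ever executed; the plugin URL and the pinned digest are hypothetical placeholders:

```python
import hashlib
import urllib.request

PLUGIN_URL = "https://plugins.example.com/report.py"  # hypothetical source
EXPECTED_SHA256 = "0" * 64                            # placeholder pin set at build time

def load_plugin() -> str:
    code = urllib.request.urlopen(PLUGIN_URL).read()
    digest = hashlib.sha256(code).hexdigest()
    # A tampered or substituted download is rejected instead of executed.
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"plugin hash mismatch: {digest}")
    return code.decode("utf-8")  # only verified code reaches the caller
```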
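And for spotting obfuscation, a crude heuristic scan in Python: it flags constructs often used to hide code, but it will miss careful attackers and flag some legitimate code, so treat the hits only as candidates for manual review:

```python
import re
import sys
from pathlib import Path

# Patterns that often accompany hidden or obfuscated code.
SUSPICIOUS = [
    re.compile(r"\beval\s*\("),
    re.compile(r"\bexec\s*\("),
    re.compile(r"base64\.b64decode"),
    re.compile(r"[A-Za-z0-9+/]{200,}={0,2}"),  # long base64-looking blob
]

for path in Path(sys.argv[1]).rglob("*.py"):
    text = path.read_text(errors="ignore")
    for pattern in SUSPICIOUS:
        if pattern.search(text):
            print(f"{path}: matches {pattern.pattern}")
```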
Sometimes finding the malicious code is nearly impossible. How easily code can be hidden depends on the programming language. It may be pretty easy to hide malicious or vulnerable behavior in plain sight in languages like C. For example, in 2003, there was an attempt to add a backdoor to the Linux kernel. The malicious code was just two lines and looked like an error check, but it was spotted because it had been added outside the usual approval process. In 2016, a backdoor pretending to be a legitimate core file was found in the Joomla content management system. To a casual reader, the code looked normal.
If you can't find it, block it
Likely, you won't be able to detect all unauthorized code. That's why the next best thing is to prevent unwanted behavior. If an application invokes operating system commands, you can whitelist the allowed commands, as shown below. If the program or the server needs to initiate network connections, you can use a firewall to allow only specific domain names, IP addresses, and network protocols. Whitelisting the known good is better than blocking the known bad.
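As a minimal sketch of whitelisting operating system commands in Python (the command table is hypothetical), reject anything that is not explicitly on the list before it ever reaches the shell:

```python
import subprocess

# Only these exact binaries and base arguments may ever be invoked.
ALLOWED = {
    "backup": ["/usr/bin/tar", "--create", "--file"],  # hypothetical entries
    "ping":   ["/usr/bin/ping", "-c", "1"],
}

def run(name: str, *args: str) -> str:
    if name not in ALLOWED:
        raise PermissionError(f"command not allowed: {name}")
    # List form avoids shell interpretation; arguments should still be validated.
    result = subprocess.run(ALLOWED[name] + list(args),
                            capture_output=True, text=True, check=True)
    return result.stdout
```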
You can even try to prevent any malicious components from creeping into your codebase. For example, to prevent the build process from accidentally downloading maliciously modified libraries directly from the internet, you can use a restricted company repository where the components have been more thoroughly checked, and the source repositories are trusted. Of course, this can complicate things if you are creating new functionality and need to include new libraries.
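With pip, for example, forcing installs to go through the vetted company repository can be as simple as the following pip.conf entry; the index URL here is a made-up internal address:

```
# pip.conf (pip.ini on Windows) -- route installs through the company mirror
[global]
index-url = https://pypi.internal.example.com/simple/
```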
If you can't block it, monitor for it
Blocking is not always a feasible solution, or there might be gaps, so you should also monitor host and network activity for abnormal or malicious behavior. Enabling logging and collecting the logs from all hosts and applications in a centralized place is the first thing to do. Collecting the logs is not enough, though: you also need to monitor them by creating alerts, for example, about specific error messages, a large number of failed logins, or login activity at unusual times. It is also useful to monitor network traffic, as a malicious application might be attempting to call home. You can find more tips about detecting suspicious behavior in my previous blog post, Things that security auditors will nag about, part 4: Will you detect a breach? If you need to monitor a large number of hosts and networks, using a dedicated Security Operations Center (SOC) is helpful.
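As a toy illustration of turning collected logs into an alert, here is a Python sketch that counts failed SSH logins per source address; the threshold is arbitrary and the parsing naively assumes the standard OpenSSH log format, and a real deployment would implement the same logic as a SIEM rule:

```python
from collections import Counter

THRESHOLD = 10  # arbitrary alerting threshold

def failed_login_alerts(log_lines: list[str]) -> list[str]:
    failures = Counter()
    for line in log_lines:
        # OpenSSH logs e.g. "... Failed password for root from 10.0.0.5 port 22 ssh2"
        if "Failed password" in line:
            failures[line.split()[-4]] += 1  # naive parse of the source address
    return [ip for ip, count in failures.items() if count > THRESHOLD]
```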
Runtime application self-protection (RASP) tools, which sit somewhere between the monitoring and blocking approaches, can also come in handy. A RASP sits inside the application or its runtime environment and analyzes traffic and user behavior. In case of an attack, it can block the execution of specific requests or even virtually patch the application.
Happy Easter egg hunting! Hopefully, you will find chocolate instead of snippets of code. Want to keep track of what's happening in cybersecurity? Sign up for the Nixu Newsletter.