Hardware strengthens the security of medical devices


The medical device industry has changed over the past decade, fueled by the explosion of the Internet of Medical Things and ever-increasing connectivity. As the complexity of the technology, supply chains, and management of these devices grows, so do security concerns. Where medical devices were traditionally standalone systems that relied on obscurity rather than deliberate security, today they are complex systems built from multiple layers of commodity hardware and software. As a result, medical devices are now exposed to the common threats targeting mainstream software libraries and operating systems such as Windows and Linux. In fact, according to the “Health Injury Report 2021,” attacks on medical devices increased by 55% in 2020.

As the threat landscape continues to grow and become more complex, the medical device industry has worked to improve its security posture. From a cybersecurity perspective, this takes the form of, for example, threat models that determine the unique risk profile of a medical device. That risk profile then feeds into the design and implementation of security controls to mitigate those risks (and to gain Food and Drug Administration approval). These security controls are usually implemented in software.

But today, new microprocessor technologies (such as secure enclaves and cryptographic acceleration) allow hardware to play a more important role in medical device security. How could moving to hardware-based security controls on these devices help?

For the past decade, medical devices often used custom operating systems or simply ran on bare metal, adding security through obscurity. But as these devices have matured, there has been a massive shift toward standard operating systems and standard communication libraries. While wild stories of attacks on medical devices may steal the headlines, in reality, vulnerabilities in these commodity software components pose the greatest threat to medical device security today.

Medical device manufacturers often focus their security efforts on locking down their proprietary software, which is important but leaves the other software layers exposed. As the industry matures, there is growing concern that security controls that exist only in software can be undone by that same software. This exposure is driving the relocation of certain software functions (and the values they protect) into hardware roots of trust, where they can be better protected and isolated. Let’s look at two examples I’ve worked with.

Inhalers first. A major problem with systems that use consumables, such as inhaler systems or laboratory test equipment, is counterfeit or refilled consumables and cartridges. Much like printers, these systems generate their revenue from the consumables (such as the inhaled medication), not from the inhaler itself. Software-level security measures were reverse-engineered, enabling both copycat and refilled cartridges. Both posed a health risk for the patient as well as a considerable financial loss for the manufacturer.

Manufacturers had to figure out how to move counterfeit and tamper protection to an immutable layer: the hardware. The solution used cryptographic keys burned into the hardware at manufacture to verify the authenticity of each cartridge, and one-way hardware counters to keep track of the doses remaining. These controls prevented a used cartridge from being refilled (since the remaining-dose counter could not be increased) and stopped the system from accepting non-genuine cartridges.
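The cartridge scheme described above can be sketched as a challenge-response authenticity check plus a decrement-only counter. The following is a simplified Python illustration, not any manufacturer's actual protocol: the class and function names are hypothetical, and in a real design the key and counter would live inside a secure element, not in application code.

```python
import hmac
import hashlib
import secrets


class CartridgeSecureElement:
    """Models a cartridge's secure element: a key burned in at
    manufacture plus a one-way (decrement-only) dose counter."""

    def __init__(self, device_key: bytes, doses: int):
        self._key = device_key   # burned into silicon; never readable
        self._doses = doses      # one-way hardware counter

    def respond(self, challenge: bytes) -> bytes:
        # Prove possession of the key without ever revealing it.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def take_dose(self) -> bool:
        # The counter can only decrease; "refilling" a cartridge
        # would require increasing it, which the hardware forbids.
        if self._doses == 0:
            return False
        self._doses -= 1
        return True


def inhaler_accepts(cartridge: CartridgeSecureElement, shared_key: bytes) -> bool:
    """Inhaler side: a random challenge defeats replayed responses."""
    challenge = secrets.token_bytes(16)
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cartridge.respond(challenge))
```

A counterfeit cartridge without the burned-in key fails the challenge, and a genuine but empty cartridge fails the counter check, which maps onto the two controls described above.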

Debugging functions
Another area is medical device debugging capabilities. Some security professionals would prefer to remove all debugging features (such as JTAG and SPI) from these devices. But in practice, the teams that support these devices through manufacturing and service rely on them for access. A good example is prescription medical devices: devices prescribed to one patient and then returned for use by another. These can include home devices such as sleep study devices, diabetes monitors, mobile EKGs, and more. After use, the device often goes back to the manufacturer for an overhaul and reset, using the debugging ports to completely reflash the system as if it were going through manufacturing again.

However, simply resetting the configuration at the software application level may overlook tampering that goes beyond the patient configuration (e.g., manipulation of boot parameters, BIOS settings, system IDs, network information, and enabled operating system services). The safer approach is to use the debug ports to reflash the device as if it were going through initial manufacture (trust nothing on the system). This process often involves provisioning fresh crypto keys, since the status of the current ones is unknown.
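That "trust nothing" refurbishment flow can be summarized in a few lines: discard all on-device state, reflash every layer from a verified golden image, and mint new keys. This is a conceptual Python sketch under assumed names (factory_refurbish, GOLDEN_IMAGE); a real process would drive a JTAG/SPI flasher, not a dictionary.

```python
import hashlib
import secrets

# Placeholder for the signed factory firmware image.
GOLDEN_IMAGE = b"factory-firmware-image"
GOLDEN_HASH = hashlib.sha256(GOLDEN_IMAGE).hexdigest()


def factory_refurbish(device: dict) -> dict:
    """Trust nothing on a returned device: wipe all state, reflash
    from the golden image, and provision fresh keys, since the old
    ones may have been exposed while the device was in the field."""
    device.clear()                                   # discard all state
    device["firmware"] = GOLDEN_IMAGE                # full reflash via debug port
    device["device_key"] = secrets.token_bytes(32)   # newly provisioned key
    # Verify the flash before the device leaves refurbishment.
    assert hashlib.sha256(device["firmware"]).hexdigest() == GOLDEN_HASH
    return device
```

The point of the sketch is that nothing from the returned device survives: tampered boot parameters or services are erased along with everything else, rather than selectively reset.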

But what if we go a step further and move the root of trust down to the hardware level, so that even a device in the hands of a malicious patient cannot be fundamentally changed, and its crypto keys cannot be manipulated or extracted? At this point, hardware roots of trust such as Trusted Platform Modules (TPMs) can help ensure that debugging ports no longer have to remain open.
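One way a TPM anchors that trust is measured boot with key sealing: each boot stage is measured into a Platform Configuration Register (PCR), and a key is released only if the PCRs match known-good values. The following is a minimal Python model of the PCR-extend and unseal idea, not an interface to a real TPM; the function names are illustrative.

```python
import hashlib


def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new_pcr = H(old_pcr || H(measurement)).
    # Order matters, so tampering with any stage changes the final value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


def boot_measurements(stages: list[bytes]) -> bytes:
    pcr = b"\x00" * 32   # PCR reset value at power-on
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr


def unseal(sealed_key: bytes, expected_pcr: bytes, actual_pcr: bytes):
    # The TPM releases the key only when the measured boot chain
    # matches the policy it was sealed against.
    return sealed_key if actual_pcr == expected_pcr else None
```

With keys sealed this way, a tampered bootloader or kernel simply never receives the device's secrets, which is what removes the need to keep debug ports open as a recovery path.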

There has been great growth in medical device security in recent years. As the field continues to grow and evolve, it will be important to push security down into the hardware and firmware tiers. To make this a reality, manufacturers and their technology partners are working together on new solutions.
