Transducers Can Be Security Hole For Self-Driving Cars

Dr. Lance Eliot, AI Insider


When I was an undergraduate majoring in computer science and electrical engineering, I used to spend a lot of my time in the computer center working on my systems projects. We had a mid-range computer system that was quite powerful for the time period and I often operated the system in addition to writing programs on it.

One day, I had my radio with me and was turning the radio channels when I noticed a pattern to the static on one of the otherwise unused channels. Listening more closely, I could definitely tell that it was not just pure random noise and that it was a pattern of some kind.

Was it finally a sign from the skies that outer space aliens were trying to communicate to us from far away planets?

No, turns out it wasn’t proof of aliens from outer space.

Instead, my radio was picking up the electromagnetic waves being emitted by the mid-range computer system.

It then dawned on me that I could potentially get the computer to whistle a tune (so to speak), by writing a program that would use the memory and processor of the computer in such a fashion that it would produce certain patterns and tones on the radio channel.
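A loose sketch of the idea in modern terms (purely hypothetical; the real trick was done with tight machine-language loops, and an interpreted language's timing jitter would smear the tones badly): alternate bursts of busy computation with idle periods at an audio-rate frequency, so the machine's electromagnetic leakage is amplitude-modulated into a tone that a nearby AM radio can render.

```python
import time

def emit_tone(freq_hz, duration_s):
    """Alternate busy-spinning (EM noise 'on') with sleeping (EM noise
    'off') at the tone's frequency.  A nearby AM radio tuned to the
    machine's leakage frequency may render this as an audible tone.
    Illustrative only: interpreter and scheduler jitter make this far
    less clean than the original assembly-loop versions of the trick."""
    half_period = 1.0 / (2.0 * freq_hz)
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        spin_until = time.monotonic() + half_period
        while time.monotonic() < spin_until:   # busy half-cycle
            pass
        time.sleep(half_period)                # quiet half-cycle

emit_tone(440.0, 0.05)   # a brief A4 "note", as heard on the radio
```

The point is not this particular code but the principle: the program never touches any audio hardware, yet it still produces a physical signal, because computation itself has physical side effects.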

The sensors in the radio consisted of transducers, which the American National Standards Institute (ANSI) officially defines as a device that provides a usable output in response to a measurand.

For many years, transduction was considered the conversion of a physical measurand into mechanical energy, such as operating a kinematic control.

Once solid-state electronics came along, most of today’s transducers or sensors serve to transduce physical phenomena into electrical output.

About The Nature Of Transducers

To provide some clarity, let’s define a sensor element or transducer element as a transduction mechanism that will convert one form of energy into another form, while the actual sensor or transducer itself consists of its physical packaging and its external connections.

A sensor system consists of various sensors and transducers that are made up of sensor elements and transducer elements, and it ultimately serves some stated purpose. A digital camera, for example, is a type of sensor system, packaged with a lens and a housing; it consists of various sensors and transducers that capture light, translate that physical phenomenon into electrical signals, and convert those signals into digital bits (to which we might assign the values zero and one).

For any kind of sensor or transducer system, we would want to consider what accuracy levels it provides, how it deals with noise, what its operating range is, how much distortion it produces, and so on. A passive sensor or transducer system is one that receives energy and generates outputs from the input it collects. An active sensor or transducer system, such as a radar, LIDAR, or ultrasonic unit, emits energy and then uses the returning energy to modulate or produce its outputs.

For modern-day cars, we are increasingly adding complex sensor and transducer systems. We want our cars to detect a pedestrian standing next to the car and alert us so that we don't accidentally hit the person when we make a turn. We want a back-up camera so that we can see what's behind us as we put the car into reverse. More and more, our cars are becoming miracles of state-of-the-art sensors and transducers, able to sense the world around us and then provide that information to us or otherwise alert us to something we should be considering.

Autonomous Cars And Transducer Vulnerabilities

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic Self-Driving Car Institute, we are analyzing the vulnerabilities of the sensors and transducers that AI self-driving cars are being outfitted with. We want to figure out how these systems can be tricked or fooled, either by intent or by happenstance, and find ways to prevent or mitigate those vulnerabilities.

You might be at first puzzled about the potential vulnerabilities.

Let’s take an easy one that used to be quite popular.

Cars for a long time used a physical key in the door and in the ignition, and then began to switch to keyless entry systems. For those of you who remember when we first migrated over to keyless entry, there were some nefarious attempts to electronically fool those systems. An intruder would sit in the parking lot and wait for you to park your car. When you got out, you would naturally use your keyless fob to lock the door. The intruder would capture the radiated signal and then wait for you to go into the grocery store. Once you were out of sight, the intruder would replay that same signal to your keyless entry system and fool it into opening the door, and ultimately fool the ignition too.

Various encryption techniques and token exchanges are used to defeat this kind of heinous act.
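One common countermeasure is a rolling code: the fob and the car share a secret and a counter, so a captured transmission becomes useless once the counter has moved on. Here is a toy sketch of that idea (a hypothetical illustration, not any automaker's actual protocol; the 8-byte counter, HMAC-SHA256 tag, and acceptance window are all assumptions for the example):

```python
import hmac, hashlib, os

class RollingCodeFob:
    """Toy rolling-code key fob: each press sends a fresh counter
    plus an HMAC over it, defeating simple replay of old signals."""
    def __init__(self, key):
        self.key = key
        self.counter = 0

    def press(self):
        self.counter += 1
        msg = self.counter.to_bytes(8, "big")
        tag = hmac.new(self.key, msg, hashlib.sha256).digest()
        return msg + tag

class CarReceiver:
    def __init__(self, key, window=16):
        self.key = key
        self.last_seen = 0
        self.window = window  # tolerate presses made out of the car's range

    def unlock(self, packet):
        msg, tag = packet[:8], packet[8:]
        counter = int.from_bytes(msg, "big")
        expected = hmac.new(self.key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False                      # forged packet
        if not (self.last_seen < counter <= self.last_seen + self.window):
            return False                      # stale (replayed) or too far ahead
        self.last_seen = counter
        return True

key = os.urandom(32)
fob, car = RollingCodeFob(key), CarReceiver(key)
packet = fob.press()
print(car.unlock(packet))   # True  -- fresh code accepted
print(car.unlock(packet))   # False -- replayed code rejected
```

The captured signal in the parking-lot scenario above corresponds to the second `unlock` call: by the time the intruder replays it, the counter has already been consumed.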

Determined thieves can still potentially use a man-in-the-middle (MITM) attack against keyless entry systems, but it's pretty hard to do and not something you'd see day-to-day in just any neighborhood. The notion of exploiting the sensory or transduction system is referred to by many as a transduction attack.

A transduction attack leverages the physics of a transducer or sensor and tries to exploit its input or its output to the advantage of the attacker.

Famous Case Of The DolphinAttack

One of the most impressive general examples of this ploy was the DolphinAttack approach identified and used by researchers at Zhejiang University.

They were interested in seeing whether they could trick a voice recognition system, especially the popular ones such as Alexa, Siri, Google Now, and Cortana. Part of the goal of such attacks is to avoid having to gain direct access to the sensory or transducer system per se; in other words, you don't need to physically get at it and somehow open it up. Instead, you use whatever input channel it already accepts, and try to feed input into it in a manner that tricks it.

They wanted to provide inaudible commands to the voice recognition systems, such that humans would not know that fake or unauthorized commands were being fed in. It's like using a dog whistle that only a dog can hear and that humans cannot. The sensors and transducers of these voice recognition systems admit a wide range of acoustic frequencies into the microphone, extending beyond the range that humans can hear, and so you can sneak an inaudible sound into that microphone.

A human might say, “Alexa, tell me a joke,” while meanwhile you've fed in, at an inaudible frequency, the command “Alexa, quack like a duck.” The human never heard that command and would be surprised when Alexa suddenly started quacking.

The upper bound of human hearing is at about 20 kHz, while the microphone hardware in these voice recognition systems can still respond to ultrasonic frequencies above that limit. Keep in mind that the microphone is a transducer that converts airborne acoustic waves into electrical signals, and imperfections in that conversion can demodulate a suitably crafted ultrasonic signal back down into the audible band, where the speech recognizer then processes it as an ordinary command.
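To make the mechanism concrete, here is a small self-contained simulation (an illustrative model, not the researchers' actual code; the 30 kHz carrier, the 400 Hz stand-in tone, the square-law coefficient, and the moving-average filter are all assumptions for the sketch): an audio tone is amplitude-modulated onto an ultrasonic carrier, a crude square-law nonlinearity stands in for the microphone, and an audio-band low-pass filter recovers the tone even though nothing audible was ever transmitted.

```python
import math

FS = 192_000          # sample rate high enough to represent the carrier
CARRIER_HZ = 30_000   # inaudible carrier (above ~20 kHz human hearing)
TONE_HZ = 400         # stand-in for a voice command's audio content

def am_ultrasound(n):
    """Attacker's signal: tone amplitude-modulated onto the carrier."""
    out = []
    for i in range(n):
        t = i / FS
        baseband = 0.5 * (1 + math.cos(2 * math.pi * TONE_HZ * t))
        out.append(baseband * math.cos(2 * math.pi * CARRIER_HZ * t))
    return out

def microphone(signal, alpha=0.5):
    """Crude model of a real microphone's nonlinearity: the square-law
    term demodulates the envelope back into the audible band."""
    return [x + alpha * x * x for x in signal]

def lowpass(signal, width=96):
    """Moving-average low-pass, mimicking audio-band filtering before
    speech recognition; the 30 kHz carrier is averaged away."""
    out = []
    acc = 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= width:
            acc -= signal[i - width]
        out.append(acc / min(i + 1, width))
    return out

recovered = lowpass(microphone(am_ultrasound(FS // 10)))
# 'recovered' now contains energy at TONE_HZ, even though the
# transmitted signal had no energy below 20 kHz.
```

A perfectly linear microphone (drop the `alpha` term) would pass nothing through the low-pass filter; it is precisely the transducer's physical imperfection that the attack exploits.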

The researchers created transmitters to try out their approach.

In one case, they used an everyday smartphone as the signal source and the vector signal generator. This showcases that you don't necessarily need highly specialized and bulky equipment to pull off this attack. It can be carried out via an ordinary smartphone, which is relatively small and unobtrusive. If you took out a smartphone that had been rigged for this attack, nobody would be the wiser.

They wanted to try so-called walk-by attacks, whereby if you could get close enough to the voice recognition system, you could feed it the inaudible commands. Commands they used in the experiment included: “Call 1234567890,” “FaceTime 1234567890,” “Open,” “Open the back door,” and others. These are commands that would produce untoward actions that the person owning the voice recognition system would likely not want to happen. For example, the command “Open” could get the device to execute a more involved attack; the inaudible command gets you inside initially, enabling even worse actions. The devices attacked included iPhones, iPads, MacBooks, Windows PCs, the Amazon Echo, and more.

Generally, these attacks succeeded.

In the mix of devices, they included the Audi Q3, which has a voice recognition system for operating the car's navigation. Indeed, most of the current crop of new cars now include voice recognition systems.

For AI self-driving cars, the expectation is that the AI will conversationally interact with the human occupants and determine where to drive, how to drive there, and so on.

Imagine the concern if an interloper or intruder can trick those voice recognition systems into doing inaudible commands, and the dangers that could arise because of it.

Dealing With Transducer Attacks Aimed At Self-Driving Cars

Others have shown that transducer attacks can happen on self-driving cars in other ways.

For example, an experiment showed that it was possible to spoof Tesla's ultrasonic sensors and transducers into either incorrectly gauging the distance to an object or not even registering that an object was within the sensor's range. Now, admittedly, most of these experiments have been relatively contrived and tend to require an artificially created situation to show that it can be done, but the point is that we all need to be aware of the dangers of these kinds of transducer attacks.
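The underlying arithmetic is simple time-of-flight: the sensor halves the round-trip echo delay and multiplies by the speed of sound, so an attacker who replies with an earlier echo than the real obstacle could (or who jams the genuine echo entirely) shifts or erases the measured distance. A hypothetical sketch of that calculation, not tied to any particular vendor's sensor:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def echo_distance_m(round_trip_s):
    """Distance an ultrasonic parking sensor infers from an echo:
    the ping travels out and back, so halve the round trip."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# Genuine echo: an obstacle 2 m away returns after ~11.7 ms.
true_delay = 2 * 2.0 / SPEED_OF_SOUND_M_S

# A spoofer replies sooner than the real obstacle possibly could,
# and a sensor that trusts the first echo reports a phantom object.
spoofed_delay = 0.003

print(echo_distance_m(true_delay))     # about 2.0 m (real obstacle)
print(echo_distance_m(spoofed_delay))  # about 0.51 m (phantom obstacle)
```

The complementary attack is suppression: if the attacker drowns out the genuine echo with noise, the sensor may conclude that nothing is in range at all.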

What can be done about these transducer attacks?

First, it is incumbent upon the makers of AI self-driving cars to carefully assess what transducer attacks can occur against the sensory devices in their self-driving cars.

Some of the automakers and tech firms are just grabbing a particular sensory device and putting it into their self-driving cars, doing so for convenience's sake, or due to low cost, or other aspects, and not with an eye toward the vulnerabilities of the device. Many of them aren't even looking at the vulnerabilities because they are too busy just trying to make the sensors work with their AI and ensure that the self-driving car can perform the everyday actions needed to drive the car.

Second, the makers of the sensory devices need to be on their guard about how their devices might have vulnerabilities.

That being said, some of the device makers will say that it's up to the automaker or tech firm to ascertain how the device will be configured into their self-driving cars. In other words, the maker of the sensor waves its hands and says that it is up to the automaker to be wary. All the sensor maker does is make the sensor; how it's used and how it's protected is not on their shoulders, they often say. This kind of argument is not likely to hold much water when the day comes that a particular sensor enables a truly terrible attack, and at that point there will be a slew of finger pointing and a price to be paid, you can bet.

Third, we need to continue to have the so-called good guys (“white hats”) try to find these vulnerabilities, doing so before the bad guys (“black hats”) do.

As mentioned earlier, some say that when these vulnerabilities are discovered, the discoverer should keep a lid on it. I think we would likely agree that at least the discoverer ought to inform the sensor maker and the automaker. Beyond that, I realize you might be queasy that announcing it to a wider audience lets the bad guys exploit it. There is an ongoing debate about how best to disclose security flaws. Either way, I'd advocate that at least we should be trying to find the flaws and not be pretending they don't exist.


For some of these transduction attacks, there will be those that beforehand try to figure out the attack and determine when and where to use the attack. In other cases, the transduction attacks might be of an opportunistic nature.

This is like walking through a neighborhood and trying each front door to see if any happen to be unlocked. The crook might get “lucky” and randomly find one that is unlocked, and then exploit the situation at that moment.

Notice that transduction attacks are a form of cyberphysical security attack. They do not require loading any special software onto the device. They do not require physically touching the device. Instead, they leverage how the device itself works and exploit its own design. By improving the designs, we can hopefully remove the holes and thereby prevent transduction attacks.


Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
