When AI Is At The Driving Controls Of Self-Driving Cars, There Are Complexities Galore

Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Which is better, a lead foot on the brakes and a light-foot on the gas, or a lead foot on the gas and a featherweight foot on the brakes?

Hard to say.

If you are trying to merge onto the freeway, you usually need to double down on the gas pedal and make sure you enter into traffic at a speed that matches the flow of traffic.

If you are driving in a busy mall parking lot, probably best to keep your foot leaning on the brakes so that you don’t hit anyone.

Novice drivers, though, aren’t quite sure how to handle the car controls.

It was somewhat comical one day to watch a teenager drive down our street as his car seemed to start and stop. One moment the accelerator was being pushed, the next moment the teenager applied the brakes.

I’m sure we’ve all had the same experience when trying to learn to drive.

Dealing With Car Driving Controls

Whenever I rent a car, which I do a lot of the time due to my work travel, I often discover during the first few minutes of driving the rental car that I am over-controlling it.

I need to initially get used to how the brakes react, how the accelerator reacts, and how the steering reacts. It doesn’t take long.

In referring to the car controls, it is simplest to focus on the brakes, the accelerator, and the steering wheel.

Those of course aren’t the only car controls you deal with.

You need to start the car.

You need to put the car into gear, perhaps using reverse to back out of your garage, and then place the car into drive to head down the street that you live on.

You could suggest that your turn indicators or blinkers are also part of your car controls.

Would you say that your headlamps are part of your car controls?

It seems a bit of a stretch. You could though argue that they are important to being able to see the road ahead, especially at nighttime.

Would the parking brake be a form of driving control?

Evolution Of Driving Controls

Focusing on just the brakes, the accelerator, and the steering wheel, let’s consider how you make use of those particular driving controls.

At a tactical level, it’s apparent that you use the brakes to slow down the car.

You use the accelerator to speed up the car.

You use the steering wheel to change the direction of the car.

Novice drivers aren’t at first sure which pedal is the brake and which is the gas, and they often get the two confused. They are also unsure whether to use their left foot, their right foot, or maybe both feet to control the pedals.

What’s interesting about the history of car controls is the evolution to what we have today.

In the United States, for example, earlier in our history we had the driving controls on the right side of the car rather than the left side. This is surprising to most people here in the U.S. What, the driving controls were the “wrong” way, some ask? Note that “wrong” is relative: those with driving controls on the left side tend to think that’s the proper placement, while those with driving controls on the right side tend to think the same of theirs.

Overall, in a kind of Darwinian process, we have landed upon a set of car controls that seems to work for us all.

We have evolved cars to a point that the everyday person can drive a car.

It has all boiled down to a pedal to make the car stop, a pedal to make the car go, and a wheel that you can twist and turn to steer the car.

That’s about as basic or fundamental as you can get.

You might find it of interest that there have been studies done about moving or changing the nature of the driving controls.

There are numerous studies about how long it takes to make use of the car controls.

Length of time to invoke car controls can be a life-or-death matter.

In our minds, we often blur the distinction between acting upon the car controls and the action of the car complying with those controlling actions. Only when you are in a dire situation do you at times become aware of the difference. Within your mind, you might be thinking that the car can stop on a dime, but by the time you move your body and get your foot to pressure the brake pedal, followed by the brakes being actually applied, followed by the tires being engaged by the brakes, followed by the physics of the road and the tires bringing the car to a halt, it can be much longer than you think.

Tactical and Strategic Uses

This brings up another point about the use of the car controls.

There are the tactical aspects of activating and using the controls, such as putting your foot onto the pedals and using your hands to turn the steering wheel. You normally, though, are also making use of the controls in a more macroscopic way, or at least hopefully you are.

You want to drive down the street, reach the corner, make the turn, and do this without hitting other cars. Doing so requires a series of back-to-back tactical car control command efforts.

Novice drivers often struggle with this overarching aspect.

I know that most of you are likely seasoned drivers and perhaps take for granted how easy it is to not only use the car controls, but also tie them together into a series of efforts to achieve a larger goal such as driving down the street to make a turn.

AI Self-Driving Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. An essential aspect of AI self-driving cars is the use of the driving car controls by the AI system. There is more to this than perhaps meets the eye.

I’ll focus solely on the use of the car controls by the AI system, and not cover much about what happens when the AI and a human are both trying to deal with the car controls.

This kind of co-sharing is especially problematic.

Imagine that you were driving a car and another human sat next to you in the front seat, having another set of driving controls, and you both could each drive the car in terms of opting to use any of the pedals and turn the steering wheel as you wished. Mind boggling. The two of you would need to really be on the same wavelength and seek to avoid undermining each other. Consider too what would happen when an emergency arose. That makes any kind of coordinated effort even more arduous.

Shifting though to the use of a true Level 5 AI self-driving car, I’m going to walk you through some salient facets of what the AI must do about the car controls and issuing of car control commands.

Six Major Steps Involved

There are six major steps involved in generating and enacting the car controls commands:

  1. Determine car control commands to emit
  2. Emit car control commands to ECU
  3. Verify that car control commands were received and are viable
  4. ECU instructs the automotive elements
  5. Automotive elements physically enact the received commands
  6. Ascertain that car has reacted to the commands
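
The six steps above can be sketched as a single, highly simplified pipeline. To be clear, this is only an illustrative sketch; the function and its stand-in callbacks are hypothetical, not an actual self-driving stack:

```python
# Hypothetical sketch of the six-step command pipeline. Each callback stands
# in for a real subsystem; a failure at any step halts the pipeline and the
# return value reports the step reached (6 means full success).
def run_command_pipeline(commands, transmit, verify, dispatch, actuate, confirm):
    if not commands:                      # Step 1: planner produced no commands
        return 1
    received = transmit(commands)         # Step 2: emit the commands to the ECU
    if not verify(commands, received):    # Step 3: checkpoint what was received
        return 3
    dispatch(received)                    # Step 4: ECU instructs the subsystems
    actuate()                             # Step 5: physical enactment
    return 6 if confirm(commands) else 5  # Step 6: sensor-based confirmation

# A toy run with perfect transmission and confirmation:
result = run_command_pipeline(
    commands=[("brake", 0.4)],
    transmit=lambda c: list(c),
    verify=lambda sent, got: sent == got,
    dispatch=lambda c: None,
    actuate=lambda: None,
    confirm=lambda c: True,
)
```

A garbled transmission would cause the `verify` callback to fail, halting the pipeline at step three rather than letting corrupted commands reach the subsystems.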

For ease of discussion, we’ll assume that there is an ECU (Electronic Control Unit) that translates commands given to it from the AI system and converts those commands into some set of specific operational activities for the car.

The ECU has the task of conveying the operational activities to a myriad of other subsystems that are in the car, including the Brake Control Module (BCM), the Central Timing Module (CTM), the Transmission Control Module (TCM), the Powertrain Control Module (PCM), the Engine Control Module (ECM), etc.

There could easily be one hundred or more such sundry control subsystems that each needs to be properly communicated with and instructed on what needs to be done.
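
One way to picture the ECU’s routing task is a table mapping each kind of command to the subsystem module that handles it. The mapping below is an illustrative assumption using the module names mentioned above, not a real ECU specification:

```python
# Hypothetical routing table: which subsystem module handles which command.
SUBSYSTEM_ROUTES = {
    "brake": "BCM",       # Brake Control Module
    "throttle": "ECM",    # Engine Control Module
    "gear": "TCM",        # Transmission Control Module
    "powertrain": "PCM",  # Powertrain Control Module
}

def route_command(command_type):
    """Return the subsystem that should receive this command, or raise."""
    try:
        return SUBSYSTEM_ROUTES[command_type]
    except KeyError:
        raise ValueError(f"No subsystem registered for '{command_type}'")
```

With a hundred or more subsystems, the real table would be far larger, but the idea is the same: every command must find its way to the right module, and an unroutable command should be flagged rather than silently dropped.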

One important point about this notion of having the AI issue commands at a high level of abstraction is that you can potentially port the AI system over to other brands and models of cars. If you embed directly into the AI system the specific protocols of a particular model and brand of car, it will likely make it much harder to port the AI system to other cars.

By modularizing these aspects and keeping the AI above the fray, you are usually able to more readily port over the AI system.
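
One common way to keep the AI “above the fray” is an adapter layer: the planner emits abstract commands, and a per-vehicle adapter translates them into model-specific actuator values. The class and the numbers below are illustrative assumptions, not real vehicle specifications:

```python
# Hypothetical adapter: the AI emits abstract (kind, fraction) commands where
# fraction is 0.0..1.0; only the adapter knows the vehicle's actual limits.
class VehicleAdapter:
    def __init__(self, max_steering_deg, max_decel_g):
        self.max_steering_deg = max_steering_deg
        self.max_decel_g = max_decel_g

    def translate(self, abstract_cmd):
        kind, fraction = abstract_cmd
        if kind == "steer":
            return ("steering_deg", fraction * self.max_steering_deg)
        if kind == "brake":
            return ("decel_g", fraction * self.max_decel_g)
        raise ValueError(f"Unknown command kind: {kind}")

# The same AI-level command ports across vehicles; only the adapter changes.
sedan = VehicleAdapter(max_steering_deg=35.0, max_decel_g=0.8)
sports_car = VehicleAdapter(max_steering_deg=40.0, max_decel_g=1.1)
```

The design choice here is that porting the AI to another brand or model means swapping one adapter, not rewriting the planner.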

That being said, let’s not kid ourselves. If the AI is far removed from the nature of the underlying car brand and model, it is possible that the AI system won’t be able to issue commands that might be feasible for the particular car that the AI is working on.

Suppose the AI system emits a car control command that basically asks the car to accelerate from 0 to 60 miles per hour and the AI assumes that this can be done in let’s say 3 seconds. That’s the speed for sports-oriented cars and it isn’t going to work out well for more everyday cars.

Thus, the AI system will likely need to be versed in some aspects of the brand and model car that the AI is running on.
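
The 0-to-60 example can be made concrete with a simple feasibility check. The linear estimate and the 0-to-60 figures below are rough illustrative assumptions, not real performance data:

```python
# Hypothetical feasibility check: can this car hit a target speed within the
# time window the AI is assuming? Uses a crude linear estimate of acceleration.
def seconds_to_reach(target_mph, zero_to_sixty_s):
    """Rough linear estimate of time to reach target_mph from a standstill."""
    return zero_to_sixty_s * (target_mph / 60.0)

def acceleration_feasible(target_mph, window_s, zero_to_sixty_s):
    return seconds_to_reach(target_mph, zero_to_sixty_s) <= window_s
```

A sports-oriented car with a 3-second 0-to-60 time passes a 3-second window; an everyday car at 8 seconds does not, and the AI needs to know which kind of car it is running on before it makes the assumption.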

Another consideration for the AI system involves the type of network into which the car control commands are going to be conveyed.

Typically, most cars use the Controller Area Network (CAN) vehicle bus as the standard for electronic communications within the car and between the myriad of subsystems.

This message-based protocol is both loved and reviled. First conceived of and released as a formal protocol in the mid-1980s, it has expanded and adapted over the years. There are numerous complementary protocols that emerged to deal with facets such as device addressing issues, flow control capabilities, and other matters. Weaknesses and qualms often center around CAN’s lack of robust security features and difficulties that can ensue when doing troubleshooting of CAN-related problems.

Generally, it is best to try to keep the AI system above the fray about the CAN network, though there needs to be a healthy dose of skepticism built into the AI about what happens once messages are flowing in the CAN and throughout the self-driving car.

The AI cannot assume that there will be a perfect conveyance of messages.

The AI cannot assume that the conveyance will necessarily happen in as timely a manner as might be otherwise expected. The real-world limitations need to be encompassed by however the AI is going to be expecting the car controls commands to be carried out.
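
One concrete way to encompass those real-world limitations is to wrap each emission with an acknowledgment deadline and a retry budget, rather than assuming perfect, timely conveyance. This is a generic sketch with hypothetical callbacks, not a real CAN driver:

```python
import time

# Hypothetical sketch: emit a command, wait for an acknowledgment within a
# deadline, and retry a bounded number of times before reporting failure.
def emit_with_retries(send, await_ack, deadline_s=0.05, max_attempts=3):
    """Return True once the command is acknowledged within the deadline."""
    for attempt in range(max_attempts):
        send()
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            if await_ack():
                return True
    return False  # conveyance failed; the AI must plan around the failure
```

The key point is the final return: the AI has to have a plan for the case where the message never arrives, because on a congested or faulty bus, it sometimes won’t.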

In fact, let’s look briefly at each of the six major steps and consider the types of errors or problems that can arise.

In the first step, determining the car controls commands to emit, it is conceivable that the AI might fail to arrive at a set of car controls commands that it wants to have performed.

Perhaps the AI gets gummed up trying to decide what car control commands to use. Maybe the AI hits a snag in the processing, or maybe there’s a bug in the system, or maybe the circumstance of the status of the self-driving car has baffled the AI.

Those are obviously bad possibilities.

For the second step of emitting the car controls commands, there is a possibility that the commands might be garbled by how they have been formatted or during their conveyance to the ECU.

This sets up a rather dangerous situation. If the commands are unintelligible when reaching the ECU, there’s a good chance the ECU will realize something has gone awry. If the garbled commands still look intelligible, however, the ECU is likely to act on them even though they aren’t what was emitted. In essence, receiving a wrong-but-plausible set of commands is bound to be worse than receiving commands so unintelligible that they are obviously incorrect and improper.

Step three is an effort to checkpoint that the car control commands have indeed been received and an attempt to verify they are what was actually intended.

This is a last-moment layer of defense against executing car control commands that differ from what was actually emitted.

Note though that this checkpoint is not second-guessing the first two steps, since even if those steps have provided commands that might get the car into an untoward traffic situation, that’s not what this third step is trying to ascertain.
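
A minimal version of that step-three checkpoint is an integrity checksum: the emitter attaches one, and the receiver recomputes it to detect garbling in transit. This sketch uses a CRC purely as an illustration; it checks only transmission integrity, not whether the commands are wise:

```python
import zlib

# Hypothetical checkpoint: attach a CRC-32 checksum to the emitted payload,
# and have the receiver recompute it to catch corruption in transit.
def with_checksum(payload: bytes):
    return payload, zlib.crc32(payload)

def verify_checkpoint(payload: bytes, checksum: int) -> bool:
    return zlib.crc32(payload) == checksum
```

As the article notes, this does not second-guess whether the commands were sensible in the first place; a command that was wrong when emitted will sail through the checksum untouched.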

In the fourth step, the car controls commands are translated into the myriad of other electronic messages that must be sent along the CAN to the subsystems of the self-driving car. This is when the physical operational activities are being established based on the car controls commands that were provided by step one and step two, and were verified in step three.

There are lots of opportunities for things to go south at this juncture. Imagine sending boats along a river with lots of tributaries, and any of those boats might go astray.

During step five, the operational activities are now being carried out, such as the brakes being applied to the tires and the car beginning to slow down, or the accelerator applied and the car starting to speed up. Assuming that the car controls commands actually reached the subsystems in step four, this step five is the actual enactment of those commands. Things can go wrong. What if the brakes aren’t working right? What if the engine is not responsive?

At step six, there is a need to ascertain that the car controls commands were carried out.

As a human driver, when you wrench the steering wheel to a hard right, you can feel as the car makes the right lurching motion. This is your way of ascertaining that your command, the steering wheel movement, got translated into the actual operational and physical outcome.

The AI system has to do the same kind of sensing to realize whether the car controls commands were executed, which requires using the sensors such as the cameras, radar, LIDAR, and internally focused ones like the IMU.
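
In its simplest form, that step-six confirmation is a comparison between the commanded effect and the measured effect, within a tolerance. The function and the tolerance value below are illustrative assumptions:

```python
# Hypothetical step-six check: did the deceleration the IMU measured come
# close enough to the deceleration that was commanded?
def command_took_effect(commanded_decel, measured_decel, tolerance=0.15):
    """True if the measured value is within tolerance of the commanded value."""
    return abs(commanded_decel - measured_decel) <= tolerance * abs(commanded_decel)
```

If the check fails, the AI knows the command did not take hold as expected, which is itself vital information for deciding what to do next.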

Time Is A Crucial Factor

Each of the six major steps takes time to undertake.

The AI system during the action planning portion has to gauge how long each of those steps might take and use that estimation to determine what is feasible to do.

If the amount of time to apply the brakes, let’s say, would exceed what the AI needs in terms of slowing down or halting the self-driving car, the AI would have to determine what alternative might be pursued instead.

For example, if the braking cannot be done in time, would it be possible to turn the steering wheel in time, and avoid whatever collision is about to occur?
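
That braking-versus-steering decision can be sketched as a time-budget comparison. All latencies and physics here are crude illustrative assumptions (a constant-deceleration stopping model), not a real motion planner:

```python
# Hypothetical time budget: total braking time is the latency of the six
# command steps plus the physical stopping time (speed / deceleration).
def total_brake_time(command_latency_s, speed_mps, decel_mps2):
    return command_latency_s + speed_mps / decel_mps2

def choose_maneuver(time_to_impact_s, brake_time_s, steer_time_s):
    if brake_time_s <= time_to_impact_s:
        return "brake"
    if steer_time_s <= time_to_impact_s:
        return "steer"
    return "mitigate"  # neither completes in time; aim to minimize harm
```

For instance, at 20 m/s with 8 m/s² of deceleration and 0.3 seconds of command latency, braking takes about 2.8 seconds; with only 2 seconds to impact, a 1.2-second steering maneuver becomes the preferable option.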

As might be rather evident, the AI system cannot just emit car controls commands and walk away from the effort.

As a human driver, you likely sometimes change your mind while driving the car and suddenly do something contrary to what you had just done, such as my earlier example of radically going from speeding up to suddenly slowing down. Can the AI “change its mind” in terms of opting to do something different from what it has already started to undertake?

The answer is yes, the AI can opt to try to change what it was trying to do.

This can be problematic to execute.

If the AI can catch the emitted commands before the ECU starts to push them along to the self-driving car subsystems, those commands can be potentially suppressed.

That’s a kind of undo.

If the commands are already in flight, being performed by the physical elements of the self-driving car, there’s not much chance of an undo, and instead the AI would likely need to emit a new set of car controls commands, seeking to get those executed right away (such as doing a braking on top of having just done a speeding-up action).
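
The undo-versus-supersede distinction can be captured in a few lines. The queue model below is a hypothetical simplification of what the ECU actually maintains:

```python
# Hypothetical sketch: a command still queued at the ECU can be suppressed
# (an "undo"); a command already in flight cannot, so the AI supersedes it
# by appending a new countermanding command instead.
def revise_command(queue, in_flight, old_cmd, new_cmd):
    if old_cmd in queue:
        queue.remove(old_cmd)   # caught before dispatch: suppress it
        queue.append(new_cmd)
        return "undone"
    if old_cmd in in_flight:
        queue.append(new_cmd)   # too late to undo: emit a superseding command
        return "superseded"
    return "already-complete"
```

The practical consequence is that the later the AI “changes its mind,” the more it must rely on superseding commands rather than clean undos.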

Suppose the AI emits commands to turn the steering wheel so radically that it would cause the self-driving car to topple over and roll onto its roof.

What step should catch that aspect?

Even if it is caught, does the act itself mean that it should never be executed?

Perhaps the AI has ascertained that making such a radical turn is worth the risk, namely that it is “better” to turn and roll over the self-driving car versus say ramming into a truck that’s filled with petrol and would explode upon impact.

There are also car control commands that could be emitted that are not possible for the physical capabilities of the car.

It would be better though that such commands never get into the stream and the AI should not be relying on a slim chance hope that infeasible or impossible commands are going to get detected and rejected downstream.
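
Keeping infeasible commands out of the stream amounts to a pre-emission filter checked against the car’s stated physical limits. The limit values below are illustrative assumptions for a hypothetical vehicle:

```python
# Hypothetical pre-emission filter: reject any command whose magnitude exceeds
# this particular car's physical capability, rather than hoping a downstream
# subsystem catches it. Limits are illustrative, not real vehicle data.
LIMITS = {"steering_deg": 40.0, "throttle": 1.0, "brake": 1.0}

def filter_commands(commands):
    """Split commands into (feasible, rejected) against the vehicle limits."""
    feasible, rejected = [], []
    for kind, value in commands:
        if kind in LIMITS and abs(value) <= LIMITS[kind]:
            feasible.append((kind, value))
        else:
            rejected.append((kind, value))
    return feasible, rejected
```

Note that a rejected command is useful feedback for the planner: it signals that the intended maneuver needs to be re-planned within what the car can actually do.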

Most of these steps are complex and complicated when you get into their respective details.

Use of Machine Learning Or Deep Learning

One aspect involves using Machine Learning or Deep Learning to ferret out patterns in the car controls commands.

If there are patterns that can be found, it could make things easier for the AI system and the controlling of the self-driving car.

To explain why patterns of car controls commands might be handy, consider what you do as a human driver of a car.

Let’s say that each morning you back your car out of your garage, doing so in reverse, going slowly, and enter into the street while backing out. Once you get far enough into the street, you turn the wheel toward the end of the street and begin to accelerate. You accelerate somewhat toward the end of the block, and then usually make a right turn. All of this is a series of maneuvers that you do each morning, almost like clockwork.

In fact, the odds are that you do this driving sequence somewhat mindlessly. You are perhaps thinking about work and other matters, rather than concentrating on the driving task.

Let’s put AI into the driver’s seat.

Suppose that the AI self-driving car had used Machine Learning or Deep Learning to examine the voluminous amount of driving actions of the self-driving car over time.

This pattern of each morning doing the same driving routine has a chance of being spotted by the Machine Learning or Deep Learning. If those kinds of driving patterns are identifiable, the AI could incorporate those sets or subsets into a collection or library of known driving patterns. These patterns could then be invoked when needed.
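
The library-lookup idea can be illustrated in miniature. A real system would use Machine Learning over far richer data to discover and match patterns; this sketch only shows the lookup step, with made-up maneuver names:

```python
# Hypothetical library of learned driving patterns: a recent sequence of
# maneuvers is matched against stored routines (e.g., the morning departure).
PATTERN_LIBRARY = {
    ("reverse", "turn_right", "accelerate"): "morning-departure",
    ("slow", "turn_left", "slow"): "parking-lot-crawl",
}

def match_pattern(recent_maneuvers):
    """Return the name of a known pattern, or None if nothing matches."""
    return PATTERN_LIBRARY.get(tuple(recent_maneuvers))
```

When a match is found, the AI could anticipate the rest of the routine; when none is found, it falls back to planning each maneuver from scratch.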


Car controls commands are essential to the operation of a self-driving car. They don’t get the kind of media attention that goes toward the sensor aspects of self-driving cars. Nonetheless, if the AI system and the car controls commands portion aren’t properly aligned, it can be a dangerous and very untoward situation.

Though the car control commands aspects are primarily automotive engineering based, there are opportunities to add AI into the mix. One approach involves examining large datasets of car controls commands emissions and trying to find useful patterns. Caution needs to be exercised in doing so. There could be patterns that are not viable for reuse or that are only reusable in quite narrow circumstances.

As a human driver, you are continually issuing car controls commands. They are coming from your brain, going to your limbs, and then involve using the pedals and the steering wheel. The use of the pedals and the steering wheel are then translated into the use of the car subsystems. Once those car subsystems undertake their efforts, the car physically attempts to perform those driving efforts.

It’s all a dance of the human driver and the car.

The same kind of dance has to happen with the AI and the self-driving car.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2020 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
