Driverless Cars As Noise Pollution Spotters, Mobile Ears On The Go

Dr. Lance B. Eliot, AI Insider

Noise pollution is serious stuff and the EPA is concerned

I was being serenaded. Standing on the street corner at New York Times Square, I was surrounded by the epic and at times overwhelming sounds of New York City (NYC). I had come to the city that never sleeps to speak at an industry conference on Artificial Intelligence (AI). Opting to walk from my hotel to the conference location, I couldn’t help but hear the nefarious noises of this hustling and bustling famous town.

NYC has many claims to fame, and one of them is being the noisiest city in all of North America (not just limited to the United States!).

Environmentalists would label it as noise pollution.

Sound is energy. Typically measured as Sound Pressure Levels (SPLs), those city noises are battering your ears. It is customary to use dBA (A-weighted decibels) as a scale for comparing different kinds of noises. The dBA scale is logarithmic, so you need to interpret the numbers carefully: as the numbers increase, the underlying sound energy does not rise in a simple linear progression.

Let’s consider some sounds that pertain to human hearing. The sound of a pin dropping is 10 dBA. You have pretty good ears if you can hear that sound. Rustling leaves are typically around 20 dBA, while a babbling brook is about 40 dBA. So far, these are all relatively quiet and readily enjoyable sounds.

Your alarm clock that sharply awakens you in the mornings is likely around 80 dBA. That’s a sound that is not just jarring, it is also perhaps universally hated because of its significance (yes, a sound meaning it is time to get up and go to work, again!). The sound of a jackhammer gets you to about 110 dBA. A gun being fired is probably 160 dBA or more. Those are rather obnoxious sounds, and ones that can cause either temporary damage or permanent adverse impacts to your ears.
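
To make that logarithmic progression concrete, here is a minimal sketch in plain Python that converts a difference in dBA into a relative sound-power ratio; the reference figures are the approximate values mentioned above.

```python
# Minimal sketch: comparing dBA levels as relative sound-power ratios.
# A difference of 10 dB corresponds to roughly a 10x difference in sound power,
# so the ratio between two levels is 10 ** ((level_a - level_b) / 10).

def power_ratio(level_a_dba: float, level_b_dba: float) -> float:
    """How many times more sound power level_a carries than level_b."""
    return 10 ** ((level_a_dba - level_b_dba) / 10)

# Approximate figures from the examples above.
print(power_ratio(110, 80))  # jackhammer vs. alarm clock: 1,000x the power
print(power_ratio(80, 10))   # alarm clock vs. pin drop: 10,000,000x the power
```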

In a somewhat serene suburbia, the average noise level might be around 40 to 50 dBA. Time of day can make a big difference in the overall noise level. There are the Day-Night Average Level (Ldn) and the Community Noise Equivalent Level (CNEL), which help in comparing cities since these metrics encompass the variations between daytime and nighttime noise levels.
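
As a rough illustration (not tied to any particular city’s data), here is a minimal Python sketch of the standard Ldn calculation: the 24 hourly levels are energy-averaged after a 10 dB penalty is added to the nighttime hours (10 p.m. to 7 a.m.); CNEL works similarly but also adds an evening penalty. The hourly values used here are hypothetical.

```python
import math

def day_night_level(hourly_leq_dba):
    """Ldn from 24 hourly Leq values (index 0 = midnight to 1 a.m.).
    Nighttime hours (10 p.m. - 7 a.m.) get a 10 dB penalty before energy-averaging."""
    assert len(hourly_leq_dba) == 24
    total_energy = 0.0
    for hour, level in enumerate(hourly_leq_dba):
        is_night = hour < 7 or hour >= 22
        penalized = level + 10 if is_night else level
        total_energy += 10 ** (penalized / 10)
    return 10 * math.log10(total_energy / 24)

# Hypothetical suburb: roughly 45 dBA during the day, 38 dBA at night.
hourly = [38] * 7 + [45] * 15 + [38] * 2
print(round(day_night_level(hourly), 1))  # about 46 dBA
```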

A noisy urban or city area could be 60 to possibly 80 dBA, likely at the higher end during daytime. When you are standing on the street and listening to the city noises, they tend to get bundled together and you might not be able to readily distinguish any particular sound.

Noises can of course harm our ears, limiting our ability to hear. In addition, noises can be distracting and dilute or undermine attention.

Noise Pollution is a Serious Issue

According to the Environmental Protection Agency (EPA), noise pollution is a serious issue in the United States. There is a somewhat de facto nationwide noise pollution policy as enacted by the Noise Control Act of 1972. There are numerous local and state ordinances and policies about noise pollution. For federal highways, the Federal Highway Administration (FHWA) promulgates highway noise pollution rules per the Congressional Federal-Aid Highway Act of 1970.

Some studies suggest a link between noise pollution and heart disease, as well as other ailments such as high blood pressure; in short, it can be bad for your overall health. There are likely a plethora of adverse health consequences that we could list due to noise pollution.

One aspect that many locals might not know is that there are ways to fight back against noise pollution. Typically, it involves contacting your local city noise enforcement team, government workers who can officially check on a noise polluter and do something about it, including fining the offender or taking other legal actions against them. You would not normally use an emergency number such as 911 to report noise pollution, and instead would likely use a government reporting number such as 311.

In the case of NYC, the city receives an average of around 800 noise complaints per day. This is likely just the tip of the iceberg in terms of how many people are genuinely frustrated and concerned about noise pollution.

Noise pollution can be vexing due to:

  • You might not be fully aware that noise pollution is occurring and might have become used to it or consider it nonthreatening.
  • You might not have any reliable means of formally detecting and registering the noises to know how bad they are.
  • The noises might be blended with an array of noises and you are unable to readily isolate the worst of the noises from the others in the mix.
  • You might not be able to trace the various noises to definitive corresponding sources.
  • And so on.

NYC Noise Pollution Study Using Machine Learning

There’s an interesting study being undertaken at NYU and Ohio State University that seeks to sound out the noise pollution issue in New York City, funded partially by an NSF grant (researchers include Bello, Silva, Nov, Dubois, Arora, Salamon, Mydlarz, and Doraiswamy). They have been putting together a system they shrewdly call SONYC (Sounds Of New York City). Via the development of specialized listening devices, they have so far deployed 56 of the sensors in various locations of NYC, including Greenwich Village, Manhattan, Brooklyn, Queens, and other areas. I’ll have to keep my eyes peeled to spot one the next time I make a trip to NYC.

Their acoustic sensors are relatively inexpensive, costing to date about $80 each, with the hope of further reducing the cost, which is crucial if there is a desire to deploy such devices on a widespread basis. Affordability is a key factor in being able to undertake a noise pollution watchdog capability of this kind.

These SONYC project devices are small and able to be placed on ledges, attached to poles, affixed to buildings, and placed in other areas that might be handy for detecting noise pollution.

The snippets of audio are especially useful for another key aspect of their efforts, namely the use of Machine Learning (ML) to analyze the noises and the collected sound data.

For the SONYC study, the researchers opted to see if they could do some labeling of the data, providing a leg-up for their Machine Learning training efforts. As a type of experiment, they sought participants via Amazon’s Mechanical Turk, amassing over 500 people who helped annotate the audio data presented to them.

Their study touches on a number of fascinating elements. The training of the Machine Learning or Deep Learning capability brings up the notion of crowdsourced audio soundscape annotation.
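
To give a flavor of what such a pipeline can look like (a generic sketch, not the SONYC team’s actual code), one common recipe is to turn each labeled clip into a mel-spectrogram summary and train a classifier on those features; the file names, labels, and model choice below are illustrative assumptions.

```python
# Generic sketch of training an urban-sound classifier from crowd-labeled clips.
# Not the SONYC codebase; file names, labels, and the model are assumptions.
import numpy as np
import librosa                                    # audio loading and features
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path: str) -> np.ndarray:
    """Summarize a short clip as the mean of its mel-spectrogram bands (in dB)."""
    audio, sr = librosa.load(path, sr=22050, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    return librosa.power_to_db(mel).mean(axis=1)  # 64-dimensional feature vector

# Hypothetical crowdsourced annotations: (clip file, label) pairs.
annotations = [("clip_001.wav", "jackhammer"),
               ("clip_002.wav", "siren"),
               ("clip_003.wav", "traffic")]       # ...thousands more in practice

X = np.array([clip_features(path) for path, _ in annotations])
y = np.array([label for _, label in annotations])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```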

Some of you might be wondering whether modern-day smartphones could be used to capture the sounds that would then be analyzed by the ML or DL for noise pollution purposes. Using a smartphone as a sound collecting device presents its own problems, including the lack of precision and calibration for sound sensing, along with uncertainty about whether people would capture audio continuously as they walk around or only when opting to point out a noisy place, likely yielding intermittent, inconsistent, and somewhat suspect audio data.

AI Self-Driving Cars Can Help With Noise Abatement Efforts

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that we’ve been actively exploring is the use of audio sensors on a self-driving car to aid in the AI being able to drive the vehicle, such as detecting sirens of police cars and other matters. The noise pollution studies dovetail into this kind of effort.

To date, few of the auto makers and tech firms are giving much attention to the use of externally focused audio microphones for an AI self-driving car. Indeed, they would generally classify the use of such sensory capabilities as an edge or corner case.

Why is the use of external audio microphones tossed into the edge or corner cases basket? Mainly because the AI developers already have their hands full with the other elements of making an AI self-driving car work as hoped.

Dealing with the sounds outside of the AI self-driving car is, well, interesting, but not essential right now, in the view of many AI developers.

One obvious example of how external sounds are crucial involves the sirens of police cars, ambulances, fire trucks, and other emergency vehicles. Human drivers are supposed to be alert for the sound of such sirens. When such a siren is heard, the human driver knows to be cautious of any emergency vehicles that might be in their vicinity. The louder the siren, the closer the emergency vehicle likely is to the car.
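
As a rough illustration of how loudness can hint at proximity (a simplified free-field model that ignores reflections, wind, and buildings, and assumes a reference level for the siren), sound pressure falls off by about 6 dB per doubling of distance from a point source, so a measured level yields a crude range estimate.

```python
def estimate_distance_m(measured_dba: float,
                        source_dba_at_ref: float = 120.0,
                        ref_distance_m: float = 1.0) -> float:
    """Crude free-field range estimate: level drops ~6 dB per doubling of distance.
    The 120 dBA-at-1-meter siren level is an assumed reference, not a measured spec."""
    drop = source_dba_at_ref - measured_dba
    return ref_distance_m * 10 ** (drop / 20)

print(round(estimate_distance_m(90)))  # ~32 meters under these assumptions
print(round(estimate_distance_m(70)))  # ~316 meters under these assumptions
```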

One would hope that we are trying to make AI self-driving cars as safe as feasible. Omitting a sense that we human drivers rely upon, the sense of hearing, seems like a rather glaring gap. So, let’s agree for the moment that though the auto makers and tech firms are not yet jumping on the bandwagon of using exterior audio microphones, they will gradually and inexorably get there.

When will the external audio microphone sensors be active on an AI self-driving car? My answer is straightforward: whenever the car is in motion, and also whenever the car is not in motion but should remain on alert, such as listening for a person telling the AI that they want the self-driving car to activate (akin to speaking to Alexa or Siri).

There are some advocates of audio sensors who say they should be used only sparingly. Their concern is that if you have these audio sensors on so much of the time, it implies that the AI self-driving car is potentially capturing all sounds wherever it drives and wherever it might be parked. This could be a kind of privacy invasion.

Roaming AI Self-Driving Cars and Triangulating Noises

My discussion about the SONYC approach indicated that noise pollution exists and has adverse health and economic consequences. The approach taken by the SONYC researchers involves developing and deploying low-priced sensory devices that can be placed throughout a geographical area to get a systematic collection of the noises, which are then further leveraged via the application of appropriate Machine Learning or Deep Learning systems to analyze and interpret the audio data.

Another potential approach involves using the exterior audio data being captured by AI self-driving cars.

Imagine an area like downtown Los Angeles. Suppose we had a slew of AI self-driving cars that were roaming up and down the streets, serving as ridesharing services. While driving around, they are capturing visual imagery data, radar data, LIDAR data, ultrasonic data, and let’s say also audio data.

Each of those AI self-driving cars has a potential treasure trove of collected audio data. Of course, the audio data might be either top-quality or low-quality, depending upon the type of audio sensors included in the AI self-driving car. How many audio sensors are on any given self-driving car will also be a factor, along with where on the vehicle those sensors are placed.

I am not suggesting that it is axiomatic that the exterior audio sensors will be able to provide valuable audio data for noise pollution abatement purposes. But it is one additional avenue worthy of consideration.

Since AI self-driving cars will likely be separated into “fleets,” meaning that each auto maker might have its own cloud, cohesively bringing together all of the collected audio data could be somewhat problematic. This would need to be worked out with the auto makers and tech firms.

There are a number of interesting twists and turns to this notion.

One important element is that the AI self-driving cars are going to be in motion most of the time. Whereas the low-priced, geographically placed audio sensors will typically be fixed in place for some lengthy period of time, the audio sensors on AI self-driving cars go wherever the self-driving car goes.

Can this audio data be useful when it is captured while the AI self-driving car is in motion? I am not asking whether it can be useful for real-time analyses, which I’ve already mentioned, but instead pondering whether the collected audio data might be skewed as a result of being captured by an in-motion audio sensor.

It is a potentially interesting Machine Learning or Deep Learning problem to account for the fact that the audio data was captured while the device itself was in motion. This would also likely require added forms of audio data transformation.
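
One modest way to handle this, sketched below purely as an assumption about how such a pipeline might be organized, is to slice the audio into short windows and attach the vehicle’s speed, heading, and position to each window so that downstream ML or DL can account for (or filter out) in-motion effects.

```python
# Illustrative sketch: pairing 1-second audio windows with vehicle motion metadata
# so downstream analysis can account for in-motion capture. The data layout here
# is an assumption, not an actual self-driving car software stack.
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class AudioWindow:
    samples: np.ndarray   # one second of mono audio
    timestamp: float      # seconds since epoch
    speed_mps: float      # vehicle speed when the window was captured
    heading_deg: float    # direction of travel
    lat: float
    lon: float

def window_stream(audio: np.ndarray, sample_rate: int,
                  telemetry: List[Dict]) -> List[AudioWindow]:
    """Slice audio into 1-second windows, attaching one telemetry reading per window."""
    windows = []
    for i, t in enumerate(telemetry):
        chunk = audio[i * sample_rate:(i + 1) * sample_rate]
        if len(chunk) < sample_rate:
            break
        windows.append(AudioWindow(chunk, t["time"], t["speed"],
                                   t["heading"], t["lat"], t["lon"]))
    return windows
```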

We also need to consider the triangulation aspects.

I am referring to the notion that on any given AI self-driving car there might be several exterior audio sensors. It would seem sensible to compare the audio captured by those multiple sensors and piece together what the self-driving car has heard of the noises that surround it. The audio data captured at the front of the self-driving car could be meshed or triangulated with the audio data captured at the rear of the self-driving car, and so on.
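
Here is a minimal sketch of one way such meshing could work, using just two microphones and plain cross-correlation; the microphone spacing, sample rate, and signals are assumed values, and a real system would use more robust techniques (for example, generalized cross-correlation across several microphone pairs).

```python
# Illustrative sketch: estimating a rough bearing for a noise source by comparing
# its arrival times at a front and a rear exterior microphone.
import numpy as np

SPEED_OF_SOUND = 343.0   # meters per second at roughly 20 C
MIC_SPACING = 4.0        # meters between front and rear microphones (assumed)
SAMPLE_RATE = 48_000     # samples per second (assumed)

def front_lag_seconds(front: np.ndarray, rear: np.ndarray) -> float:
    """Seconds by which the front signal lags the rear signal.
    Positive means the sound reached the rear microphone first."""
    corr = np.correlate(front, rear, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(rear) - 1)
    return lag_samples / SAMPLE_RATE

def bearing_from_rear_deg(front: np.ndarray, rear: np.ndarray) -> float:
    """Angle of the source measured from the rear-pointing axis of the mic pair:
    0 degrees = directly behind, 90 = abeam, 180 = directly ahead."""
    delay = front_lag_seconds(front, rear)
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arccos(ratio)))
```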

There would also be an interesting problem of triangulating the audio sensor data from a multitude of AI self-driving cars.

In a manner of speaking, you could say that the noise data collection is nearly “free” because the AI self-driving cars are presumably going to have the audio sensors anyway, if you agree with me that they should. Though the audio sensors weren’t necessarily included in AI self-driving cars to aid noise pollution abatement, it is an added benefit on top of the purposes those sensors otherwise serve for the self-driving car’s functions.

Conclusion

There is a potential that AI self-driving cars could aid the emerging noise pollution abatement efforts.

There are lots of privacy issues to be dealt with in this audio data collection. Will the humans who own or use these AI self-driving cars be comfortable with having the exterior audio kept and used for noise pollution abatement purposes? How much of the time, and when, will the audio data be captured from the sensors?

There is a slew of difficult technical issues to be dealt with too.

The amount of data could be staggering. Even with compression trickery, you are still talking about the audio sensors collecting data potentially non-stop, every second of every hour of every day, and doing so for hundreds or thousands of AI self-driving cars (ultimately, perhaps millions upon millions of self-driving cars).
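
For a back-of-the-envelope sense of the scale (the sample rate, bit depth, microphone count, and fleet size below are all assumptions), here is a quick calculation of the raw, uncompressed audio volume:

```python
# Back-of-the-envelope sketch of raw exterior-audio data volume.
# All of these parameter values are assumptions for illustration.
SAMPLE_RATE = 48_000          # samples per second
BYTES_PER_SAMPLE = 2          # 16-bit audio
CHANNELS = 4                  # assumed number of exterior microphones
SECONDS_PER_DAY = 24 * 60 * 60
FLEET_SIZE = 100_000          # assumed number of self-driving cars

bytes_per_car_per_day = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * SECONDS_PER_DAY
print(f"~{bytes_per_car_per_day / 1e9:.0f} GB of raw audio per car per day")      # ~33 GB
print(f"~{bytes_per_car_per_day * FLEET_SIZE / 1e15:.1f} PB per day fleet-wide")  # ~3.3 PB
```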

In any case, the efforts to achieve noise pollution abatement are an avenue for improving the use of Machine Learning and Deep Learning for audio data pattern matching and analysis. For those purposes alone, it’s a handy and helpful endeavor, let alone the noise abatement matter.

Those methods can likely be reused or borrowed for doing the types of audio data analyses that the auto makers and tech firms directly care about, such as separating out the sound of a siren from other city noises and trying to determine where the source is. That’s the kind of noise analysis that makes abundant sense for the safety of an AI self-driving car, and we need more efforts to enhance what are rather rudimentary capabilities today. I say, let’s make some loud noise in favor of that.


To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2019 Dr. Lance Eliot

