AI Machine-Child Approach to Achieving AI Driverless Cars: Grow or No

Dr. Lance B. Eliot, AI Insider

Can AI mature from a Machine-Child state to be true AI?

Did you play with blocks when you were a child?

If you study cognitive development, you likely know that block play can be a significant means of forming various key cognitive skills in children. Beyond the cognitive aspects, physically manipulating the blocks tends to aid the maturation of body agility and coordination. There's also a commingling of mind and body: the child is not solely gaining cognitively, nor solely gaining in physical movement, but gaining synergistically as the mind and the body work together.

I think we would all agree that the children are not merely learning about blocks. If they were "learning" the way most of today's AI programs do, they would henceforth only be able to use what they learned from blocks to play with more blocks.

Wouldn't it be great if you could develop an AI system that truly learns, one that could go beyond whatever particular domain you crafted it for and learn something else entirely by leveraging what it already knew?

This is one of the greatest issues and qualms about today’s AI.

By and large, most AI development is tailored to a particular domain and a specific problem in hand. That makes these systems narrow. It makes them brittle. They lack any kind of common-sense reasoning. They are unable to extend themselves to other areas, even areas of a related nature. Today's AI systems cannot be self-applied to other domains, nor can they be expected to learn what to do there.

Most of today’s AI systems are each a one trick pony.

Learning Challenge is the Bigger Challenge in AI

In fact, there are some AI purists who suggest we are all being distracted right now by writing these one-off AI systems. How can we make AI systems that can learn, and do so far beyond whatever particular learning aspects we started them with? That's what our focus should be, these purists insist.

Maybe we ought to be focusing on making an AI system that is like a child.

This AI system begins with the rudiments that a human child has in terms of being able to learn. We then somehow mature that child and get it to become more like an adult in terms of cognitive capability. We could then presumably apply this adult-like AI to various domains.

This is the crux of the AI machine-child deep learning notion.

It is believed by some that we need to first figure out how to create a machine-child-like capability, which we could then use as a basis for shaping and reshaping toward other tasks that we want performed. By leaping past this machine-child stage, you are likely never going to end up with anything other than an "adult" single-domain system that cannot be sufficiently leveraged toward other domains.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One question some AI purists are asking is whether or not the automakers and tech firms are taking the right tack in developing AI for self-driving cars, and whether the AI community ought to instead be taking a concerted AI machine-child approach.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too.

Returning to the AI machine-child notion, let's consider how AI self-driving cars are being developed and whether the AI purists have a sensible point that the AI community is currently off-target from what should be taking place in this realm of AI.

Let’s start by considering how humans learn to drive a car.

Drive Legally at Age 14 in South Dakota

In most jurisdictions, the youngest that you can begin driving a car is around 16 to 17 years of age. There are some rare exceptions such as South Dakota allowing a driver at the age of 14.

Your arms and legs need to reach the pedals and the steering wheel, and you need sufficient command over your body and limbs to appropriately work the driving controls. You need to have the cognitive capability to perform the driving task, which includes being able to detect the roadway surroundings, assess what the traffic conditions are, make reasoned decisions about the driving maneuvers, and carry out your driving action plan. You need to be responsible and take charge of the car. You need to know the laws about driving and be able to perform the driving task as abiding generally by those laws.

How does a human learn to drive a car?

I remember that with my children, they began by taking a class in how to drive. The class consisted of classroom work in which they learned about the rules of the road and the laws that govern driving. They then got into a car and drove with a driving instructor, along with times that I went with them and coached or mentored them as they were learning to drive. The driving was initially in relatively safe areas such as an empty mall parking lot. After that, the next step was a quiet neighborhood with little traffic, then streets with a semblance of traffic, then a harried freeway, and so on.

How are we getting AI systems to be able to drive a car?

It is rather unlike the way in which we get a human to learn to drive a car. The AI system is developed as though it is already an adult driver, and we then test it to see if it can perform as such. There is not particularly a learning curve per se that the AI itself has to go through. Yes, I realize that Machine Learning (ML) and Deep Learning (DL) are undertaken, but they are done mainly for the capability of detecting the surroundings of the self-driving car, such as whether there are cars nearby or pedestrians in the street.
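To give a flavor of what that perception-focused ML/DL looks like, here is a minimal, illustrative PyTorch sketch of the sort of classifier that labels a camera crop as background, vehicle, or pedestrian. The layer sizes, the three class labels, and the 64x64 crop size are all assumptions made for illustration, not a description of any production self-driving stack.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny convolutional classifier of the general kind used
# for perception, labeling a camera crop as background, vehicle, or pedestrian.
# The layer sizes and the three classes are assumptions, not a real production model.
class TinyPerceptionNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input crops

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage sketch: classify a batch of 64x64 RGB crops from the car's cameras.
model = TinyPerceptionNet()
crops = torch.randn(8, 3, 64, 64)     # stand-in for real camera crops
logits = model(crops)
predicted = logits.argmax(dim=1)      # 0=background, 1=vehicle, 2=pedestrian (assumed labels)
```

Notice that nothing in this sketch "learns to drive"; it only learns to recognize what is around the car, which is precisely the narrowness the purists are complaining about.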

Here now is the point that some AI purists make, which pertains to this matter.

They would say that we should be trying to develop an AI system that has the capacity to learn, in the equivalent fashion somehow of what a human teenager does, and we should then use that foundation to essentially teach the machine-child to be able to drive a car. This would be equivalent to the human teenager learning to drive a car.
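If one wanted to mimic that staged, teenager-style learning in software, the nearest present-day analogue is curriculum learning: train on progressively harder driving scenarios and advance only when performance warrants it. The sketch below is purely illustrative; the environment factory, the agent's methods, and the pass threshold are hypothetical placeholders rather than any real simulator API.

```python
# Purely illustrative curriculum-learning loop: stage the driving scenarios
# from easiest to hardest, echoing the empty-parking-lot-to-freeway progression.
# make_env and the agent's methods are hypothetical placeholders, not a real API.

STAGES = ["empty_parking_lot", "quiet_neighborhood", "light_traffic", "busy_freeway"]

def train_with_curriculum(agent, make_env, episodes_per_stage=1000, pass_score=0.9):
    """Advance the agent to the next stage only once it drives well enough."""
    for stage in STAGES:
        env = make_env(stage)                  # hypothetical: builds a simulated driving scenario
        for _ in range(episodes_per_stage):
            score = agent.run_episode(env)     # hypothetical: one simulated drive, scored 0.0-1.0
            agent.update_policy()              # hypothetical: learn from that drive
            if score >= pass_score:
                break                          # good enough at this stage; graduate to the next
    return agent
```

Even this analogy falls short of what the purists mean, since the agent here is still a driving-only learner rather than a general machine-child, but it conveys the staged flavor of the idea.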

Notice that I am purposely saying “equivalent” because I want to separate the notion of the AI being the exact equal to a human versus it being of some equivalent nature.

You could suggest that we are currently taking a top-down approach to constructing the AI for self-driving cars and that this alternative is a bottom-up approach. In the bottom-up approach, you focus on creating a system with a capacity to learn, and you then put it toward learning the task at hand, which in this case is driving a car.
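One way to make the bottom-up idea concrete in today's terms is transfer learning: first obtain a general-purpose learned representation, then adapt it to the driving task, rather than building a driving-only system from scratch. The PyTorch sketch below assumes a hypothetical, already-trained general encoder standing in for the "capacity to learn," plus a small driving head trained on top of it; it is an analogy for the bottom-up idea, not the machine-child approach itself.

```python
import torch
import torch.nn as nn

class GeneralEncoder(nn.Module):
    """Hypothetical stand-in for a broadly pre-trained, general-purpose representation."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, embed_dim), nn.ReLU())

    def forward(self, observation):
        return self.net(observation)

class DrivingHead(nn.Module):
    """Small task-specific head mapping the general representation to steering and throttle."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Linear(embed_dim, 2)   # two outputs: steering angle, throttle

    def forward(self, embedding):
        return self.net(embedding)

encoder = GeneralEncoder()
for param in encoder.parameters():
    param.requires_grad = False              # keep the general capability fixed

head = DrivingHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# One hypothetical adaptation step on driving data (random tensors as placeholders).
obs = torch.randn(32, 256)                   # stand-in for fused sensor features
target = torch.randn(32, 2)                  # stand-in for recorded human driving commands
loss = nn.functional.mse_loss(head(encoder(obs)), target)
loss.backward()
optimizer.step()
```

The design point is the ordering: the general capability comes first and the driving specialization is layered on afterward, which is the reverse of building a driving system and hoping it generalizes later.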

Would we be better off going in that direction as a means to achieve an AI system that can drive a car nearly as well as a human can?

Turtle vs. Hare Approach to AI System Progress

It's hard to say. I think it is considered a much longer path, since we don't yet know how to construct the kind of open-ended learning AI system that could do this. In the classic race of the turtle versus the hare, the top-down approach is the hare that gets out of the gate right away and shows progress, while the bottom-up approach is more like the turtle that will plod along slowly.

There are some who assert that we aren't going to be able to achieve true Level 5 AI self-driving cars and that we'll eventually hit the limits of this top-down approach. At that point, the world will be asking what happened. How come the vaunted true Level 5 was not achieved? If you were to say that it was because we started in the wrong place, it could be a bit disturbing.

Here’s another twist on this topic that you might find of interest.

Maybe the AI purists are right and we need to focus on the AI as a learning system, crafting a machine-child, for which we then advance and progress and mature it into various kinds of adult-like AI systems.

If they are indeed right about this, what is the lowest “age” machine-child AI system that we should be trying to develop?

For the moment, in terms of driving a car, I suggested that we’d aim at a teenager machine-child cognitive level. That seems to fit with the cognitive maturation of when humans learn to drive a car.

Does it seem plausible for AI developers to construct an AI system that magically is at the teenage years of cognitive capability, or do we need to aim at a much younger age for the machine-child that we want to build? The human teenage cognitive skillset already includes the learnings of having played with blocks as a child. It could be that we can’t leap past that when artificially creating such an AI system.

I know it seems far-fetched to consider that you might need to start at the baby or infant level and begin by having an AI system that plays with blocks. From blocks to driving a car? That seems not so related.

Per the AI purists, we might need to focus on developing AI systems at the infant or baby cognitive level and get those AI systems to mature forward from that starting point.

When I mention this notion at AI conferences, there is usually someone who will say that this could lead to a kind of absurdity of logic. I seem to be suggesting that the teenage years are too late, so we need to aim at infants or toddlers. But maybe that's too late and we need to aim at a baby or even a newborn. But maybe that's too late and we need to aim at conception.

At that juncture of the logic, we seem to have hit a wall, in that it may no longer make sense to keep going earlier and earlier in the life cycle. And if we can readily claim that jumping into the life cycle at any point will deny us the earlier learnings, it would seem that we have no choice but to start at the start and cannot merely pick up the mantle at a later point such as a baby or infant.

Others would say that this is an absurd reduction of the logic and that we can get onto the life-cycle merry-go-round at a point where it is already spinning and still be fine. We don't need to reduce this to some zero point.

Let's pretend that we agree to shift the attention of the AI community toward developing an AI machine-child system. We hope this will get us more robust adult-like AI systems. We especially hope that it will get us a true Level 5 AI self-driving car system, wherein we use the AI machine-child and have it gradually become the equivalent of a licensed human driver.

There are other aspects of human childhood that we need to wonder about: are they essential to progressing the AI machine-child toward machine-adulthood?

For example, there is a period of time when a child will undergo so-called childhood amnesia, usually around the age of 7. One theory is that the brain is undergoing a radical restructuring and reorganization, which it cognitively needs to do to get ready for further advancements.

Others say that you maybe don’t lose any of your memories at all. They are all still there in your noggin, under lock-and-key.

In any case, if we build ourselves an AI machine-child that is at a young age of say 3 or 4 years old, cognitively as equated to a human, and if we progress forward the machine-child, will we eventually need it to undertake the childhood amnesia that humans seem to encounter?

The question of how to progress or mature the AI machine-child gets us into the same kind of bog. For example, a child does not just sit in a classroom all day and night learning things. Children wander around. They sleep. They eat. They daydream. They get angry. They get happy. The question arises whether all of those experiences are inextricably bound into cognitive development.

If all of these other experiences are integral to the cognitive development, we then are faced with quite a dilemma about the AI machine-child.

Whereas we might have assumed we could build this AI cognitive machine and mature it purely in a cognitive way, perhaps we need to have it experience all of these other life related experiences to get the cognitive progression that we want. I’m sure you’ve seen science fiction movies whereby they decide that they need to raise the AI robot as though it is a human child, aiming to give it the same kinds of human values and experiences that we have as humans.

Would we build the AI machine-child and then need to treat it like a foster child and adopt it into a human family? If so, this implies that it would take years to progress the AI machine-child, since it is presumably taking the same life path as a human. Are we willing to wait years upon years for the AI machine-child to gradually develop into an adult-like AI?

I think you can see why few AI developers are pursuing this path, especially as it relates to AI self-driving cars. Imagine that you go to the head of a major automotive firm and try to explain that rather than building an AI system today that will drive self-driving cars tomorrow, you are instead proposing to develop an AI machine-child which, after being matured for the next, say, 15 years, might be able to act as a teenager, and only then could you train it to drive a car.

Boom, drop the mic. That’s what would happen. You’d get a startled look and then probably get summarily booted out of the executive suite.

For those of you intrigued by the AI machine-child approach, I’m guessing you might have already been noodling on another aspect of the matter, namely, whether there is any top-end limit to the cognitive maturing of the AI machine-child.

In essence, maybe we could keep cognitively maturing the AI machine-child and it would surpass human cognitive limits. It would just keep learning and learning and learning. This takes us into the super-intelligence AI debate. This also takes us into the debate about whether we are going to reach a point of singularity. Of course, you could try to argue that if this machine-child is somehow the equivalent of humans, perhaps it does have an end-limit, as humans seem to, and the machine-child will eventually reach a point of dementia.

Conclusion

I hope that when you next see a child playing with blocks or riding a tricycle, you will admire all of the hidden learning and cognitive maturation taking place right in front of you, though it might not be evident per se since you cannot peek into their brains. Will we only be able to ultimately achieve true AI if we can replicate this same life-cycle of cognitive maturation?

If you believe that we are currently on an AI path to a dead-end, you might find value in the AI machine-child approach. In a sense, we might need to take two steps backward to go five steps forward. The steps forward we are taking at this time may well hit a brick wall. The AI machine-child might instead be the means to get past those barriers.

The topic of the AI machine-child often gets chuckles from people, and they dismiss it as a crazy sci-fi kind of notion. They might be right. Or they might be wrong. It's not as simple as waving a hand and claiming the notion has no merit. Even if you don't buy into the notion entirely, there are bits and pieces of it that might be applied to our AI approaches of today.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

Copyright 2019 Dr. Lance Eliot
