When people think of autonomous cars, the image that usually comes to mind is a fully autonomous vehicle with no human driver at all. The reality is more complicated: not only are there different levels of vehicle automation (autopilot-style cruise control, for example, is an old feature), but artificial intelligence also works inside the car to make the trip safer for the driver and passengers. AI is even behind a technology that lets the car understand what the driver wants in the middle of a noisy environment, thanks to its ability to read lips.
Ten years ago, most work on autonomous vehicles took place in research laboratories and teaching institutions. As recently as five years ago, only Google and a handful of companies were testing autonomous vehicles. Today the pace in the sector is frantic: in California alone, between 30 and 50 companies are already licensed to test and operate driverless vehicles.
The United States and China lead the world race to develop autonomous vehicles. Germany and Japan, despite being famous for the cars they produce, are behind. “The fundamental difference is AI,” said Tony Han, one of the founders of JingChi, a China-based autonomous-vehicle company.
The two countries also lead in regulating autonomous cars. Three megatrends are behind all this interest: the growing popularity of electric cars, the rise of the sharing economy that underpins ride-hailing companies such as Uber and Lyft, and advances in artificial intelligence.
Most autonomous vehicle companies are developing technology for what they call a Level 4 vehicle. Automation is graded on a scale of levels. Level 1, the most basic, covers driver-assistance functions such as autopilot-style cruise control, which has been in use for years. Level 5, in which the vehicle is fully autonomous, is the most advanced. Level 4, one grade below, is highly automated: the car can operate without the driver's intervention or attention in certain conditions, such as areas separated by barriers or with controlled traffic.
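As a rough sketch of this hierarchy, the levels can be captured in a simple lookup table. The descriptions below are paraphrased for illustration, not the formal SAE J3016 wording:

```python
# Illustrative summary of the automation levels; descriptions are
# paraphrased for this sketch, not the formal SAE J3016 definitions.
AUTOMATION_LEVELS = {
    1: "Driver assistance: basic functions such as autopilot-style cruise control",
    2: "Partial automation: combined steering and speed control under driver supervision",
    3: "Conditional automation: the car drives but may ask the driver to take over",
    4: "High automation: no driver attention needed within a limited operating domain",
    5: "Full automation: no human driver required anywhere",
}

def requires_driver_attention(level):
    """Only Levels 4 and 5 can operate without the driver's attention
    (Level 4 only inside its defined operating domain)."""
    return level < 4
```

The distinction the article draws between Level 4 and Level 5 is precisely the `level < 4` boundary: below it, a human must stay in the loop.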
AI inside the car
Danny Shapiro, senior director of the automotive business at chipmaker Nvidia, said technology companies take the development of autonomous vehicles very seriously because there is so much at stake. “It is not a recommendation engine for Netflix,” he said during the conference. “AI has to be precise.” That means it requires “extreme” computing capacity and a great deal of code, Shapiro said. In the trunk of an autonomous vehicle, computers and powerful graphics processing units run deep-learning models to analyze all the incoming data and determine, for example, whether an object in front of the vehicle is a person, another car, or a fire hydrant.
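The final step of that classification can be caricatured as picking the label with the highest confidence score. This is only an illustration of the decision rule; real systems compute these scores with deep neural networks over camera and lidar data, and the numbers here are made up:

```python
# Toy version of the last step of object classification: each candidate
# class gets a confidence score, and the highest score wins.
def classify(scores):
    """Return the label with the highest confidence score."""
    return max(scores, key=scores.get)

print(classify({"person": 0.91, "car": 0.06, "fire hydrant": 0.03}))  # person
```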
Although it will take time for autonomous vehicles to reach the market, AI is already transforming the interior of cars. Front-facing cameras can identify the people inside the vehicle and track the driver's gaze to see whether they are about to fall asleep or are distracted; they can even read lips. Sensors and cameras on the outside of the car work together with this interior technology to increase safety. For example, the car gives an audible warning of a “traffic hazard at a crossing” if another vehicle is about to run a red light. It can also say things like “Attention, there is a bike heading towards the roundabout!” to alert a driver who is about to change lanes.
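The warnings described above amount to mapping exterior detections, combined with the driver's apparent intent, to spoken alerts. A minimal sketch, with event names and conditions invented for illustration:

```python
# Hypothetical alert logic pairing exterior sensing with in-cabin warnings.
# The event types and messages are invented; only the warning phrases
# themselves come from the examples in the text.
def hazard_alert(event):
    """Map a detected exterior event to a spoken warning, or None."""
    if event.get("type") == "red_light_runner":
        return "Traffic hazard at a crossing"
    if event.get("type") == "cyclist_nearby" and event.get("driver_intends_lane_change"):
        return "Attention, there is a bike heading towards the roundabout!"
    return None  # nothing to warn about
```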
In fact, one of the main objectives of autonomous vehicle companies is to make driving safer. Human error is responsible for 94% of car collisions, said Jeff Schneider, senior engineering manager at Uber. He noted that 50% of the errors behind accidents were recognition errors: the driver was not paying attention or did not see something approaching. The other 50% were decision errors: the driver was going too fast or misread the situation.
Autonomous vehicles can address both types of error. Recognition problems are mitigated with sensors, radar, cameras, Lidar (a laser-based remote-sensing system) and other tools: the car perceives the 3D positions of objects around it, receives 360-degree imagery from high-resolution cameras, and has access to other relevant data such as the speed of those objects. Meanwhile, sophisticated onboard computer systems analyze the environment to make the right decision.
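Combining those sensor streams is a fusion problem: each sensor is good at something different. A toy illustration under invented numbers (real perception stacks use Kalman filters and deep networks, not simple averaging):

```python
# Toy fusion of two sensor reports on the same object. Cameras are
# strongest at classification, radar measures speed directly, and the
# position estimate blends the two. All values here are invented.
def fuse_detections(camera, radar):
    return {
        "cls": camera["cls"],                 # class label from the camera
        "x": (camera["x"] + radar["x"]) / 2,  # blended position estimate
        "y": (camera["y"] + radar["y"]) / 2,
        "speed": radar["speed"],              # speed measured by radar
    }

obj = fuse_detections(
    camera={"cls": "pedestrian", "x": 12.0, "y": 3.0},
    radar={"x": 12.4, "y": 2.8, "speed": 1.4},
)
print(obj)  # position ≈ (12.2, 2.9), class "pedestrian", speed 1.4
```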
One way to improve precision is to build in redundancy. For example, if a traffic sign is hidden or unclear for some reason, a backup measure ensures the autonomous vehicle is not confused: the car's own map, Schneider said, indicates that there is a sign on the road at that spot. These vehicles also analyze a huge amount of data so they can operate in varied weather conditions such as snow, rain, frost, and flooding.
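The map-based redundancy Schneider describes can be sketched as a simple fallback rule. The function and field names here are invented for illustration:

```python
# Sketch of map-based redundancy: trust the live camera reading when
# there is one, and fall back on the stored map when the sign is
# occluded or unreadable. Names are invented for this illustration.
def effective_sign(camera_sign, map_sign):
    """Prefer the camera's reading; otherwise use the prior map."""
    return camera_sign if camera_sign is not None else map_sign

print(effective_sign("stop", "stop"))  # camera confirms the map: stop
print(effective_sign(None, "stop"))    # occluded sign, map takes over: stop
```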
Autonomous vehicle companies even use computer-generated scenarios to train the car for situations such as a dazzling sunset. “Using several servers, we can generate almost 490,000 km of travel in just five hours, in addition to testing algorithms on all paved roads in the U.S. in just two days,” said Nvidia's Shapiro.
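Shapiro's figures imply a simulation throughput we can check with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check on the quoted simulation figures:
# 490,000 simulated km in five hours across the server fleet.
km = 490_000
hours = 5
rate = km / hours
print(rate)  # 98000.0 simulated km of driving per hour
```

That is roughly 98,000 km of simulated driving per hour, a pace no physical test fleet could match.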
These are, of course, complex tasks for the vehicle. “Put yourself in the position of the person who writes the code,” Schneider said: you have to account for people crossing the street, other cars on the road, advertising signs, traffic signs, lanes, bicycles, and pedestrians, among other things. “It is absolute chaos.”
For skeptics who regard the autonomous car as a castle in the air, it helps to review how far autonomous vehicles have come. In the 1980s, Navlab, a Carnegie Mellon University project, was already equipping vans with computers and sensors for automated and assisted driving. In 1995, the same university's “No Hands Across America” project completed 98% of the journey between Pittsburgh and southern California autonomously, including a 113-kilometer stretch without any human intervention, Schneider said.
In 2000, the university began working with all-terrain vehicles, adding GPS and Lidar to make it easier to detect objects and steer around them. Google, too, soon recognized the potential of autonomous vehicles and launched its self-driving project. Since then, AI, machine learning, and deep learning have only improved.
Why would consumers feel comfortable aboard a self-driving vehicle? Judging by Uber's experience testing autonomous vehicles in Pittsburgh and Phoenix, Schneider said, the public seems to be warming to them. Despite initial fears that people would be frightened by this type of car, “we found the exact opposite,” he said. Since a passenger cannot specifically choose an autonomous Uber, some users repeatedly request rides in the hope of being matched with one. However, one thing could delay autonomous cars' development and entry into the mass market: the business model. At the moment, it is still cheaper to own a car than to take an Uber everywhere.
The Present and Future of AI in Cars
The technology in cars is reaching levels we could not have imagined a few years ago. These advances have greatly improved vehicles' safety and comfort, but manufacturers want to go further. Artificial intelligence is already a primary objective in the sector, and companies are working to implement it as well as possible and make our cars self-reliant.
The fully autonomous car, however, is not experts' only goal. Artificial intelligence can be applied in many other ways to improve our driving by learning our routines. As already mentioned, the headline objective, pursued largely through deep learning, is the car's total autonomy. Yet the human command of emotion, vital in the dangerous split-second decisions we make behind the wheel, is hard to imitate. Moreover, no device can store every situation that may occur, which makes it difficult to define patterns for such cases.
Toyota to invest $1 billion
Artificial intelligence will be so important in the future that the most farsighted manufacturers are marshaling significant resources to develop it. Toyota will invest $1 billion over five years in a new company, the Toyota Research Institute (TRI), located in Silicon Valley and dedicated to advancing these technologies. Around 200 researchers are being recruited for this major project, created in collaboration with Stanford University and the Massachusetts Institute of Technology.
TRI will investigate autonomous driving and the behavior patterns of human beings inside and outside the car. It will pursue technologies that make driving easier, with a special interest in helping the elderly, a growing segment of the main societies. It will also develop processors that differentiate the environment more accurately, and supercomputers compact enough to be mounted in vehicles affordable to everyone.
More intelligence, more security
In the field of safety, artificial intelligence will let a self-driving car better understand its surroundings, calculate risks, and act in anticipation of different hazards. You cannot put every object, sidewalk, street lamp, building, or parked car on every street in the world onto a hard disk, but you can now build computers that distinguish objects, living things, and vehicles their cameras and sensors have never seen before, and sort them into categories.
Differentiating an animal such as a dog from a motorcycle parked on the sidewalk helps the computer calculate whether the animal is in motion and the risk of it crossing the vehicle's path. More important still is differentiating a cyclist from a pedestrian, regardless of whether they are moving or how big each one is. A car with artificial intelligence can recognize a sidewalk no matter how high it is, how far it is from the lane, or what it looks like.
The computer constantly learns and understands everything it sees, even in places it has never been, and will react with warnings to the driver or automatic actions when it calculates that there is a risk, such as a possible collision.
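The distinctions drawn above (what the object is, whether it is moving, how close it is to the lane) can be sketched as a simple risk rule. The categories and thresholds below are invented for the sketch, not taken from any real system:

```python
# Illustrative risk rule: a living thing moving near the lane is riskier
# than a parked motorcycle. Categories and thresholds are invented.
def risk_level(obj_class, moving, distance_to_lane_m):
    living = obj_class in ("pedestrian", "cyclist", "animal")
    if living and moving and distance_to_lane_m < 2.0:
        return "high"    # living thing moving close to the roadway
    if moving:
        return "medium"  # anything in motion deserves attention
    return "low"         # static objects, e.g. a motorcycle parked on the sidewalk

print(risk_level("animal", moving=True, distance_to_lane_m=1.0))       # high
print(risk_level("motorcycle", moving=False, distance_to_lane_m=0.5))  # low
```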
Another field that will evolve in the coming years involves insurers and intelligent processes. Predicting what is going to happen is essential for a company dedicated to insuring our belongings, so any tool that helps assess possible risks is welcome.
For users, there will also be applications that estimate what it will cost to repair a car after an accident and how much of that the insurance will cover.
This race toward artificial intelligence in cars shows no letup. Until fully autonomous driving arrives (not expected until the next decade), improvement will be constant. We will see how the industry handles the technological and ethical dilemma of entrusting human life to a machine on the way to a transcendental milestone in the history of transport.