
An understanding of self-driving cars

Definitions

An autonomous (or automated) vehicle is a vehicle that drives itself.

This is a very simple definition, and there are grades of autonomy.

Here are the levels of driving automation, according to the Society of Automotive Engineers (SAE) definition. (Note that Level 0 is not autonomous at all.)

  • Level 0 : No autonomous vehicle controls, all control by humans

  • Level 1 : Human drivers control the critical driving tasks but may get minor technological assistance, such as cruise control or stability control.

  • Level 2 : Vehicles take over both steering and acceleration/deceleration in fixed scenarios. The driver must still supervise the vehicle at all times.

  • Level 3 : Vehicles safely control all aspects of driving in a mapped environment. Human drivers still need to be on board, monitoring and managing changes in road environments or unforeseen scenarios.

  • Level 4 : No driver interaction is needed. A level 4 car can stop itself if the systems fail. These cars will be able to handle driving from point A to point B in most use-cases. However, humans can still take over if so desired.

  • Level 5 : Besides controlling the destination, humans have no other involvement in driving a level 5 car – nor can they intervene.
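The taxonomy above can be expressed as a small lookup table. Here is a minimal sketch (the descriptions are paraphrased from the list above; the names are invented for illustration):

```python
# SAE levels of driving automation, paraphrased from the list above.
SAE_LEVELS = {
    0: "No automation: the human does everything",
    1: "Driver assistance: e.g. cruise control or stability control",
    2: "Partial automation: steering plus speed in fixed scenarios, driver supervises",
    3: "Conditional automation: self-driving in mapped areas, human on standby",
    4: "High automation: no interaction needed, can stop itself on failure",
    5: "Full automation: humans only choose the destination",
}

def human_must_supervise(level):
    # At Levels 0-2 the human is responsible at all times; at Level 3
    # the human must still be on board and ready to intervene.
    # Only from Level 4 upward is the human out of the loop.
    return level <= 3
```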

Currently, Level 0, Level 1 and Level 2 vehicles are available to consumers. Some Level 3 and Level 4 vehicles are being trialled in restricted areas. There are no Level 5 vehicles anywhere.

An autonomous vehicle relies on multiple sensors of various types (with built-in redundancy) and highly developed Artificial Intelligence systems.

What is the technology behind self-driving vehicles?

Neural networks form the basis of these vehicles.

Neural networks are artificial systems that operate a little like the neurons in a human brain. It is possible to build neural networks from actual physical electrical devices, though this is rarely done in practice. Most neural networks are programs running on ordinary computers with conventional architectures.

Neural networks are trainable. Inputs are given to the network, and over time the network learns to respond well to specific stimuli. A trained network is a pattern recogniser, well suited to recognising complex inputs such as images (e.g. faces and road signs) or sounds.

This process is much like the operation of neurons within a human brain. Below is an explanation of neural networks and training algorithms.
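As a rough illustration of the training idea, here is a minimal sketch of a single artificial neuron learning by gradient descent. The task (a toy logical AND), the learning rate, and all names are invented for illustration; real driving networks have millions of neurons, but the training loop follows the same principle.

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: inputs and target outputs for a logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One neuron: two weights and a bias, adjusted by gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target            # error signal
        grad = err * out * (1 - out)  # chain rule through the sigmoid
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

def predict(x1, x2):
    # After training, the neuron has "learned" the AND pattern.
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))
```

The key point is that nobody programmed the AND rule explicitly: the weights were found automatically by repeated exposure to examples, which is the same process, at vastly greater scale, that teaches a driving network to recognise pedestrians or lane markings.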

Why are Neural Networks so important now?

Neural networks have been around in some form since the late 1960s. However, in the past, there were a number of critical issues that prevented neural networks from taking off.

These can be summarised as follows:

  1. The learning datasets were far too small.

  2. The computers were far too slow.

  3. Many of the neural network initialisation, activation and learning algorithms needed optimisation.

Since then, data sets have expanded enormously, processor speeds have increased, and neural network structures and algorithms have been improved.

Since around 2015, neural networks (now usually called Deep Learning) have surpassed all other Artificial Intelligence approaches for many tasks.

It is now possible to retrain these systems very rapidly on new data. Computers can learn to be excellent drivers, and can literally be superhuman – seeing more information and making decisions far faster than a human can. These systems have the potential to allow vehicles to surpass humans in terms of safety.

How do companies approach the problem?

[Image: Waymo (Google) self-driving vehicle. Note the LIDAR bubble on the car roof.]

There are two main approaches to autonomous driving, and a disagreement about which sensors should be used.

  1. Use a small number of heavily equipped vehicles to map specific areas. Use LIDAR (a laser-based technology) and cameras to map areas to the centimetre level. Target specific regions, and do not operate outside of mapped areas. This approach tends to use neural networks mostly for object identification.

  2. Use customer vehicles to map all regions. Use only cameras, radar and ultrasound to map regions. Implement complex neural networks (trained on immense datasets) for both pattern recognition and for wider learning processes, which allows driving in unknown areas.

Tesla uses the second approach. Most other companies use the first.

The main advantage claimed for the first (region-centric) approach is an increased level of reliability and safety; the Tesla model has its own advocates, whose arguments also cover the case for and against LIDAR.

It is presently unknown which method will win the race. The first method applies only to limited areas (an approach called ‘geofencing’), but these mapped regions are steadily expanding. This strategy can reach Level 3 within those locations, whereas Tesla is still limited to Level 2.

On the other hand, as pointed out earlier, vast amounts of data are very important for any neural network based Artificial Intelligence, and in this area Tesla has a big advantage.

A final interesting point is that if and when Tesla eventually implements full self-driving, it should apply across many areas at once, and every car with the current technology suite should be updateable to that level at the same time.

In contrast, the other approach requires customers to buy new, suitably equipped vehicles once the autonomous strategy is finalised at Level 4 or Level 5.

Geofencing : more details

Geofencing involves restricting the area where the vehicles may drive autonomously. It is generally used by companies that rely on mapping for their autonomous drive systems, as their vehicles cannot drive in areas that have not been previously mapped.
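Conceptually, a geofence check is just a point-in-region test before autonomous mode is allowed. Here is a minimal sketch using a rectangular bounding box; the coordinates and names are invented for illustration, and real systems use detailed polygon maps rather than a simple box.

```python
# Minimal geofence sketch: a rectangular region given as
# (min_lat, min_lon, max_lat, max_lon). The coordinates below are an
# invented box roughly around San Francisco.
GEOFENCE = (37.70, -122.52, 37.83, -122.35)

def inside_geofence(lat, lon, fence=GEOFENCE):
    min_lat, min_lon, max_lat, max_lon = fence
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def autonomy_allowed(lat, lon):
    # Autonomous mode is only permitted inside the mapped region;
    # outside it, control stays with the human driver.
    return inside_geofence(lat, lon)
```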

However, these companies are constantly extending their maps to new areas.

Though Tesla does map areas with its customer vehicles (important for handling unusual local circumstances), its cars can also drive outside mapped regions.

One example is the vehicles that are imported into countries that do not have an official Tesla presence. Interestingly, the Tesla autopilot still functions quite well, despite there being no previous mapping of those regions.

Keep in mind however, that Tesla’s vehicles are only really Level 2 vehicles, whereas some of the other vehicles have reached Level 3 in specific regions.

Mass data collection

With respect to the levels above, the more data is available, the more likely it is that vehicles will be able to anticipate and respond to unexpected driving situations.

Some companies are dealing with this issue by simulating driving and allowing artificial intelligence training to occur using these simulations.

Other companies send designated vehicles out into the world to collect data.

Tesla uses its customer vehicles to collect data, which allows it to collect more than any other company – a little over 1 billion miles by 2018. (Note that this total is increasing rapidly as Tesla builds more and more Model 3s and delivers them to customers.)

Issues with autonomous drive

As mentioned above, vehicles from Level 1 to Level 5 are driven by a combination of mapping and a variety of Artificial Intelligence subsystems.

At present, no Level 4 or Level 5 vehicles are available to consumers.