Software is often a differentiating factor between one vehicle and the next in terms of capability, performance, and self-driving experience. Software is critical to a vehicle's ability to safely operate advanced driver assistance systems and autonomous driving features. Autonomous vehicle software also makes use of artificial intelligence (AI) to understand the surrounding environment and to recognize and classify objects. The AI then works to predict what will happen next and passes that information to a decision model that determines which course of action to take.
Autonomous vehicle software follows a compute model similar to the one humans use, known as the "see-think-do" approach. For humans, this occurs almost without thought. The model includes perception (seeing or sensing something), followed by evaluating available options and weighing the potential outcomes. Finally, a decision is made, and a course of action is followed. For a vehicle's compute engine, this process uses the car's sensors, including cameras, lidar, and radar, to perceive the surroundings, predict movement paths, and evaluate options before issuing an instruction on the course to take and any corrections to make.
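A minimal sketch of this loop in Python appears below. The `sense`, `plan`, and `act` functions, the command fields, and the 10 Hz rate are all illustrative assumptions standing in for real perception, planning, and actuation subsystems, not any vendor's actual API.

```python
# A toy "see-think-do" control loop. All functions are hypothetical stand-ins.
import time

def sense():
    """See: gather raw readings from cameras, lidar, and radar (stubbed)."""
    return {"camera": [], "lidar": [], "radar": []}

def plan(readings):
    """Think: predict movement paths and pick a course of action (stubbed)."""
    # A real planner would score candidate trajectories here.
    return {"steering_deg": 0.0, "throttle": 0.2, "brake": 0.0}

def act(command):
    """Do: issue the chosen command to the actuators (stubbed)."""
    print(f"steer={command['steering_deg']} throttle={command['throttle']}")

# The loop repeats many times per second; 10 Hz here is an arbitrary choice.
for _ in range(3):
    act(plan(sense()))
    time.sleep(0.1)
```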
For an autonomous car to work, it requires sensors, actuators, complex algorithms, machine learning systems, and processors to execute software. These components allow the car to create and maintain a map of its surroundings based on a variety of sensors located around the vehicle. These can include radar sensors that monitor the position of other vehicles; video cameras that detect traffic lights, read road signs, and track other vehicles and pedestrians; lidar for measuring distances, detecting road edges, and identifying lane markings; and ultrasonic sensors in the wheels for detecting curbs or other vehicles when parking. The onboard software then processes this sensory input, plots paths, and controls the acceleration, braking, and steering. Hard-coded rules, obstacle avoidance algorithms, and object recognition help the car follow traffic rules and navigate the road.
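As a rough illustration of how such sensory input might feed a single map of the surroundings, the sketch below folds detections from several modalities into one object list. The `Detection` structure, the sensor names, and the coarse position key are assumptions made for illustration.

```python
# Hypothetical fusion of per-sensor detections into one world model.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar", "camera", "lidar", or "ultrasonic"
    kind: str          # e.g., "vehicle", "pedestrian", "curb"
    x_m: float         # position ahead of the car, meters
    y_m: float         # lateral offset, meters

def update_world_model(world, detections):
    """Append new detections, keyed by coarse position; newest wins."""
    for d in detections:
        world[(round(d.x_m), round(d.y_m))] = d
    return world

world = {}
frame = [
    Detection("radar", "vehicle", 42.0, -1.5),
    Detection("camera", "pedestrian", 12.3, 3.1),
    Detection("ultrasonic", "curb", 0.8, 1.9),
]
update_world_model(world, frame)
print(len(world), "tracked objects")
```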
As the challenges in developing autonomous cars have mounted and the computation requirements have increased, more technology companies and automotive OEMs have begun working together to develop software "ecosystems" that support the development of autonomous vehicles, create some industry standardization, and work toward making autonomous vehicles safe. This has included developing industry standards for those platforms, allowing for interoperability in the software and enabling automakers to integrate various pieces of software as necessary, based on the needs and expectations for the vehicle in question. Such standardization also allows software platforms to integrate machine learning, self-healing maps, artificial intelligence, V2X connectivity, and computer vision capabilities.
As part of the development of software platforms or "ecosystems" for autonomous vehicles, operating system (OS) platforms have been developed for them. An OS offers a platform on which autonomous services can be integrated, with the OS controlling the car's core capabilities and working to keep passengers and the driving environment safe. To run, these OSs rely on electronic control units (ECUs), which act as distributed brains for the OS and the autonomous vehicle (a toy sketch of ECUs exchanging messages follows the list below). ECUs are similar to minicomputers, varying in size, purpose, and OS. They can control various vehicle applications, such as steering, navigation, tracking, engine control, steering stability, and active suspension. Some OSs for autonomous vehicles include the following:
- QNX Neutrino
- Wind River VxWorks
- Green Hills INTEGRITY
- NVIDIA DRIVE OS
- Mentor Nucleus OS
- Linux
- Android Automotive OS
- Apple CarPlay
- Robot Operating System (ROS)
- Microsoft
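To picture the distributed-brain role that ECUs play, the toy sketch below shows two ECUs exchanging a command over a shared bus. It is deliberately simplified; real ECUs communicate over automotive buses such as CAN, and the ECU names and message format here are invented.

```python
# Toy ECUs passing messages over a shared in-memory "bus" (a queue).
from queue import Queue

bus = Queue()

def steering_ecu():
    """Publish a steering request onto the bus."""
    bus.put(("steering", {"angle_deg": -3.5}))

def chassis_ecu():
    """Consume a message and 'actuate' the relevant subsystem."""
    topic, payload = bus.get()
    print(f"chassis ECU applying {topic} command: {payload}")

steering_ecu()
chassis_ecu()
```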
As noted above, the amount of computation necessary for an autonomous car can be quite demanding. To help with this compute load and to increase the safety of autonomous vehicles, connectivity has been suggested as a solution. In this scheme, each car becomes an edge-compute platform connected to other cars, similar to an IoT environment, with the vehicles communicating with each other about driving conditions, road issues, and other data that helps them work together. In some future-focused visions, this could even allow traffic lights to be removed from streets, as the cars would communicate with each other about which car goes when and where while also monitoring for pedestrians. This connectivity goal has been furthered by the arrival and promise of 5G networks and developments in data optimization.
Connecting autonomous vehicles can also reduce the compute needs of any single car, allowing autonomous vehicles to increase their compute and software performance while lowering development costs. Connectivity and data transfer speeds can be further improved by developing frameworks that various developers and ecosystem architectures can use, making interoperability between vehicles faster and more futureproof while minimizing overall system complexity and cost.
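The sketch below illustrates the kind of vehicle-to-vehicle message such connectivity implies. The field names are loosely modeled on the idea of a basic safety message, but the format and the hazard-reaction rule are assumptions, not any actual V2X standard.

```python
# Hypothetical V2V broadcast: each car shares position, speed, and hazards.
import json

def make_safety_message(vehicle_id, lat, lon, speed_mps, hazard=None):
    """Serialize a minimal status message for nearby vehicles."""
    return json.dumps({
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "hazard": hazard,          # e.g., "ice", "stalled_vehicle", or None
    })

def should_slow_down(own_route_hazards, message):
    """React if another car reports a hazard relevant to our route."""
    data = json.loads(message)
    return data["hazard"] in own_route_hazards

msg = make_safety_message("car-17", 40.7128, -74.0060, 13.4, hazard="ice")
print(should_slow_down({"ice", "debris"}, msg))   # True -> reduce speed
```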
Neural networks give vehicle software platforms the ability to detect, recognize, and classify objects. Further, with the integration of computer vision algorithms, they can effectively track the painted lane lines on the road. The neural networks used in this case are trained on thousands of driving hours and millions of miles of real and simulated roads. The simulations used are similar to video games, allowing the models to encounter both everyday events and unusual occurrences, better preparing them for real-world driving.
Similarly, convolutional neural networks (CNNs) can detect, classify, and segment images, for example separating the pavement (sidewalk) from the road. Alternatively, vehicle platforms can use recurrent neural networks (RNNs), which are temporally based and typically involve network architectures containing loops. Either way, using a neural network of any type requires compute hardware capable of running the network's inference, or "computing" what is seen, with low latency so the vehicle can work in the real world and in real time.
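A minimal sketch of such a CNN follows, assuming PyTorch is available. The tiny two-layer architecture and the single road/non-road output are illustrative and far smaller than anything that would run in a vehicle.

```python
# A toy fully convolutional network: camera frame -> per-pixel road mask.
import torch
import torch.nn as nn

class RoadSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to a single "road" logit per pixel.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.encoder(x))

# One camera frame (batch of 1, RGB, 96x160 pixels) -> road probabilities.
frame = torch.rand(1, 3, 96, 160)
road_prob = torch.sigmoid(RoadSegmenter()(frame))
print(road_prob.shape)  # torch.Size([1, 1, 96, 160])
```

In deployment, the same inference would have to run on dedicated low-latency hardware at camera frame rates.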
Autonomous vehicle perception sensing collects data from all of a vehicle's sensors and processes it into an understanding of the world around the vehicle, much as a human driver uses sight to perceive and understand their driving position. To develop perception sensing, an autonomous vehicle requires vision, radar, and lidar sensing modalities. Each brings its own strengths and weaknesses, but in an overlapping system that uses all of them, data from each sensor feeds into a perception system that combines the different data points into a complete picture.
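One simple way to picture this overlap is to weight each modality's detections by how reliable it is under the current conditions. The weights and condition labels below are invented for illustration; real systems derive them from detailed sensor models.

```python
# Hypothetical confidence-weighted fusion across camera, radar, and lidar.
MODALITY_WEIGHTS = {
    "clear": {"camera": 0.5, "radar": 0.2, "lidar": 0.3},
    "fog":   {"camera": 0.1, "radar": 0.5, "lidar": 0.4},
}

def fused_confidence(per_sensor_conf, condition):
    """Blend per-sensor confidences that an object exists at a location."""
    weights = MODALITY_WEIGHTS[condition]
    return sum(weights[s] * c for s, c in per_sensor_conf.items())

# Camera barely sees the object in fog, but radar and lidar are confident.
readings = {"camera": 0.2, "radar": 0.9, "lidar": 0.8}
print(round(fused_confidence(readings, "fog"), 2))  # 0.79 -> treat as real
```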
Another important part of perception sensing is detecting traffic and predicting its behavior in inclement weather. The sensor arrays and computing used in perception sensing can allow the vehicle to detect, track, and predict the movements of objects regardless of weather. This is especially important in near-zero-visibility conditions during winter storms, rainstorms, and fog, where radar and lidar can detect objects that optical systems cannot.
A portion of any autonomous vehicle software is devoted to navigation. Various navigation systems can be integrated into the software, including GPS and related satellite information, such as traffic systems, which can create a common information field that cars become an integrated part of. Navigation can also be part of the connectivity between vehicles, collecting data from all vehicles to optimize routes and anticipate driver needs, and keeping track of weather forecasts and road reports to make the drive as safe as possible. Connecting cars can further help them avoid accidents while staying aware of traffic situations. In a navigation stack, the hardware required includes a GPS receiver, an inertial measurement unit (IMU), a compass, and a data processing computer to connect and integrate the navigation data into the larger autonomous vehicle software.
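The sketch below gives a rough sense of how a navigation stack might combine those components, blending drift-prone IMU dead reckoning with noisy GPS fixes using a simple complementary filter. The one-dimensional state, sensor rates, and blend weight are illustrative assumptions, not values from any production system.

```python
# Toy GPS + IMU fusion via a complementary filter (1-D for simplicity).
def fuse_position(gps_pos, imu_pos, alpha=0.98):
    """Blend a drift-prone IMU estimate with a noisy GPS fix."""
    return alpha * imu_pos + (1.0 - alpha) * gps_pos

# Dead reckoning: integrate IMU acceleration between GPS fixes.
dt = 0.01          # 100 Hz IMU (assumed rate)
pos, vel = 0.0, 10.0
for step in range(100):
    accel = 0.2                      # m/s^2 from the IMU (held constant here)
    vel += accel * dt
    pos += vel * dt
    if step % 10 == 0:               # GPS arrives at 10 Hz (assumed rate)
        gps_fix = pos + 0.5          # offset stands in for GPS noise
        pos = fuse_position(gps_fix, pos)
print(round(pos, 2))
```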
As part of autonomous navigation, especially in cross-country rather than urban environments, the vehicle software requires components and computing for obstacle detection and terrain classification. This includes using geometric descriptions of the scene and a terrain-typing component in the perceptual system. Detecting obstacles and classifying the terrain allows an autonomous vehicle to plan its path and choose the most efficient route toward a desired goal. To develop this capability, the software requires new sensor processing algorithms developed for cross-country navigation and sensor systems such as a color stereo camera and a single-axis ladar. Using both systems together increases the potential for obstacle detection, while the single-axis ladar, paired with an appropriate algorithm, can discriminate between types of terrain.
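As a rough sketch of geometry-based terrain typing, the code below bins ladar-style 3-D returns into ground cells and labels each cell by its height spread. The grid size, thresholds, and class names are illustrative assumptions.

```python
# Toy terrain classification from 3-D range returns in vehicle coordinates.
import numpy as np

def classify_cells(points, cell=0.5, obstacle_h=0.4, rough_h=0.15):
    """Bin points into ground cells and label each by height spread."""
    cells, labels = {}, {}
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append(z)
    for key, zs in cells.items():
        spread = max(zs) - min(zs)
        if spread > obstacle_h:
            labels[key] = "obstacle"      # e.g., rock or tree trunk
        elif spread > rough_h:
            labels[key] = "rough"         # e.g., vegetation or rubble
        else:
            labels[key] = "traversable"
    return labels

ground = np.random.rand(300, 3) * [10, 10, 0.05]   # flat synthetic terrain
rock = np.array([[4.2, 4.3, 0.0], [4.25, 4.3, 0.6], [4.3, 4.35, 0.9]])
scan = np.vstack([ground, rock])
print(sorted(set(classify_cells(scan).values())))
```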
Vehicle motion control refers to technologies capable of influencing the longitudinal, lateral, and vertical dynamics of a vehicle. These include steering, brakes, dampers, and electronic control units, with software increasingly integrated into each. Motion control is a necessary part of the autonomous vehicle technology stack, as it allows the automated and autonomous software to control different parts of the vehicle; intelligent networking across the vehicle allows it to achieve better driving dynamics, driver safety, and comfort. The software models used with motion control systems can coordinate subsystems to guarantee tracking performance and ensure the autonomous vehicle behaves according to a prescribed performance model.
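A common building block for such tracking is a feedback controller. The sketch below shows a PID loop holding a target speed against a crude first-order vehicle model; the gains and the model are illustrative assumptions, not a tuned production controller.

```python
# Toy longitudinal motion control: a PID loop tracking a target speed.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.8, ki=0.1, kd=0.05)
speed, target, dt = 0.0, 15.0, 0.05          # m/s, m/s, 20 Hz control loop
for _ in range(200):
    throttle = controller.step(target - speed, dt)
    speed += (throttle - 0.1 * speed) * dt   # crude first-order vehicle model
print(round(speed, 2))                       # approaches the 15 m/s target
```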
Motion control is an important component in the "think" and "do" steps of the "see-think-do" model on which autonomous software operates. It can keep a vehicle on a specific path, or it can adapt the vehicle to specific driving conditions or unforeseen road conditions. These systems and their prescribed performance characteristics are also being developed to ensure driver and passenger comfort. This means the model not only has to avoid collisions but must do so in a way that gives the driver and passengers a sense of security and comfort. This also includes calibrating how cautious an autonomous vehicle should be in a multi-lane highway scenario, where an overly cautious autonomous vehicle driving slower than surrounding traffic (or exactly at the limit) can itself cause accidents.
Another important part of autonomous vehicle systems is the ability of the car to monitor the state of its various parts. Conventional vehicle software can suggest changing the oil or filters, but autonomous vehicle software can also monitor the driver's body and state, for example using blink-rate sensors to ensure the driver does not fall asleep. This can extend to anti-theft systems, which can include authorization processes that make theft more difficult, requiring a key and even a smartphone to verify that the person entering the vehicle is the rightful one. Some systems go as far as including retina scanners or fingerprint sensors to identify drivers, with different profiles loaded into the car for each allowed driver.
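As a toy illustration of driver-state monitoring, the sketch below flags drowsiness when too many prolonged eye closures occur in a recent window. It assumes an upstream vision system already reports closure durations, and the thresholds are invented for illustration.

```python
# Hypothetical blink-rate drowsiness check over a recent time window.
def is_drowsy(closure_durations_s, long_blink_s=0.4, max_long_blinks=3):
    """Flag the driver if too many prolonged eye closures occur."""
    long_blinks = sum(1 for d in closure_durations_s if d >= long_blink_s)
    return long_blinks >= max_long_blinks

recent_window = [0.1, 0.5, 0.12, 0.45, 0.6]   # seconds, last minute of blinks
print(is_drowsy(recent_window))                # True -> trigger an alert
```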
As noted above, simulations are used to support autonomous vehicles and autonomous robotic platforms, offering a simulacrum of the autonomous vehicle platform in a lower-stakes environment. Simulating autonomous vehicle software can be done in a physical environment, such as using robotic platforms to test how the software reacts to different conditions. This can help train the software to respond better to conditions in real-use cases.
Simulators can also be completely software-based, using a virtual reality or video game-like environment and engine to test the autonomous vehicle against various traffic, pedestrian, weather, parking lot, and obstacle conditions, among others. This can help train the autonomous vehicle software without requiring a physical platform, helping the software learn the parameters of a given vehicle platform and better understand the vehicle in the context of its environment.
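The sketch below gives a flavor of software-based scenario testing: a stubbed planner is stepped through invented scenarios and checked for collisions. The scenario format, the planner rule, and the pass criterion are all assumptions.

```python
# Hypothetical software-in-the-loop scenario runner.
def plan_speed(obstacle_distance_m):
    """Stub planner: slow down as obstacles get close (illustrative rule)."""
    return min(15.0, max(0.0, obstacle_distance_m - 5.0))

def run_scenario(name, start_distance_m, steps=50, dt=0.1):
    """Step the ego car toward a stopped obstacle; fail on contact."""
    distance = start_distance_m
    for _ in range(steps):
        distance -= plan_speed(distance) * dt
        if distance <= 0:
            return f"{name}: FAIL (collision)"
    return f"{name}: PASS (closest approach {distance:.1f} m)"

for scenario, d0 in [("stopped car, dry road", 60.0), ("short gap", 8.0)]:
    print(run_scenario(scenario, d0))
```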