The computing power required to make autonomous vehicles viable on public roads is readily available, so why aren't algorithms driving us everywhere? We consider a few of the technicalities around autonomous vehicles.
Big data is controlling our lives. The reality depicted in the film Minority Report is slowly revealing itself in our everyday lives. Your GPS-enabled smart devices serve you location-based marketing and services in real time. These alerts are occasionally handy, but mostly annoying and, philosophically... terrifying.
It’s very much the same technology that is driving the artificial intelligence (AI) transformation we’re being promised in cars. Soon, traffic will flow seamlessly, as thousands of cars, with millimetric precision, merge onto highways and lemming into cities from the suburbs each day. And there will be no more collisions either. Bothersome cosmetic-damage crashes and awful, possibly life-threatening accidents can and will be avoided by the omnipotence of autonomous driving technology. These are the promises.
The truth is somewhat different, and sobering. Issues facing AI in cars are potentially crippling the cause of the self-driving machine. And definitions, in the tradition of all things futuristic, are opaque.
Artificial Intelligence in cars: Not all that new
The influence of digitisation on the car hasn’t been all Tesla, Google and Apple. AI isn’t new to the automobile and autonomous driving technology, even less so. When that automatic transmission anticipates your shifting pattern demands, due to having harvested kilometres of throttle mapping data as you drive, that’s AI.
When you were cruising down to the coast – as much of South Africa does over the holidays – and engaged cruise control, perhaps even a radar-guided system where you are not required to cover either the throttle or brake pedal, that’s autonomous driving. It’s here, it’s happening – it’s hardly new. The issue is that we’ve become so beholden to the promise of technology having an infallible ability to solve problems that there’s a lack of scepticism regarding the AI and autonomous self-driving project. A car is not a smartphone. Its potential for causing physical damage and fatalities is greater by orders of magnitude.
Original testing of autonomous cars started in the '90s with the Mercedes-Benz S-Class.
The first fully autonomous cars, capable of driving vast distances on public roads amidst other traffic, weren’t Silicon Valley start-up prototypes navigating through northern California. They were 3rd-generation S-Class W140s and the work of a gifted German robotics engineer, Ernst Dickmanns. In 1994, an autonomous S-Class successfully drove more than 1 000 km on the hellishly trafficked Autoroute A1, which passes Charles de Gaulle airport, outside Paris, without hindrance or endangering others.
A bit more than two decades later, with exponential increases in computer processing power and cameras with far greater sensory capacity than anything available to Dickmanns in 1994, why does the viable, self-driving car remain stubbornly on the horizon?
The problem: everyone else
Tesla has been foremost in claiming that its cars are capable of fully autonomous driving. As a brand founded on principles of disruptive technology and peerless digital innovation, one would expect no less – but there are a great many things that Tesla’s Autopilot cannot do.
Foremost amongst these is operating with absolute safety. The closest engineering analogy to autonomous driving technology is the autopilot function in aviation – which is robustly tested and rarely has to contend with the complexity of multiple collision objects on its exact flight path. Driving on a road, there are collision prospects everywhere – all of the time.
Bosch would never release a new, highly sophisticated ABS braking or ESP stability system to the market if a fatal accident had occurred while the system was being tested during its developmental phase; Tesla’s claims for Autopilot, by contrast, are at odds with the evidence. The system has failed – fatally – at least once, and it is limited by the quality of data its cameras and sensors can gather to achieve safe autonomous steering, throttle and brake inputs. And that data extends only as far as its multitude of cameras can see: road markings and infrastructure.
In California, home to most of Tesla’s market and the world’s cutting-edge AI engineering, the road infrastructure is excellent. In many other parts of the world, it is not. Volvo is championing its autonomous driving technology too, testing a fleet of XC90s in Gothenburg – a Nordic city with near-perfect roads and obsessively obedient, disciplined drivers, cyclists and pedestrians.
One of the current pioneers of autonomous cars, Tesla has not been without incident.
Faded road markings and undefined shoulders are the undoing of autonomous driving technology. What works in San Francisco and Sweden will not be applicable to Soweto – or even Sandton. Compounding the issue of data sourcing – the huge disparity in road infrastructure and marking quality – is an even greater challenge: engineering into the autonomous algorithm a capability for human anticipation. There is speculation that a level of aggression will have to be integrated into autonomous driving systems to avoid cars remaining static when inconsiderate drivers refuse to let them merge on highways, or pedestrians choose to cross as they wish – in droves.
The detection range of the autonomous driving system’s cameras on Tesla’s Model 3 is at best 250 metres, within quite a narrow field of view, and for fog and night driving you’re relying on a radar system with a maximum object-detection range of 160 metres. Given how steeply braking distances grow as speeds increase, a margin measured in 100 or 200 metres seems uncomfortably slight.
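A back-of-the-envelope calculation illustrates the point. The sketch below assumes a simple constant-deceleration model, a one-second reaction time and typical dry/wet friction coefficients – all illustrative assumptions, not manufacturer figures:

```python
# Rough stopping-distance sketch: reaction distance plus v^2 / (2 * mu * g).
# The friction coefficients and 1-second reaction time are illustrative
# assumptions, not Tesla (or any manufacturer's) specifications.
G = 9.81             # gravitational acceleration, m/s^2
REACTION_TIME = 1.0  # seconds before braking actually begins (assumed)

def stopping_distance(speed_kmh: float, mu: float) -> float:
    """Total stopping distance in metres for a given speed and road friction."""
    v = speed_kmh / 3.6                # convert km/h to m/s
    reaction = v * REACTION_TIME       # distance covered before braking starts
    braking = v ** 2 / (2 * mu * G)    # constant-deceleration braking distance
    return reaction + braking

for speed in (100, 120, 140):
    for surface, mu in (("dry", 0.7), ("wet", 0.4)):
        print(f"{speed} km/h, {surface} road: ~{stopping_distance(speed, mu):.0f} m")
```

Even under these generous assumptions, the model puts stopping from 120 km/h on a wet road at roughly 175 metres – already beyond a 160-metre radar detection range.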
The autonomous ecosystem: an impossible dream
Volvo has promised a self-driving customer car by 2021. Most German premium models already offer a credible level of semi-autonomous driving functionality – and Tesla’s Autopilot can do certain things, on a predictably unchallenging highway.
Even so, a true autonomous driving experience – a car which will navigate city traffic, safely climb, crest and descend a technical mountain pass and drive itself on a gravel road in the Karoo – is highly unlikely anytime soon. AI engineers have done a magnificent job of integrating systems and leveraging the latest camera, graphics card and sensor technology to saturate autonomous systems with a stream of decision-making data. Their biggest encumbrance is something beyond the control of car companies. Unless there’s a massive infrastructure project to engineer all roads to a standard fit for autonomous driving, with high-contrast road markings, the data stream will never be 100% reliable, and therefore never 100% safe.
Volvo has promised self-driving cars for 2021, even partnering with Uber to speed up development.
The same logic applies to fellow road users: unless everybody has a car with 360-degree camera coverage and radar, the ecosystem remains imperfect and liability for collisions will have to reside with someone – which is the reason Mercedes-Benz never evolved those aforementioned autonomous W140s from 1994 into production. Globally, the insurance industry and local traffic legislators don’t know what to make of AI and fully autonomous driving, because all laws subscribe to some principle of a human driver being in control, and ultimately, responsible.
South African conditions are a headache
South Africa is especially symptomatic of all the issues enveloping and strangling the autonomous driving project. Our roads vary from amazing to abysmal. Average speeds are high and lane discipline low. Unlike many other parts of the world with quality highways enabling sustained cruising speeds, South Africa’s multi-lane roads are often randomly crossed by pedestrians or wildlife. None of those issues exist in California or northern Europe, where the world’s cleverest AI engineers are attempting to perfect a sovereign driving algorithm.
Could a car that will recognise the reduced braking surface friction of black ice in Russia, anticipate wildlife crossing at night in Namibia and safely navigate treacherously loose gravel corners on a Karoo mountain pass ever come to fruition? The onboard hardware unquestionably exists to execute all three, but there are millions of kilometres of real-world testing that will have to be done before algorithms governing the behaviour of autonomous driving sensors work in all conditions with absolute confidence.
The achievable end might be fully autonomous driving validation packages for each country, tailored to specific risk profiles, but that could be a cost too far for both the motoring industry and customers. A double-cab bakkie that's capable of merging with the morning N1/2 traffic, then trailing the car ahead of it all the way until it reaches the off-ramp to your office? That’s a laudable and achievable aim. But self-driving cars capable of driving you everywhere? Not soon. At all.