The auto insurance problem caused by autonomous driving
Time of Release:
2022-12-13
Recently, news about driverless and autonomous driving has been everywhere. Announcements of self-driving cars from major automakers and technology giants have been overwhelming, and internet giants such as Uber, LeEco, Baidu and Google have also announced self-driving car programs. The steady stream of technology developments, acquisitions, production plans, and upgrades to self-driving cars makes us feel that the era of self-driving cars really is coming, and that before long we will hardly be surprised to see a car pass by on the street with no one behind the wheel.
But in reality, beyond the unresolved technical issues that have caused crashes, unsettled questions about insurance claims and ethics could be fatal to the process of bringing autonomous vehicles to market. Ernie Bray, chief executive of ACD, an American provider of insurtech solutions, makes this point in a thought-provoking way.
Ever since news broke of a fatal accident involving a "self-driving" Tesla Model S, and with more than half of the automakers on the market scrambling to develop the technology, media outlets and experts of all kinds have been debating the pros and cons of self-driving cars.
Most of the questions so far have focused on three key areas:
Liability issues: If the robot driver in control causes an accident, who should be held responsible and pay compensation?
Technical issues: Is self-driving technology ready to be brought to market?
Ethical issues: How will the AI controlling the vehicle make critical decisions, and who is responsible for programming it?
Although the debate has only just begun, there is already a tentative consensus. But as new data are generated, new questions are raised, and new solutions emerge, we need to revisit the old "conventional wisdom."
The question of liability
It is widely believed that manufacturers of autonomous vehicles should be held liable for accidents caused by defects or malfunctions in the "robot driver". In fact, Volvo pre-emptively announced last year that it would pay for all personal injury and property damage caused by its fully autonomous IntelliSafe driverless system. The implication behind this pledge is that the system would require very little or no human intervention, so human occupants would not be held liable for accidents.
Determining Liability: Easier said than done?
The conclusion above may sound simple, but will reality match the theory?
What if the autopilot switches to manual mode for a split second before a crash? Who will determine what actually happened in the accident, and exactly when it happened? Who will own the data in the vehicle's "black box"? Does the owner have a right to that data, and can the owner legally withhold it and obstruct the liability investigation?
What level of expertise will liability examiners need? Will claims investigators need to be computer experts to analyze the data and reconstruct the accident?
Currently, insurance coverage for cars equipped with autonomous driving and driver-assistance technology is the same as for conventional vehicles. Premiums are likely to rise in the short term after this accident. Why? Because the spread of "semi-autonomous" systems may increase the likelihood of human error, leading to more accidents.