Autonomous Vehicle Safety Issues


Summary

Autonomous vehicle safety issues refer to the challenges and concerns surrounding the ability of self-driving cars to operate safely on public roads. These issues include ensuring reliable technology, minimizing human error in hybrid systems involving remote operators, and establishing effective regulatory frameworks to protect public safety.

  • Adopt advanced safety assessments: Leverage proven methods like surrogate safety measures from the insurance industry to assess potential risks and compare autonomous vehicle performance to human drivers in similar scenarios (a sketch of one such measure follows this summary).
  • Improve remote operator standards: Implement stricter requirements for remote operators, including enhanced training and better operational processes to reduce errors and ensure safer vehicle operation.
  • Demand regulatory oversight: Advocate for clear, enforceable safety protocols and comprehensive testing standards from manufacturers and governing bodies to safeguard public roadways from inadequately tested autonomous technologies.
Summarized by AI based on LinkedIn member posts
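
The first recommendation above mentions surrogate safety measures. Below is a minimal sketch of one widely used surrogate measure, time-to-collision (TTC); the function names, the 1.5 s conflict threshold, and the example numbers are illustrative assumptions, not anything specified in the posts:

```python
# Illustrative sketch of one common surrogate safety measure:
# time-to-collision (TTC) between a following and a lead vehicle.
# Names and the 1.5 s threshold are assumptions for illustration;
# real assessments combine many measures (TTC, PET, hard braking, ...).

def time_to_collision(gap_m: float, follower_speed_mps: float,
                      lead_speed_mps: float) -> float | None:
    """Return TTC in seconds, or None if the vehicles are not closing."""
    closing_speed = follower_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # follower is not gaining on the lead vehicle
    return gap_m / closing_speed

def is_conflict(gap_m: float, follower_speed_mps: float,
                lead_speed_mps: float, threshold_s: float = 1.5) -> bool:
    """Flag an encounter as a traffic conflict when TTC falls below threshold."""
    ttc = time_to_collision(gap_m, follower_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < threshold_s

# Example: 20 m gap, AV at 15 m/s closing on a vehicle at 5 m/s -> TTC = 2.0 s
print(time_to_collision(20.0, 15.0, 5.0))  # 2.0
print(is_conflict(20.0, 15.0, 5.0))        # False at the 1.5 s threshold
```

Counting such conflicts over large exposure-matched datasets is how insurers compare AV and human-driver risk without waiting for crashes to accumulate.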
  • Philip Koopman

    Embedded Systems & Embodied AI Safety. Helping teams take the next step for software quality and safety. (Emeritus)

    32,618 followers

    A mishap started with a Waymo robotaxi entering an intersection against a red light, apparently in response to an incorrect remote operator command. That situation was associated with a moped losing control and its rider falling in the green-light direction. (I think it is reasonable to assume the moped operator fell on the wet road while reacting to the Waymo robotaxi entering the intersection against the red.)

    While some might simply pin this on remote operator "human error," that misses the bigger picture. We've been told by Waymo (and Cruise) that the vehicle is responsible for safety and the remote operator just provides advice. But here is a mishap caused by a remote operator failure. This is not testing -- this is deployment. So what matters is not human vs. robot error. What matters is the net safety of the combined system. If you toss humans into remote operator roles with limited situational awareness and quick response requirements, you can expect mistakes. Like this one. This could have been a lot worse. I hope Waymo does more than lecture their operators about paying attention, because that never really works to compensate for inadequate operational processes, operator interfaces, communication issues, or whatever else might have contributed here.

    For regulators: remote operators are going to be a thing for a LONG time. California should require that they be in the same state, have driver licenses, a clean record, etc. for operations even if they are not in the vehicle itself, because they clearly make driving decisions that affect safety-critical vehicle operations. There is no California DMV report for this crash that I could find. Waymo is not required to report deployment crashes to the CA DMV, and that urgently needs to change.

    The information available right now from SGO 30270-6981: "On January [XXX], 2023 at 10:52 AM PT a rider of a moped lost control of the moped they were operating and fell and slid in front of a Waymo Autonomous Vehicle ("Waymo AV") operating in San Francisco, California on [XXX] at [XXX]; neither the moped nor its driver made contact with the Waymo AV. The Waymo AV was stopped on northbound [XXX] at the intersection with [XXX] when it started to proceed forward while facing a red traffic light. As the Waymo AV entered the intersection, it detected a moped traveling on eastbound [XXX]. The Waymo AV braked and came to a stop as the moped approached the intersection. The rider of the moped braked, then fell on the wet roadway and slid to a stop in front of the stationary Waymo AV. There was no contact between the moped or its rider and the Waymo AV. The Waymo AV's Level 4 ADS was engaged in autonomous mode. Waymo is reporting this crash under Request No. 1 of Standing General Order 2021-01 because a passenger of the Waymo AV reported that the moped may have been damaged. Waymo may supplement or correct its reporting with additional information as it may become available."

    See comments about remote operator.
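
Koopman's point that the "net safety of the combined system" is what matters can be made concrete with a back-of-the-envelope calculation. Every rate in the sketch below is an invented placeholder, not measured data; it only shows the structure of the tradeoff:

```python
# Back-of-the-envelope model of combined AV + remote-operator risk.
# Every rate here is an invented placeholder, NOT measured data;
# the point is only that a fallible remote operator can both
# prevent AV errors and introduce new ones.

av_error_rate = 1e-5          # assumed AV mistakes per intersection traversal
operator_catch_prob = 0.90    # assumed fraction of AV mistakes the operator fixes
operator_induced_rate = 2e-6  # assumed bad commands the operator injects

# Residual AV errors that slip past the operator:
residual_av = av_error_rate * (1 - operator_catch_prob)

# Net error rate of the combined system:
net_rate = residual_av + operator_induced_rate

print(f"AV alone:        {av_error_rate:.1e} errors/traversal")
print(f"Combined system: {net_rate:.1e} errors/traversal")
# With these placeholders the operator helps overall, but if
# operator_induced_rate grows (poor situational awareness, time
# pressure), the combined system can be WORSE than the AV alone.
```

This is why evaluating "human error" in isolation is misleading: the design question is whether the operator's interfaces, training, and processes keep the injected-error term small relative to the errors they catch.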

  • Gervais T. Mbunkeu, M.Eng, M.Sc, CCSK, CIPM

    Global Data Risk and Privacy Leader @ PwC|🚀Experienced Engineer | Cybersecurity, Privacy & CAV Policy | AI Enthusiast | Building Trust in a Connected World 🌐

    2,138 followers

    🚨🚨🚗🚗🚗🚕🚕🚨🚨🚔 “Show me a technology, and I’ll look for a problem for it to solve.” Mercedes is now being touted as the first automaker to sell a Level 3 autonomous driving system in the U.S. without a requirement for drivers to monitor the road. The timing of this news is deeply concerning, especially given the unresolved safety concerns surrounding autonomous driving technology.

    Last week, we learned that the driver of a Ford electric SUV involved in a fatal February crash in Texas was using the company’s partially automated driving system before the wreck. Just two weeks ago, Tesla settled with the family of Walter Huang, an Apple engineer who tragically lost his life when his Tesla, operating on Autopilot, drove him into a highway crash attenuator at 90 mph. These and other incidents involving companies like Waymo and Cruise highlight a glaring issue: the failure of the National Highway Traffic Safety Administration (NHTSA) to adequately protect public safety, combined with the automotive industry’s hubristic indifference to contrary evidence.

    While Mercedes’s new Drive Pilot system is laden with caveats (activation only under specific conditions such as heavy traffic, daytime use, and restricted to certain freeways in California and Nevada at speeds below 40 mph), it represents a significant shift in how we view driver responsibility and oversight in vehicles equipped with autonomous technology. Worse still, there is ample evidence that drivers will abuse these systems, using them in places and conditions outside their designed capabilities. Such abuse of “self-driving” systems is very much predictable: humans are very good at anthropomorphizing computers, ascribing to them capabilities they do not possess.

    Furthermore, this development underscores the ongoing “alignment problem” in artificial intelligence, where the capabilities of AI technologies may not fully align with societal safety standards and ethical considerations. Our enthusiasm for innovation must not outpace our commitment to safety. We must demand rigorous oversight and transparent safety protocols from both manufacturers and regulatory bodies like the NHTSA before these technologies are allowed on the road. The stakes are simply too high to prioritize technological advancement over human life.

    Michael DeKort Missy Cummings Philip Koopman Junko Yoshida David Beck David Zipper https://lnkd.in/eeJQbTES #AutomotiveSafety #AutonomousDriving #PublicSafety #NHTSA #DrivePilot #Mercedes #artificialintelligence #selfdriving #selfdrivingcars #autonomousvehicles #machinelearning
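
The Drive Pilot caveats listed in this post (approved freeways, daytime, heavy traffic, under 40 mph) amount to an operational design domain (ODD) gate. The sketch below shows that gating pattern under stated assumptions; the class, field names, and check structure are illustrative, not Mercedes's implementation:

```python
from dataclasses import dataclass

# Illustrative ODD (operational design domain) gate for an L3 feature.
# The conditions mirror the Drive Pilot caveats described in the post
# (approved freeway, daytime, heavy traffic, < 40 mph); the code
# structure itself is an assumption, not the production system.

@dataclass
class DrivingContext:
    on_approved_freeway: bool
    is_daytime: bool
    lead_vehicle_present: bool   # proxy for heavy-traffic following
    speed_mph: float

def odd_permits_engagement(ctx: DrivingContext,
                           speed_limit_mph: float = 40.0) -> bool:
    """Return True only when every ODD condition holds simultaneously."""
    return (ctx.on_approved_freeway
            and ctx.is_daytime
            and ctx.lead_vehicle_present
            and ctx.speed_mph < speed_limit_mph)

ctx = DrivingContext(on_approved_freeway=True, is_daytime=True,
                     lead_vehicle_present=True, speed_mph=35.0)
print(odd_permits_engagement(ctx))  # True
```

The post's worry maps directly onto this structure: a system that merely allows engagement inside the ODD must also detect exit from the ODD at runtime and hand control back safely, and it cannot prevent drivers from trusting the feature beyond those bounds.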

  • Mica Endsley

    President at SA Technologies, Inc.

    4,551 followers

    There has been a lot written about the Tesla recall. It’s basically a software update, pushed out to vehicle owners, that increases the frequency of alerts when the system thinks your hands are not on the wheel while driving with Autopilot. Unfortunately, it is unlikely to do much to solve the problems with this software. Tesla already gives frequent alerts when it thinks your hands are off the wheel (often erroneously), starting at about 30 seconds and escalating to lockout fairly quickly.

    But this doesn’t really solve the driver attention problem. People simply are not good passive monitors of automation in general and never will be, even with these nuisance alarms. This has been documented in hundreds of studies over the past 40 years: situation awareness decreases, and the likelihood of detecting and responding correctly to problems goes way down. All this “fix” will do is make Autopilot even more annoying, so people will either not use it or use methods to fool it (several such techniques are already in use). It is a poor bandaid for a fundamental problem with low-reliability automation that requires human vigilance.

    NHTSA needs to do much more to ensure that vehicles with autonomous software are designed to promote good performance outcomes and tested to make sure they are safe before being used on our nation’s roadways. That will require Congress to pass legislation addressing this legal gap. https://lnkd.in/g9cXTCdi
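
The escalation Endsley describes (alerts starting around 30 seconds hands-off, escalating to lockout) is a timer-driven escalation policy. A minimal sketch of that pattern follows; the state names and all thresholds beyond the 30-second figure in the post are illustrative assumptions, not Tesla's actual parameters:

```python
from enum import Enum, auto

# Sketch of a timer-driven driver-attention escalation, the pattern the
# post describes (nag -> louder nag -> lockout). The 45 s / 55 s cutoffs
# and the state names are illustrative assumptions only.

class AlertState(Enum):
    MONITORING = auto()
    VISUAL_ALERT = auto()
    AUDIBLE_ALERT = auto()
    LOCKOUT = auto()

def escalate(hands_off_seconds: float) -> AlertState:
    """Map continuous hands-off time to an escalation level."""
    if hands_off_seconds < 30:
        return AlertState.MONITORING
    if hands_off_seconds < 45:        # initial nag window (~30 s per the post)
        return AlertState.VISUAL_ALERT
    if hands_off_seconds < 55:        # escalate if the nag is ignored
        return AlertState.AUDIBLE_ALERT
    return AlertState.LOCKOUT         # disable the feature for the drive

for t in (10, 35, 50, 60):
    print(t, escalate(t).name)
```

Endsley's argument is that no tuning of these thresholds fixes the underlying vigilance problem: humans remain poor passive monitors of automation regardless of how the nags are scheduled.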
