🎇 On National Day, I went for a leisurely drive in San Francisco and ended up "stress-testing" a Waymo self-driving car on the road. 🚗 As an autonomous driving practitioner, who wouldn't be curious about the real-time robustness of the cutting-edge Waymo One driving system? While cruising downtown, I noticed a Waymo car tailgating me. While this isn't unusual for SF residents, a wild, "evil" idea suddenly hit me: why not directly "adversarially attack" the state of the art in autonomous driving? I executed an unexpected maneuver, suddenly reversing, to see how it would react.

🌟 The response was stellar! The moment I reversed, Waymo One honked instantly, quicker than any human could, activated its hazard lights, and backed away to maintain a safe distance. This reckless move on my part served as an edge case probing the algorithm's robustness under extreme conditions and could even become a challenging training sample for Waymo's future autonomous systems.

💫 Thrilled to personally experience how current cutting-edge autonomous algorithms handle rare driving behaviors, and how stable and safe Level 4 autonomy is across diverse scenarios. It also prompted deep reflection as an AI researcher in this field: 🤔 in an industry with little room for error, how can we ultimately avoid or minimize the issues that AI fails to handle?

💡 I believe two research directions are particularly promising for achieving Level 5 autonomy in future mobility systems:

- 1️⃣ Development and deployment of vehicle-to-everything (V2X) cooperative systems (including V2V, V2I, V2P, etc.). Our initial studies (e.g., V2X-ViT, ECCV 2022, arxiv.org/abs/2203.10638) show that in scenarios with severe occlusion or sensor noise, such cooperative systems can significantly enhance the robustness of perception, and thereby improve traffic safety.
- 2️⃣ Adversarial scenario generation (including sim-to-real transfer and generative modeling). Research by my colleagues at UCLA (V2XP-ASG, ICRA 2023, https://lnkd.in/gu5nKVHD) shows that adversarial learning techniques can effectively synthesize challenging scenarios, greatly improving model robustness in complex "corner case" situations. This matters because collecting such collision scenarios in the real world is often infeasible.

👨‍🏫 As a new Assistant Professor of CS at Texas A&M University, I will lead a group focusing on these exciting research directions, a proactive approach to reducing accidents and improving safety for all. 🔥 I look forward to future collaborations with governments, academics, and companies to research and develop data and algorithms that enhance the safety of vulnerable road users, especially seniors and children. We envision a people-centered intelligent transportation system.

Interested in these topics? Let's connect and discuss further!

#AutonomousVehicles #AI #MachineLearning #SmartCities #Transportation #Humanity #Mobility
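The adversarial scenario generation idea can be illustrated in toy form: search over scenario parameters for the configuration that minimizes the ego vehicle's safety margin, and keep the worst case as a training sample. This is a minimal, hypothetical sketch (random search over a constant-velocity rollout), not the V2XP-ASG method itself; all function names, dynamics, and parameter ranges are invented for illustration.

```python
import random

def rollout(ego_speed, adv_speed, adv_offset, steps=50, dt=0.1):
    """Toy constant-velocity rollout: returns the minimum gap between
    an ego vehicle and an adversary starting 20 m ahead."""
    ego_x, adv_x = 0.0, 20.0
    min_gap = float("inf")
    for _ in range(steps):
        ego_x += ego_speed * dt
        adv_x += adv_speed * dt
        # A lateral offset adds to the effective gap between vehicles.
        gap = abs(adv_x - ego_x) + abs(adv_offset)
        min_gap = min(min_gap, gap)
    return min_gap

def adversarial_search(n_trials=200, seed=0):
    """Random search for the scenario parameters that minimize the
    safety margin; the hardest case found is the adversarial scenario."""
    rng = random.Random(seed)
    worst_margin, worst_params = float("inf"), None
    for _ in range(n_trials):
        adv_speed = rng.uniform(5.0, 15.0)   # adversary speed (m/s)
        adv_offset = rng.uniform(-2.0, 2.0)  # lateral offset (m)
        margin = rollout(ego_speed=10.0, adv_speed=adv_speed,
                         adv_offset=adv_offset)
        if margin < worst_margin:
            worst_margin, worst_params = margin, (adv_speed, adv_offset)
    return worst_margin, worst_params

margin, params = adversarial_search()
```

Real systems replace the random search with gradient-based or learned attackers and the toy rollout with a full simulator, but the optimization loop has the same shape: propose a scenario, score the planner's margin, keep the worst offenders.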
AI Challenges In Urban Autonomous Vehicle Deployment
Summary
Deploying autonomous vehicles in urban areas presents significant challenges for AI, including responding to unpredictable human behavior, handling complex environments, and addressing gaps in training data. These obstacles must be resolved to improve the safety and reliability of self-driving systems.
- Focus on collaboration: Developing vehicle-to-everything (V2X) systems that improve communication between vehicles, infrastructure, and pedestrians can help autonomous cars make safer decisions in crowded urban contexts.
- Account for edge cases: Use adversarial learning and simulation techniques to train AI systems on rare or unexpected driving scenarios that are difficult to replicate in real life.
- Adapt road infrastructure: Explore urban design changes, such as better traffic signage and designated pedestrian zones, that give autonomous vehicles a safer and more predictable environment to operate in.
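The V2X takeaway above often starts with late fusion: a vehicle merges its own detections with those shared by infrastructure or nearby vehicles, so objects occluded from one viewpoint are still covered by another. A minimal sketch, assuming detections are already transformed into a shared coordinate frame and represented as (x, y, score) tuples; the function name and merge radius are hypothetical, not from any cited system.

```python
def fuse_detections(ego_dets, remote_dets, merge_radius=1.5):
    """Late-fusion sketch: union of ego and remote detections, with
    detections closer than merge_radius treated as the same object."""
    fused = list(ego_dets)
    for rx, ry, rscore in remote_dets:
        duplicate = False
        for i, (fx, fy, fscore) in enumerate(fused):
            if (fx - rx) ** 2 + (fy - ry) ** 2 <= merge_radius ** 2:
                # Same object seen twice: keep the higher-confidence one.
                if rscore > fscore:
                    fused[i] = (rx, ry, rscore)
                duplicate = True
                break
        if not duplicate:
            # Object occluded from the ego but visible to the remote sensor.
            fused.append((rx, ry, rscore))
    return fused

ego = [(10.0, 2.0, 0.9)]                        # ego sees one pedestrian
infra = [(10.3, 2.2, 0.95), (25.0, -1.0, 0.8)]  # infra also sees an occluded car
fused = fuse_detections(ego, infra)             # two objects after fusion
```

Production systems fuse earlier (raw features, as in V2X-ViT) rather than final boxes, but late fusion is the simplest way to see why cooperation helps under occlusion.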
In a recent post on X, Malcolm Gladwell raised an incredibly important, yet often under-discussed, limitation of driverless cars operating in urban environments. Without changes in how humans and robots interact on the road, pedestrians, human drivers, and other road users may increasingly exploit the conservative, safety-first behavior of autonomous vehicles.

Picture a busy, unsignalized crosswalk on a crowded city street. Human drivers often inch forward, subtly asserting themselves to create space: imperfect, but effective. A driverless vehicle, in contrast, may be programmed to wait patiently, prioritizing safety over even the slightest risk. Sure, we could get used to longer travel times, or redesign the infrastructure (e.g., add more traffic signals or physical barriers), but those changes will take years, and they require public investment that has yet to materialize or even be adequately discussed.

As my 2018 TEDx talk highlights, "There's more to the safety of driverless cars than artificial intelligence." This isn't just a technological issue; it's a human one. See comments for links.
🚗 🚗 🚗 Data Blind Spots and Their Impact on Automotive Visual AI 🚗 🚗 🚗

Excited to share my latest article in WardsAuto on the critical challenge of achieving 99.999% accuracy in automotive visual AI (link below). Drawing on both my academic research at the University of Michigan College of Engineering and industry experience with Voxel51, I explore why data blind spots are the main obstacle holding back autonomous vehicle development.

Data blind spots, critical gaps in datasets that occur in practice but are out of domain for the model, prevent AI models from achieving the robustness necessary to deliver real systems to production. I illustrate this with a real case study involving phantom potholes and how unified data visibility helped solve it. The key: bringing teams, data, and models together to rapidly identify and fix issues.

Read the full piece to learn how we can cut the 80% AI project failure rate and make road transportation safer: https://lnkd.in/df8Gf-xd
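One common way to surface data blind spots like these is to flag inputs whose embeddings lie far from everything in the training set, then prioritize those for collection and labeling. A minimal k-nearest-neighbor sketch, assuming features have already been extracted as numeric vectors; the function names and threshold are illustrative, not any particular tool's API.

```python
import math

def knn_distance(sample, train_feats, k=3):
    """Mean distance from a sample to its k nearest training features."""
    dists = sorted(math.dist(sample, f) for f in train_feats)
    return sum(dists[:k]) / k

def flag_blind_spots(train_feats, new_feats, threshold=2.0, k=3):
    """Flag inputs whose embeddings sit far from anything seen in
    training -- candidates for 'data blind spots' to collect and label."""
    return [f for f in new_feats
            if knn_distance(f, train_feats, k) > threshold]

# Toy 2-D features: one in-domain sample, one far outlier.
train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new = [(0.5, 0.5), (8.0, 8.0)]
flagged = flag_blind_spots(train, new)  # only the outlier is flagged
```

In practice the features come from a trained model's embedding layer and the search uses an approximate nearest-neighbor index, but the principle is the same: distance from the training distribution is a cheap proxy for "the model has never seen anything like this."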