Case Study
Monday, September 29
03:30 PM - 04:00 PM
Live in Berlin
This session explores the use of vision-language models (VLMs) to extract data relevant to operational design domains (ODDs) for urban automated driving. It covers insights into VLM performance against diverse ODD requirements, highlights the value of large-scale camera data from vehicle fleets, and introduces new datasets scheduled for release in 2025. Attendees will gain a comparative view of different VLM approaches and an understanding of the benefits of fine-tuning. The presentation underscores how VLMs can support testing, approval, and operation workflows for automated vehicles by aligning data processing with complex, real-world ODD scenarios.
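To make the idea concrete, the sketch below shows one way a VLM could be asked to tag a fleet camera frame against a small ODD taxonomy. It is an illustration only, not the speakers' method: the attribute list, the `query_vlm` callable, and the JSON reply format are all assumptions standing in for whatever model, taxonomy, and tooling the session actually covers.

```python
# Illustrative sketch: label one camera frame with ODD attributes via a VLM.
# `query_vlm(image_path, prompt) -> str` is a hypothetical placeholder for the
# actual VLM endpoint or library; the attribute taxonomy is likewise assumed.
import json
from dataclasses import dataclass

ODD_ATTRIBUTES = {
    "road_type": ["urban street", "arterial road", "intersection", "roundabout"],
    "weather": ["clear", "rain", "snow", "fog"],
    "lighting": ["daylight", "dusk/dawn", "night"],
    "vulnerable_road_users": ["none", "pedestrians", "cyclists", "both"],
}

@dataclass
class OddTags:
    road_type: str
    weather: str
    lighting: str
    vulnerable_road_users: str

def build_prompt() -> str:
    """Ask the VLM for exactly one value per ODD attribute, returned as JSON."""
    lines = ["Label the image with one value per attribute and reply as JSON:"]
    for attr, values in ODD_ATTRIBUTES.items():
        lines.append(f'- "{attr}": one of {values}')
    return "\n".join(lines)

def extract_odd_tags(image_path: str, query_vlm) -> OddTags:
    """Send the frame and prompt to the (assumed) VLM and parse its JSON reply."""
    reply = query_vlm(image_path, build_prompt())
    data = json.loads(reply)
    return OddTags(**{k: data[k] for k in ODD_ATTRIBUTES})
```

Applied over an entire fleet recording, per-frame tags like these could then be aggregated to check which ODD conditions a dataset actually covers.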
In this presentation, you will learn: