
General FAQ

  • What makes Deepdrive different from simulator / self-driving car company X?
    Current simulators seem to tie themselves to specific hardware or do not have a path towards deploying AI into physical vehicles. We want to be hardware agnostic and deploy into real vehicles.
     
    To do this, we will train AIs on a wide range of sensors, cars, and environments (which is easy in simulation) and facilitate transfer into real cars via techniques like domain adaptation. This will allow a larger number of people to iterate on production AI, using just the simulator, while providing constant on-road testing of the best in-sim AIs in physical vehicles.
     
     
  • How close are autonomous vehicles to human-level driving?
    Autonomous vehicles still underperform humans by a wide margin, at one human intervention per 5k miles - a figure that improved by just 10% from 2016 to 2017. While ready-at-the-wheel drivers and remote operators make self-driving a viable technology today, we believe that the right type of simulator can drastically accelerate progress by giving more people the opportunity to contribute, by allowing them to do so entirely in software, and by creating more fluid competition between approaches.
     
  • What happened to the GTA-based version of Deepdrive?
    Unfortunately, we were unable to license GTA V in a way that allowed us to distribute our previous versions, including Deepdrive 1.0 and Deepdrive Universe. We've now rebuilt everything on Unreal Engine; with full access to the engine source code, a great graphical editor, and the ability to modify any part of the simulation, we're elated about what we can bring to you in terms of capability, hackability, and transparency.
     
  • How useful is the current simulator in making progress on self-driving?
    We think that something akin to an MNIST for self-driving is extremely important for beginners to get up and running easily, and also for allowing researchers to quickly evaluate new ideas. Specifically, we see great promise in the ability to try methods like reinforcement learning in our initial Deepdrive 2.0 release, something that would be too dangerous to experiment with in the real world (a rough sketch of such a loop follows below). We also know that matching the richness and variability of real-world driving within the simulation will be one of the keys to our success, and we will work aggressively towards creating the most true-to-life driving simulation possible.
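    As a concrete illustration of that loop, here is a minimal Gym-style sketch. The environment ID 'Deepdrive-v0' and the exact observation and action contents are assumptions for illustration, not necessarily the released Deepdrive API.

        # Minimal sketch of experimenting with RL against the sim through a
        # Gym-style interface. The env ID below is a hypothetical placeholder.
        import gym

        env = gym.make('Deepdrive-v0')  # hypothetical env ID

        obs = env.reset()
        episode_return = 0.0
        for _ in range(1000):
            action = env.action_space.sample()  # replace with your policy
            obs, reward, done, info = env.step(action)
            episode_return += reward
            if done:
                print('episode return:', episode_return)
                episode_return = 0.0
                obs = env.reset()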
     
  • Do you plan to deploy AI into real cars?
    Definitely! We plan to test in physical cars early and often by integrating with systems like Comma.ai's or Polysync's DriveKit. If you're interested in partnering with us on a hardware integration, please reach out at [email protected].
     

Technical FAQ

  • Will you support Macs?
    Mac support requires one of two things: a shared-memory implementation or sending capture data over sockets. The latter would be slower but would enable better distributed processing on all platforms. Apple is incubating external GPU support, which will bring macOS on par with Linux and Windows for single-machine deep learning setups. The simulation itself already runs on Mac, so it's just a matter of getting the data out of the environment (via shared memory or sockets) and processing it with GPU-accelerated ML (a rough sketch of the shared-memory idea follows below). Slower-than-realtime support will be possible when we add synchronous stepping (see the next FAQ).
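    To make the shared-memory option concrete, here is a minimal sketch using Python's standard multiprocessing.shared_memory module (Python 3.8+). The buffer name, resolution, and pixel format are illustrative assumptions, not Deepdrive's actual capture layout.

        # Sketch: exposing one uncompressed camera frame through shared memory
        # and viewing it as a NumPy array without copying. Buffer name,
        # resolution, and dtype are illustrative assumptions only.
        import numpy as np
        from multiprocessing import shared_memory

        HEIGHT, WIDTH, CHANNELS = 512, 512, 3
        FRAME_SHAPE = (HEIGHT, WIDTH, CHANNELS)
        FRAME_DTYPE = np.uint16  # 48-bit color = 16 bits per channel

        # Producer side (conceptually, the simulator) creates the block:
        nbytes = int(np.prod(FRAME_SHAPE)) * 2
        shm = shared_memory.SharedMemory(create=True, size=nbytes,
                                         name='deepdrive_cam0')  # hypothetical name
        frame = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm.buf)
        frame[:] = 0  # the simulator would write pixel data here

        # Consumer side (the agent process) attaches to the same block by name:
        shm_view = shared_memory.SharedMemory(name='deepdrive_cam0')
        frame_view = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm_view.buf)
        print(frame_view.mean())  # reads the simulator's pixels without a copy

        shm_view.close()
        shm.close()
        shm.unlink()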
     
  • Is the API synchronous or asynchronous?
    The agent and environment currently run asynchronously from each other, except for resets and registering cameras. So if the environment's frame rate drops below the agent's step rate, the agent will receive blank frames from the environment, and if the opposite happens, frames will be skipped. Existing RL baselines like A2C use synchronized stepping across multiple environments via vectorized environments, which we will support in v3 (a minimal sketch of the idea follows below). Thanks also to the Carla.org team for their technical guidance on ways to do synchronized stepping within Unreal.
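    For readers unfamiliar with vectorized environments, here is a generic sketch of synchronized stepping across several Gym-style environments, in the spirit of the A2C baselines mentioned above; it illustrates the concept and is not the planned Deepdrive v3 API.

        # Sketch of synchronized stepping across multiple environments, in the
        # style of the vectorized environments used by RL baselines like A2C.
        import numpy as np


        class SyncVecEnv:
            """Steps every wrapped environment once per call, in lockstep."""

            def __init__(self, env_fns):
                self.envs = [fn() for fn in env_fns]

            def reset(self):
                return np.stack([env.reset() for env in self.envs])

            def step(self, actions):
                obs, rewards, dones = [], [], []
                for env, action in zip(self.envs, actions):
                    ob, reward, done, _info = env.step(action)
                    if done:
                        ob = env.reset()  # restart finished episodes immediately
                    obs.append(ob)
                    rewards.append(reward)
                    dones.append(done)
                return np.stack(obs), np.array(rewards), np.array(dones)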
     
  • Why didn't you use sockets to transmit sensors?
    Sending sensor data via sockets was a bottleneck in our testing. Getting eight HD cameras plus depth at 60 FPS, uncompressed, back to the CPU gets too close to the limits of what even highly tuned sockets can do (a back-of-envelope estimate follows below). We are still working on other bottlenecks (like sequential rendering of cameras), but it's important, and possible, to simulate a modern self-driving stack in real time on one machine, and we will stay committed to doing that so developers with one machine can test their agents.
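    A back-of-envelope estimate, assuming "HD" means 1920x1080 with 48-bit color and 16-bit depth (illustrative assumptions, not measured Deepdrive figures):

        # Rough uncompressed bandwidth for the socket scenario described above.
        # Assumes "HD" = 1920x1080, 48-bit color (6 bytes/px) and 16-bit depth
        # (2 bytes/px); illustrative assumptions only.
        cameras = 8
        width, height = 1920, 1080
        fps = 60
        bytes_per_pixel = 6 + 2  # color + depth

        bytes_per_second = cameras * width * height * bytes_per_pixel * fps
        print(f"{bytes_per_second / 1e9:.1f} GB/s")  # ~8.0 GB/s, uncompressed

    At roughly 8 GB/s of uncompressed pixels, even a well-tuned local socket path becomes the limiting factor, which motivates keeping capture data off sockets.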
     
  • What is Deepdrive's sensor latency and throughput?
    On a GTX 980, we currently see 20 fps with eight 512x512, 60° FOV cameras capturing uncompressed 48-bit color and 16-bit depth; 50 fps with one 512x512 camera; and 10 fps with six 1920x1200 cameras. These figures include copying and marshalling the data into NumPy arrays (a sample data-rate calculation follows below).
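    Translating the first configuration into a raw data rate (using the resolutions and bit depths quoted above):

        # Raw data rate for the eight-camera configuration quoted above:
        # 512x512 px, 48-bit color (6 bytes) + 16-bit depth (2 bytes), 20 fps.
        cameras, width, height, fps = 8, 512, 512, 20
        bytes_per_pixel = 6 + 2

        bytes_per_second = cameras * width * height * bytes_per_pixel * fps
        print(f"{bytes_per_second / 1e6:.0f} MB/s")  # ~336 MB/s into NumPy arrays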
     
  • How was the baseline agent trained?
    The baseline agent was trained on 8.2 hours of driving data using a variant of DAgger in which the human labels are replaced by an oracle path follower (defined in Car.cpp); a simplified sketch of this loop follows below. See tensorflow_agent/agent.py for exactly how we collect data. Starting from BVLC AlexNet pretrained weights, we fine-tuned for nearly 8 hours on a GTX 980. You can view our complete TensorBoard events here. N.B. we needed to mix Linux and Windows collection in order to have the agent perform well on both platforms (we used 80%/20% Windows/Linux). Similarly, we mixed data rendered in the Unreal Editor with data rendered in the packaged version of the sim to get equal performance in both places. We think this is due to rendering differences between OpenGL and DirectX and to rendering optimizations made during packaging, which points to the importance of using capture augmentation in addition to domain adaptation techniques to increase transferability.
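    For readers unfamiliar with DAgger, here is a simplified, generic sketch of the oracle-labeled collect-and-retrain loop; the function names and the Gym-style env interface are placeholders, and the real logic lives in tensorflow_agent/agent.py (collection) and Car.cpp (oracle path follower).

        # Simplified sketch of DAgger-style data collection where an oracle
        # path follower supplies labels instead of a human. All names here are
        # placeholders, not the actual Deepdrive implementation.
        import random


        def dagger_collect_and_train(env, policy, oracle, train_fn,
                                     iterations=10, steps_per_iter=1000,
                                     beta=0.5):
            """Roll out, label every visited state with the oracle, retrain, repeat."""
            dataset = []
            for _ in range(iterations):
                obs = env.reset()
                for _ in range(steps_per_iter):
                    oracle_action = oracle(obs)  # oracle path follower provides the label
                    # A mix of the oracle and the learned policy drives the car,
                    # so the dataset covers states the agent's own mistakes reach.
                    action = oracle_action if random.random() < beta else policy(obs)
                    dataset.append((obs, oracle_action))
                    obs, _reward, done, _info = env.step(action)
                    if done:
                        break
                policy = train_fn(policy, dataset)  # supervised fit on the aggregated data
            return policy, dataset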
     
 
 
 
Made with ♥ on our pale blue dot