What is AWS DeepRacer? A Complete Guide
Last Updated: May 14, 2026
Machine learning can feel abstract until you watch a miniature race car navigate a track entirely on its own. That’s exactly what AWS DeepRacer makes possible. Launched by Amazon Web Services at re:Invent 2018, AWS DeepRacer is a hands-on platform that uses a physical 1/18th-scale autonomous car and a cloud-based 3D racing simulator to teach reinforcement learning (RL) to developers of all skill levels. Whether you’re an absolute beginner or an experienced data scientist, AWS DeepRacer gives you a fun, practical way to experiment with one of AI’s most powerful learning techniques and see the results in real time.
Table of contents
- TL;DR
- What is AWS DeepRacer?
- How Does AWS DeepRacer Work?
- The Training Loop
- Key Components of AWS DeepRacer
- The Reward Function
- The Action Space
- Hyperparameters
- The 3D Racing Simulator
- Physical Car vs. Simulator: Which Should You Use?
- What is the AWS DeepRacer League?
- How to Get Started with AWS DeepRacer
- Real-World Applications and Career Impact
- Conclusion
- FAQs
- What is AWS DeepRacer used for?
- Do I need prior machine learning experience to use AWS DeepRacer?
- Is AWS DeepRacer free?
- What programming language is used in AWS DeepRacer?
TL;DR
- AWS DeepRacer is a 1/18th-scale autonomous race car and cloud simulator built by Amazon Web Services for learning reinforcement learning (RL).
- It uses a reward function and a deep neural network to train the car to navigate race tracks without human input.
- Training happens in the AWS console using Amazon SageMaker, and models can be deployed to the physical car or tested in the 3D simulator.
- The AWS DeepRacer League, the world’s first global autonomous racing league, saw over 560,000 builders from 150+ countries participate over 6 years.
- It is beginner-friendly: no prior ML experience is needed to build and race your first model.
What is AWS DeepRacer?
AWS DeepRacer is a reinforcement learning-enabled autonomous 1/18th-scale vehicle with supporting cloud services in the AWS Machine Learning ecosystem. It was introduced at the AWS re:Invent conference in 2018 by then-CEO Andy Jassy as a way to make machine learning more accessible and enjoyable through hands-on racing.
The idea is straightforward: instead of learning RL through dry theory, you train a model, put it on a virtual or physical track, and watch how your decisions play out at racing speed. Every mistake the car makes is a data point, and every correction teaches the model something new.
Data Point: AWS DeepRacer attracted a global community of over 560,000 builders from more than 150 countries who participated in the AWS DeepRacer League over six years to learn ML fundamentals hands-on.
AWS DeepRacer sits at the intersection of education and competition. It’s used by individual developers, corporate teams, and academic institutions to build practical ML skills without requiring deep expertise to get started.
How Does AWS DeepRacer Work?
At its core, AWS DeepRacer uses reinforcement learning, a type of machine learning where an agent learns by interacting with an environment and receiving rewards or penalties based on its actions. Think of it like training a dog: good behavior gets a treat, bad behavior doesn’t.
In the context of AWS DeepRacer, the ‘agent’ is the race car, the ‘environment’ is the track, and the ‘reward’ is a score you define through a reward function written in Python. The car’s onboard camera captures images of the track, which the deep neural network processes to decide actions, like how much to steer or how fast to go.
The Training Loop
Training happens in three stages that run in a continuous cycle:
- Simulation: The car drives in a 3D virtual environment powered by AWS RoboMaker. It makes decisions based on the current neural network policy.
- Evaluation: Amazon SageMaker measures the car’s performance against the reward function. Good actions increase the reward; bad ones reduce it.
- Optimization: The model updates its policy based on what it learned, then the cycle repeats, usually for tens of thousands of iterations per training session.
AWS DeepRacer uses the Proximal Policy Optimization (PPO) algorithm by default, a widely used deep RL algorithm originally developed at OpenAI. You don’t need to code it from scratch; the AWS console handles it for you.
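The three-stage cycle above can be sketched as a toy loop. The real training runs a neural-network policy in SageMaker and RoboMaker; the `ToyTrack` environment and the action-score table here are stand-ins, purely to show the shape of simulate, evaluate, optimize:

```python
# Toy sketch of the simulate -> evaluate -> optimize cycle. ToyTrack and the
# action list are invented stand-ins for the real simulator and policy.

class ToyTrack:
    """Stand-in environment: each episode lasts five steps."""
    def reset(self):
        self.step_count = 0

    def step(self, action):
        self.step_count += 1
        done = self.step_count >= 5
        # Mimics the params dict DeepRacer passes to the reward function
        params = {"distance_from_center": abs(action), "track_width": 1.0}
        return params, done

def reward_function(params):
    # Favour staying near the centerline
    return 1.0 if params["distance_from_center"] <= 0.1 else 0.1

def train(env, actions, iterations=3):
    scores = {a: 0.0 for a in actions}
    for _ in range(iterations):
        for action in actions:                # Simulation: roll out each action
            env.reset()
            done = False
            while not done:
                params, done = env.step(action)
                scores[action] += reward_function(params)  # Evaluation
    # Optimization: keep the best-scoring behaviour
    return max(scores, key=scores.get)

best_action = train(ToyTrack(), actions=[-0.3, 0.0, 0.3])
print(best_action)  # 0.0 -> driving straight down the centerline scores best
```

In the real system the "optimization" step is a PPO gradient update on a neural network rather than a lookup table, but the rhythm of the loop is the same.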
Key Components of AWS DeepRacer
Understanding what makes up the AWS DeepRacer ecosystem helps you get the most out of training your model. There are four main components you’ll interact with.
1. The Reward Function
This is the most important thing you control. The reward function is a Python script that tells the model what ‘good driving’ looks like. You can reward the car for staying on the track centerline, maintaining speed, taking smooth turns, or completing laps quickly.
A well-designed reward function is the difference between a car that crawls around the track and one that races competitively. Experienced developers often spend hours fine-tuning this single function.
Best Practice
Start with a simple reward function: reward the car just for staying on track. Once it can complete laps reliably, add speed incentives. Complex reward functions introduced too early often confuse the model and slow training.
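A minimal centerline-following reward in this spirit might look like the sketch below. It uses the shape the console expects, a `reward_function(params)` that returns a float, and reads `track_width` and `distance_from_center`, two of the input parameters the console documents:

```python
# Minimal centerline reward: full marks near the centerline, a token
# reward near the edges so the model still gets a learning signal.
def reward_function(params):
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the centerline, from tight to loose
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0    # hugging the centerline
    elif distance_from_center <= marker_2:
        reward = 0.5    # drifting, but recoverable
    elif distance_from_center <= marker_3:
        reward = 0.1    # near the edge
    else:
        reward = 1e-3   # likely off track

    return float(reward)
```

Once a model trained on this completes laps reliably, a common next step is multiplying in a speed term so faster laps score higher.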
2. The Action Space
The action space defines all the possible moves the car can make: combinations of steering angle and speed. A broader action space gives the model more flexibility but also makes training harder. Beginners should start with a smaller, well-defined set of actions.
3. Hyperparameters
Hyperparameters control how the model learns: settings such as the learning rate, batch size, and number of training episodes. AWS DeepRacer provides default settings that work reasonably well, but advanced users tune these to speed up training or improve model stability.
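As a rough picture of what these settings look like, here is an illustrative set. The values are examples in plausible ranges, not the console's exact defaults:

```python
# Illustrative hyperparameter settings (example values, not the
# console's exact defaults). These shape how PPO updates the policy.
hyperparameters = {
    "learning_rate": 3e-4,    # step size for each gradient update
    "batch_size": 64,         # experiences consumed per gradient step
    "num_epochs": 10,         # passes over each batch of experience
    "discount_factor": 0.99,  # how heavily future reward counts
    "entropy": 0.01,          # exploration bonus to avoid early convergence
}

for name, value in hyperparameters.items():
    print(f"{name}: {value}")
```

A higher learning rate or discount factor can speed up early progress but often makes training less stable, which is the trade-off advanced users are tuning.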
4. The 3D Racing Simulator
AWS RoboMaker powers the simulation environment, which replicates physical tracks with realistic physics. You can train and evaluate your model entirely in the simulator before deploying it to a physical car. This is particularly useful for rapid iteration: make a change, test it in minutes, and try again.
Physical Car vs. Simulator: Which Should You Use?
AWS DeepRacer gives you two ways to race: using the physical 1/18th-scale car on a real track, or entirely in the cloud-based simulator. Here’s how they compare:
| Feature | AWS DeepRacer (Physical) | AWS DeepRacer (Simulator) |
| --- | --- | --- |
| Hardware Required | Yes, 1/18th-scale car | No, fully cloud-based |
| Cost | Car purchase + AWS compute | AWS compute only (free tier available) |
| Learning Curve | Moderate, needs physical setup | Beginner-friendly, instant start |
| Best For | Live events, corporate training | Solo learning, rapid iteration |
| Competition | Physical racing leagues | Virtual leaderboard races |
For most beginners, starting in the simulator is the smartest move. It’s free to try, requires no hardware, and lets you iterate quickly. The physical car becomes valuable when you want to participate in live events, corporate training days, or simply enjoy watching your model race in the real world.
Warning
Training in the simulator does not guarantee real-world performance. Physical tracks have lighting variations, surface inconsistencies, and environmental noise that the simulator doesn’t replicate perfectly. Always test your model on the physical track before a live competition.
What is the AWS DeepRacer League?
The AWS DeepRacer League was the world’s first global autonomous racing league powered by machine learning, and it ran for six years from 2018 to 2024. It was open to anyone: students, engineers, researchers, and curious learners from every industry.
Participants trained their models and submitted them to virtual leaderboards. The fastest lap times qualified for regional championships, with finalists competing in person at AWS re:Invent in Las Vegas for the Championship Cup and prize money.
The 2024 Championship at re:Invent was the final League event. However, AWS DeepRacer is far from over. Starting in 2025, AWS released the DeepRacer source code as an open AWS Solution, allowing organizations worldwide to launch their own private leagues and training programs at a fraction of the original cost.
Corporate participants like Vodafone and Eviden used AWS DeepRacer to train thousands of employees in ML fundamentals through internal league competitions, turning an educational tool into a company-wide upskilling program.
How to Get Started with AWS DeepRacer
Getting your first model on the track takes less time than you might think. Here’s the step-by-step path:
- Create an AWS Account — Sign up at aws.amazon.com. New accounts get free tier access to try training for limited hours without cost.
- Open the AWS DeepRacer Console — Navigate to the DeepRacer section in the AWS console. The interface guides you through creating your first model step by step.
- Choose a Race Track — Select from multiple available tracks. Beginners should start with a simple oval, like the re:Invent 2018 track.
- Write Your Reward Function — Use the built-in Python editor or start with a sample function. The console shows reward function parameters to help you customize it.
- Train and Evaluate — Start training. Monitor the reward graph in real time. Once the car can complete laps consistently, evaluate it on the track.
- Iterate and Improve — Clone your model, adjust the reward function or hyperparameters, and retrain. Each cycle improves lap times.
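For step 4, it helps to know roughly what the console hands your reward function on every step. The dict below shows a subset of commonly documented input parameters with example values (the full list is in the AWS documentation), paired with a deliberately simple reward that favors speed while staying on track:

```python
# A subset of the input parameters DeepRacer passes to the reward
# function each step, with example values for illustration.
example_params = {
    "all_wheels_on_track": True,   # car fully on the track surface
    "track_width": 0.76,           # metres
    "distance_from_center": 0.05,  # metres from the centerline
    "speed": 2.0,                  # metres per second
    "steering_angle": -15.0,       # degrees
    "progress": 42.5,              # percentage of the lap completed
    "steps": 120,                  # steps taken this episode
}

def reward_function(params):
    # Token reward once any wheel leaves the track
    if not params["all_wheels_on_track"]:
        return 1e-3
    # Otherwise reward forward speed
    return float(params["speed"])

print(reward_function(example_params))  # 2.0
```

During step 5, the reward graph in the console plots how much total reward the model collects per episode; a rising curve means your function is teaching the behaviour you intended.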
Real-World Applications and Career Impact
AWS DeepRacer is not just a toy for developers. The skills you build (reward function design, hyperparameter tuning, and model evaluation) map directly to real-world reinforcement learning applications in robotics, autonomous vehicles, logistics optimization, and game AI.
Many participants have used their AWS DeepRacer experience to land new roles in ML engineering and data science. The platform also serves as an accessible entry point for professionals transitioning into AI from other fields.
Data Point: AWS DeepRacer continues to be used by tens of thousands of builders within organizations annually, with new 2025 workshops bridging DeepRacer fundamentals with generative AI techniques using Amazon SageMaker and Amazon Bedrock.
Beyond individual career growth, companies like Vodafone have used AWS DeepRacer to onboard thousands of engineers to ML concepts as part of broader digital transformation goals. The platform scales from a single curious developer to a company-wide training initiative.
Conclusion
AWS DeepRacer has done something remarkable: it made reinforcement learning tangible. Instead of abstract equations and theory, you get a race car, a track, and immediate feedback. That’s a powerful combination for anyone trying to break into machine learning or level up their existing skills.
Whether you’re a developer exploring RL for the first time, a student building your portfolio, or a team lead looking to upskill your engineers, AWS DeepRacer gives you a structured, competitive, and genuinely fun path into one of the most in-demand areas of AI. The community is global, the platform is accessible, and the learning is real. Now is the right time to get on the track.
FAQs
What is AWS DeepRacer used for?
AWS DeepRacer is used to learn reinforcement learning (RL) through hands-on experimentation with an autonomous 1/18th-scale race car and a cloud-based 3D simulator. Developers use it to understand core ML concepts, reward functions, hyperparameter tuning, and model evaluation in a practical, competitive environment. Companies also use it for employee ML upskilling programs.
Do I need prior machine learning experience to use AWS DeepRacer?
No. AWS DeepRacer is designed to be beginner-friendly. The console guides you through creating a model, and sample reward functions let you get started without writing code from scratch. You can learn RL fundamentals as you go. Advanced users can dive deeper into custom reward functions, neural network architectures, and SageMaker notebooks for more control.
Is AWS DeepRacer free?
You can start training for free using the AWS free tier, which provides limited compute hours for new accounts. Beyond the free tier, training and simulation time incur charges based on Amazon SageMaker and AWS RoboMaker usage. The physical car is a separate one-time hardware purchase.
What programming language is used in AWS DeepRacer?
The reward function in AWS DeepRacer is written in Python. No other programming knowledge is strictly required for the basic console experience. However, advanced users who want to build custom training pipelines using SageMaker notebooks will benefit from familiarity with Python and basic ML libraries.