Field AI is transforming how robots interact with the real world. We are building risk-aware, reliable, and field-ready AI systems that address the most complex challenges in robotics, unlocking the full potential of embodied intelligence. We go beyond typical data-driven approaches and pure transformer-based architectures, charting a new course: our solutions are already deployed globally, delivering real-world results, and our models improve rapidly through real field applications.
About the Job
You’ll build and own the simulation stack that powers development and testing for legged robots, humanoids, and car‑like platforms. The work spans software‑in‑the‑loop (SIL) and hardware‑in‑the‑loop (HIL) setups, GPU‑accelerated physics and learning loops, synthetic data generation, and rigorous Monte Carlo evaluation—with clear, visual reporting. You’ll also scale sim farms and data pipelines in AWS and work hands‑on in Gazebo/Ignition and NVIDIA Isaac Sim.
What You’ll Get To Do
Own SIL/HIL simulation infrastructure
• Design, implement, and maintain SIL/HIL rigs, including real‑time loops, I/O, and fault‑injection.
• Integrate simulators into CI/CD for repeatable, automated regression testing.
• Develop sensor/actuator interfaces and bring‑up procedures for lab and field use.
Model robot and vehicle dynamics
• Build and validate dynamics models for legged systems, humanoids, and car‑like platforms.
• Implement contact/friction models, parameter identification, and sensor/terrain effects.
• Create configurable scenarios, environments, and disturbances for coverage testing.
Accelerate workloads on GPU
• Parallelize simulation, perception, and policy evaluation using CUDA and PyTorch.
• Profile and optimize kernels, memory movement, and mixed CPU/GPU pipelines.
• Scale Monte Carlo campaigns across local GPUs and AWS (e.g., EC2/EKS/Batch) for fast iteration.
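As an illustrative sketch of the batched Monte Carlo evaluation described above (not Field AI's actual stack — the dynamics, parameter ranges, and success criterion here are invented for demonstration), the core pattern is simulating all randomized trials as one array batch. NumPy is used for portability; the same vectorized structure maps directly onto PyTorch tensors on GPU.

```python
import numpy as np

def monte_carlo_success_rate(n_trials: int, seed: int = 0) -> float:
    """Vectorized Monte Carlo campaign: evaluate all trials as one batch.

    Toy scenario (hypothetical): a braking point mass must stop within a
    goal radius despite randomized ground friction and initial speed.
    """
    rng = np.random.default_rng(seed)
    # Domain-randomized parameters, one entry per trial.
    friction = rng.uniform(0.3, 1.0, size=n_trials)   # friction coefficient
    v0 = rng.uniform(0.5, 3.0, size=n_trials)         # initial speed [m/s]
    g = 9.81
    # Closed-form stopping distance under Coulomb friction: d = v0^2 / (2*mu*g)
    stop_dist = v0**2 / (2.0 * friction * g)
    # Success criterion: the platform stops within 0.5 m.
    return float(np.mean(stop_dist < 0.5))

rate = monte_carlo_success_rate(100_000)
```

Because every trial is a row in the same arrays, scaling out (more GPUs, more EC2/EKS workers) is a matter of sharding the trial count and aggregating the per-shard statistics.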
Generate data and close the ML loop
• Produce synthetic datasets with domain randomization and high‑fidelity annotations.
• Connect simulation output to training pipelines; track dataset versions and metrics.
• Manage datasets in AWS S3 and wire up distributed processing for large data volumes.
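One lightweight way to track dataset versions, sketched below purely as an illustration (the file names are hypothetical and this is not a prescribed pipeline), is a content-addressed manifest: pin each file by its hash, then derive a single version digest for the whole set, which can be compared cheaply before syncing to S3.

```python
import hashlib
import json

def build_manifest(files: dict) -> dict:
    """Content-addressed dataset manifest.

    Maps each file name to the SHA-256 of its bytes, plus one digest
    over the whole set, so a dataset version can be pinned, diffed,
    and referenced from training runs.
    """
    entries = {
        name: hashlib.sha256(data).hexdigest()
        for name, data in sorted(files.items())
    }
    combined = hashlib.sha256(json.dumps(entries, sort_keys=True).encode())
    return {"files": entries, "dataset_version": combined.hexdigest()}

# Any change to any file's bytes yields a new dataset_version.
m1 = build_manifest({"episode_000.npz": b"fake-bytes-A"})
m2 = build_manifest({"episode_000.npz": b"fake-bytes-B"})
```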
Plan experiments and report results
• Design statistically sound Monte Carlo studies and acceptance tests.
• Visualize performance with plotting libraries (e.g., Bokeh/Matplotlib); publish dashboards.
• Summarize findings in clear engineering reports and reviews.
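To illustrate what "statistically sound" means for an acceptance test on a Monte Carlo success rate (the thresholds below are made up for the example): pass/fail should account for sampling error, e.g. by requiring the lower bound of a Wilson score confidence interval to clear the requirement, rather than comparing the raw proportion.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial success rate."""
    if trials == 0:
        raise ValueError("no trials")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2)
    )
    return center - half, center + half

def accept(successes: int, trials: int, required_rate: float) -> bool:
    """Pass only if the lower confidence bound clears the requirement."""
    lo, _ = wilson_interval(successes, trials)
    return lo >= required_rate

lo, hi = wilson_interval(940, 1000)  # observed 94% over 1000 trials
```

With 940/1000 successes the lower bound sits near 0.92, so this campaign would pass a 90% requirement but not a 95% one — exactly the distinction a raw-proportion check misses.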
Integrate with robotics software
• Build ROS/ROS 2 nodes, messages, and tooling; simulate sensors and networks.
• Develop and test in Gazebo/Ignition and NVIDIA Isaac Sim (plugins, sensor models, physics configs).
• Create reproducible dev environments (Docker, CMake) and enforce code health (linting, tests).
The Extras That Set You Apart
• Deep expertise with simulator internals and advanced features (Isaac Sim/Omniverse USD, Gazebo/Ignition plugins), plus MuJoCo or Chrono.
• Prior work on legged/humanoid control or car‑like dynamics (trajectory planning, MPC, tire/ground models).
• Sensor simulation depth (cameras, LiDAR, IMU) with realistic noise and distortion models.
• Distributed compute for large‑scale simulation (multi‑GPU, Ray/Kubernetes) and AWS infra‑as‑code (Terraform/CloudFormation).
• Safety‑critical mindset and familiarity with robotics/automotive standards (e.g., ISO 10218, ISO 13482, ISO 26262).
• Publications, patents, or significant open‑source contributions in simulation, control, or robotics tooling.
• Experience bringing up real robots and closing the sim‑to‑real gap through calibration and validation.