Field AI is transforming how robots interact with the real world. We are building risk-aware, reliable, and field-ready AI systems that address the most complex challenges in robotics, unlocking the full potential of embodied intelligence. We go beyond typical data-driven approaches and pure transformer-based architectures, charting a new course with solutions already deployed globally, delivering real-world results and rapidly improving our models through field applications.
At Field AI, we are moving beyond single-agent autonomy, scaling AI coordination across fleets of robots in unstructured, high-risk environments. Our work on Field Foundation Models™ (FFMs) is enabling multi-robot decision-making, strategic coordination, and decentralized intelligence at unprecedented levels. From large-scale robotic deployments in complex environments to real-time tactical decision-making, we are pioneering multi-agent AI that is explainable, risk-aware, and field-ready.
We are seeking a Multi-Robot Intelligence Research Engineer to design and implement scalable algorithms for coordination, decentralized control, and game-theoretic decision-making in multi-robot systems. This role sits at the intersection of robotics, AI, and mathematical game theory, pushing the boundaries of large-scale, real-world autonomy.
What You Will Get To Do
• Develop fundamental algorithms for multi-agent coordination (including differentiable game theory, mean-field control, and decentralized optimization) to enable fleets of autonomous robots to operate in real-world, high-stakes environments.
• Design computationally tractable formulations of multi-agent Nash equilibria, Stackelberg games, and cooperative decision-making strategies, ensuring robust and scalable decision-making across heterogeneous robotic teams (an illustrative best-response sketch follows this list).
• Build predictive models for multi-agent interaction dynamics, leveraging graph-based learning and control-theoretic formulations to drive efficient coordination in dynamic, adversarial, and uncertain settings.
• Develop distributed inference and control policies using neural PDEs, mean-field game-theoretic methods, and scalable stochastic optimization for real-time robotic interaction at scale.
• Bridge theory with deployment: integrate multi-agent planning, auction-based task allocation, and decentralized multi-agent reinforcement learning (MARL) into hardware-in-the-loop robotic systems operating at scale (a sequential-auction allocation sketch follows this list).
• Push the limits of explainability in multi-agent AI, ensuring tractability, convergence guarantees, and real-world feasibility while maintaining risk-aware and uncertainty-resolving decision-making.
• Collaborate across teams to transition multi-agent models from high-fidelity simulation to real-world deployment, working alongside robotics engineers, AI/ML researchers, and field roboticists to ensure seamless operation in the field.
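To give a flavor of the game-theoretic formulations referenced above, here is a minimal sketch of iterated best response converging to a pure Nash equilibrium in a toy two-robot coordination game. The payoff matrices, action sets, and iteration cap are assumptions chosen for illustration, not Field AI's models or methods.

```python
# Minimal sketch: iterated best response in a two-robot matrix game.
# All payoffs and actions below are hypothetical, for illustration only.
import numpy as np

# 2-action coordination game: rows = robot 1's action, columns = robot 2's action.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])   # robot 1's payoffs
B = np.array([[3.0, 0.0],
              [0.0, 2.0]])   # robot 2's payoffs

a1, a2 = 0, 1  # start from a miscoordinated action pair
for _ in range(20):
    new_a1 = int(np.argmax(A[:, a2]))      # robot 1 best-responds to robot 2's action
    new_a2 = int(np.argmax(B[new_a1, :]))  # robot 2 best-responds to robot 1's new action
    if (new_a1, new_a2) == (a1, a2):
        break  # mutual best responses: a pure Nash equilibrium
    a1, a2 = new_a1, new_a2

print(f"Pure Nash equilibrium at actions ({a1}, {a2})")
```

Tractable formulations of the kind described in the role would replace this brute-force enumeration with structured or learned best-response operators; the sketch only shows the fixed-point idea behind a Nash equilibrium.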
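Likewise, for the auction-based task allocation mentioned in the deployment bullet, below is a minimal sketch of a sequential single-item auction in which each robot bids its marginal travel cost and the lowest bid wins each round. The robot positions, task locations, and Euclidean cost model are hypothetical placeholders.

```python
# Minimal sketch: sequential single-item auction for task allocation.
# Robots, tasks, and the travel-cost model are hypothetical placeholders.
import math

robots = {"r1": (0.0, 0.0), "r2": (5.0, 0.0), "r3": (0.0, 5.0)}
tasks = {"inspect_A": (1.0, 1.0), "map_B": (6.0, 1.0), "sample_C": (1.0, 6.0)}

assignment = {}           # task -> winning robot
position = dict(robots)   # each robot's current position, updated as it wins tasks

for _ in range(len(tasks)):
    unassigned = [t for t in tasks if t not in assignment]
    # Every robot bids its marginal travel cost for every unassigned task;
    # the cheapest (cost, robot, task) triple wins this round.
    cost, winner, task = min(
        (math.dist(position[r], tasks[t]), r, t)
        for r in robots
        for t in unassigned
    )
    assignment[task] = winner
    position[winner] = tasks[task]  # the winner continues from the task location

print(assignment)
```

A fielded system would typically run such auctions in a decentralized fashion over the fleet's communication layer and account for task value, risk, and robot heterogeneity, none of which are modeled in this toy example.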