I am a Research Scientist and Tech Lead at Meta Reality Labs Research in Redmond. At Reality Labs, I lead an embodied AI team whose vision is to develop contextual Augmented Reality (AR) systems that can extend their users' capabilities.
We adopt an embodied AI framing for contextual AR assistance. In particular, we model the AR system as an intelligent agent that perceives the user and their environment through various sensors and outputs actions in service of the user's goals, aligned with the user's preferences.
My current research focuses on leveraging representation learning, planning, and reinforcement learning techniques to develop AR assistance models. Our embodied AI framing lets us break the problem of computing AR/VR assistance into two main parts: (a) inferring the user's goals and context from a sequence of multi-modal sensor observations, and (b) learning goal-conditioned system-action policies.
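To make the two-part decomposition concrete, here is a minimal, purely illustrative Python sketch (not our actual models): a stand-in inference function maps a history of multi-modal observations to a goal label, and a stand-in goal-conditioned policy maps (goal, observation) to a system action. All names, thresholds, and the goal/action vocabulary are hypothetical.

```python
# Illustrative sketch of the decomposition: (a) goal/context inference,
# (b) goal-conditioned system-action policy. Trivial rules stand in for
# learned models; all labels and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    video_feature: float  # stand-in for an egocentric video embedding
    imu_feature: float    # stand-in for an IMU motion embedding

def infer_goal(history: List[Observation]) -> str:
    """(a) Infer the user's current goal from multi-modal observations."""
    avg_motion = sum(o.imu_feature for o in history) / len(history)
    return "cleaning" if avg_motion > 0.5 else "resting"

def assistance_policy(goal: str, obs: Observation) -> str:
    """(b) Goal-conditioned policy: map (goal, observation) to a system action."""
    actions = {"cleaning": "highlight_next_object", "resting": "do_nothing"}
    return actions[goal]

history = [Observation(0.2, 0.9), Observation(0.3, 0.8)]
goal = infer_goal(history)              # -> "cleaning"
action = assistance_policy(goal, history[-1])
```

In the real setting, both functions would be learned: the inference model from egocentric video and IMU streams, and the policy conditioned on the inferred goal.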
Goal and Context Inference for AR Assistance
We leverage self-supervised learning and multi-modal sensors (egocentric video + IMU) to understand a user's goals and context in AR, such as their current task and actions.
ICCV EPIC Workshop 2021
Towards Goal-Conditioned Assistance in VR
We compute assistance policies for a house-cleaning task in VR and evaluate them at scale by deploying them in a web-based version of the AI Habitat simulator.
Episodic Memory Question Answering for AR Assistance
Can we develop egocentric AI models that can respond to a user's queries about their spatiotemporal history?
Coming soon at CVPR 2022! In collaboration with Georgia Tech.
Human-AI Systems for Robot Design
My PhD research aimed to make robotics more accessible to casual users by reducing the domain knowledge required to design and build robots. Toward this end, I developed several interactive human-in-the-loop AI systems that enable the design of the desired structure and behavior of diverse robots.
Interactive AI System for Articulated Robot Design
This tool allows novices to create custom articulated robots such as manipulators and walking robots. It supports both manual and automatic design, and enables design testing using physics-based simulation.
Interactive AI System for Non-articulated Robot Design
This tool enables novices to create smart IoT devices with embedded sensors using digital fabrication. It automatically finds assembly-aware packing of components within the device, and exports necessary geometries for 3D printing.
Semantic Design of Expressive Robot Behaviors
Can we design complex robot behaviors such as robot walking based on the emotion that the behavior evokes?
Egocentric Scene Context for Human-centric Environment Understanding from Video Tushar Nagarajan, Santhosh Kumar Ramakrishnan, Ruta Desai, James Hillis, and Kristen Grauman In review at European Conference on Computer Vision (ECCV) (2022).
arXiv coming soon!
Episodic Memory Question Answering Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, USA (2022).
arXiv coming soon!
How You Move Your Head Tells What You Do: Self-supervised Video Representation Learning with Egocentric Cameras and IMU Sensors Satoshi Tsutsui, Ruta Desai, and Karl Ridgeway International Conference on Computer Vision (ICCV), EPIC Workshop (2021).
Towards Inferring Cognitive State Changes from Pupil Size Variations in Real World Conditions Naga Venkata Kartheek Medathati, Ruta Desai, and James Hillis ACM Symposium on Eye Tracking Research and Applications (ETRA), Stuttgart, Germany (2020). PDF | bib
Human-AI Collaborative and Mixed-Initiative Systems, Robotics
Cross-Domain Imitation Learning via Semantic Skills Karl Pertsch, Ruta Desai, Franziska Meier, Vikash Kumar, Dhruv Batra, and Akshara Rai Submitted to International Conference on Machine Learning (ICML) (2022).
arXiv coming soon!
Geppetto: Enabling Semantic Design of Expressive Robot Behaviors Ruta Desai, Fraser Anderson, Justin Matejka, Stelian Coros, James McCann, George Fitzmaurice, and Tovi Grossman ACM Conference on Human Factors in Computing Systems (CHI), Glasgow, UK (2019). Best paper award (top 1%) [Details] PDF | bib | Video | Fastforward | Supplementary (PDF)
Assembly-aware Design of Printable Electromechanical Devices Ruta Desai, James McCann, and Stelian Coros ACM User Interface Software and Technology Symposium (UIST), Berlin, Germany (2018). PDF | bib | Video | Fastforward
Skaterbots: Optimization-based Design and Motion Synthesis for Robotic Creatures with Legs and Wheels Moritz Geilinger, Roi Poranne, Ruta Desai, Bernhard Thomaszewski, and Stelian Coros ACM Transactions on Graphics (Proc. ACM SIGGRAPH), Vancouver, Canada (2018).
Interactive Co-Design of Form and Function for Legged Robots using the Adjoint Method Ruta Desai, Beichen Li, Ye Yuan, and Stelian Coros International Conference on Climbing and Walking Robots (CLAWAR), Panama City, Panama (2018).
Computational Abstractions for Interactive Design of Robotic Devices Ruta Desai, Ye Yuan, and Stelian Coros IEEE International Conference on Robotics and Automation (ICRA), Singapore (2017). PDF | bib | Slides | Video
Robot models for fabrication: car, walking robot
3D Printing Pneumatic Device Controls with Variable Activation Force Capabilities Marynel Vazquez, Eric Brockmeyer, Ruta Desai, Chris Harrison and Scott E. Hudson ACM Conference on Human Factors in Computing Systems (CHI), Seoul, Korea (2015).
Integration of an Adaptive Swing Control into a Neuromuscular Human Walking Model Seungmoon Song, Ruta Desai, and Hartmut Geyer IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan (2013).