Research Interests
A summary of research accomplishments, current work, and future directions.
Research Vision
The goal of my research is to make the interaction between intelligent systems (e.g., robots, semi-autonomous vehicles, and digital assistants) and humans more intuitive. I believe that Contextual and Embodied Artificial Intelligence (AI) causes the boundaries between the physical and digital worlds to blur1, necessitating technological advances that prioritize human-centric control of intelligent systems. My research aims to enable a future where all humans can interact with intelligent systems instinctively, unobtrusively, and with minimal effort to master.
I believe that the current state of systems is reactive: users provide the system with an explicit input, and the system reacts to that input. Contextual AI enables intelligent reactions; the system combines explicit user input with implicit contextual understanding. This contextual understanding is necessary for Embodied AI, which allows intelligent systems to operate in the real world. Yet systems still require explicit user commands, making interactions unnatural, cumbersome, and unsatisfying. In an ideal world, the intelligent system would not rely solely on explicit instructions from the user but could infer the user's desires from their behavior. In other words, we need to transform systems from reactive to proactive. I believe that embedding human behavior models in optimal control strategies is crucial for this transition and for making intelligent systems universally beneficial.
To achieve this goal, I create novel computational approaches that enable more intuitive interaction, study how these technologies are perceived and used, and apply these insights to inform the design of new methods. I believe that exploiting the notion of humans as rational, goal-driven beings will allow us to create intelligent systems that enable implicit, and thus intuitive, interactions.
Footnotes
- 1. Historically, we viewed systems as tools and used them directly to manipulate variables (e.g., a hammer to drive a nail). More recently, this approach has evolved: we now engage with physical systems that communicate with intelligent agents to manipulate variables on our behalf (e.g., a user controls a Nest thermostat, which in turn adjusts the temperature). With the advent of Contextual AI (CAI) and Embodied AI (EAI), both the user and the agent directly interact with the variable. The variable becomes both the target of manipulation and the interface for interaction with the agent (e.g., the user interacts with code, which serves as the interface to Copilot). This convergence of variable and interface introduces multiple challenges, most notably a trade-off between user autonomy and system automation.