

The Embodied AI Workshop is Tomorrow, Sunday, June 20th!

centaur 0
embodied AI workshop

What happens when deep learning hits the real world? Find out at the Embodied AI Workshop this Sunday, June 20th! We’ll have 8 speakers, 3 live Q&A sessions with questions on Slack, and 10 embodied AI challenges. Our speakers will include:

  • Motivation for Embodied AI Research
    • Hyowon Gweon, Stanford
  • Embodied Navigation
    • Peter Anderson, Google
    • Aleksandra Faust, Google
  • Robotics
    • Anca Dragan, UC Berkeley
    • Chelsea Finn, Stanford / Google
    • Akshara Rai, Facebook AI Research
  • Sim-2-Real Transfer
    • Sanja Fidler, University of Toronto / NVIDIA
    • Konstantinos Bousmalis, Google

You can find us on the #cvpr2021 Slack if you're signed up, through our webpage embodied-ai.org, or at the livestream on YouTube.

Come check it out!

-the Centaur

The Embodied AI Workshop at CVPR 2021


Hail, fellow adventurers: to prove I do something more than just draw and write, I'd like to send out a reminder of the Second Embodied AI Workshop at the CVPR 2021 computer vision conference. In the last ten years, artificial intelligence has made great advances in recognizing objects, understanding the basics of speech and language, and recommending things to people. But interacting with the real world presents harder problems: noisy sensors, unreliable actuators, incomplete models of our robots, building good simulators, learning over sequences of decisions, transferring what we've learned in simulation to real robots, and learning on the robots themselves.

interactive vs social navigation

The Embodied AI Workshop brings together many researchers and organizations interested in these problems, and also hosts nine challenges which test point, object, interactive and social navigation, as well as object manipulation, vision, language, auditory perception, mapping, and more. These challenges enable researchers to test their approaches on standardized benchmarks, so the community can more easily compare what we're doing. I'm most involved as an advisor to the Stanford / Google iGibson Interactive / Social Navigation Challenge, which forces robots to maneuver around people and clutter to solve navigation problems. You can read more about the iGibson Challenge at their website or on the Google AI Blog.

the iGibson social navigation environment

Most importantly, the Embodied AI Workshop has a call for papers, with a deadline of TODAY.

Call for Papers

We invite high-quality 2-page extended abstracts in relevant areas, such as:

  • Simulation Environments
  • Visual Navigation
  • Rearrangement
  • Embodied Question Answering
  • Simulation-to-Real Transfer
  • Embodied Vision & Language

Accepted papers will be presented as posters. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences.

The submission deadline is May 14th (Anywhere on Earth). Papers should be no longer than 2 pages (excluding references) and styled in the CVPR format. Paper submissions are now open.

I assume anyone submitting to this already has their paper well underway, but this is your reminder to git'r done.

-the Centaur

More on why your computer needs a hug


Thanks to the permission of IGI, the publisher of the Handbook of Synthetic Emotions and Sociable Robotics, the full text of "Emotional Memory and Adaptive Personalities" is now available online. I've blogged about this paper previously here and elsewhere, but now that I've got permission, here's the full abstract:

Emotional Memory and Adaptive Personalities
by Anthony Francis, Manish Mehta and Ashwin Ram

Believable agents designed for long-term interaction with human users need to adapt to them in a way which appears emotionally plausible while maintaining a consistent personality. For short-term interactions in restricted environments, scripting and state machine techniques can create agents with emotion and personality, but these methods are labor intensive, hard to extend, and brittle in new environments. Fortunately, research in memory, emotion and personality in humans and animals points to a solution to this problem. Emotions focus an animal’s attention on things it needs to care about, and strong emotions trigger enhanced formation of memory, enabling the animal to adapt its emotional response to the objects and situations in its environment. In humans this process becomes reflective: emotional stress or frustration can trigger re-evaluating past behavior with respect to personal standards, which in turn can lead to setting new strategies or goals. To aid the authoring of adaptive agents, we present an artificial intelligence model inspired by these psychological results in which an emotion model triggers case-based emotional preference learning and behavioral adaptation guided by personality models. Our tests of this model on robot pets and embodied characters show that emotional adaptation can extend the range and increase the behavioral sophistication of an agent without the need for authoring additional hand-crafted behaviors.
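The core loop the abstract describes — strong emotions gating memory formation, and those emotional memories later biasing behavior on top of a stable personality — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' actual implementation: all names (`Episode`, `EmotionalAgent`, `AROUSAL_THRESHOLD`, the preference arithmetic) are assumptions I'm introducing for exposition.

```python
from dataclasses import dataclass, field

# Assumed cutoff: only sufficiently strong emotions form lasting memories.
AROUSAL_THRESHOLD = 0.7

@dataclass
class Episode:
    """A remembered emotional event (a 'case' in case-based terms)."""
    situation: str
    behavior: str
    valence: float   # pleasant (+) vs. unpleasant (-)
    arousal: float   # emotional intensity

@dataclass
class EmotionalAgent:
    # Stable, hand-authored personality priors over behaviors.
    personality_bias: dict = field(default_factory=dict)
    # Case base of emotionally salient episodes.
    memory: list = field(default_factory=list)

    def experience(self, situation, behavior, valence, arousal):
        """Strong emotions trigger enhanced memory formation."""
        if arousal >= AROUSAL_THRESHOLD:
            self.memory.append(Episode(situation, behavior, valence, arousal))

    def preference(self, situation, behavior):
        """Personality prior plus emotional preference learned from cases."""
        prior = self.personality_bias.get(behavior, 0.0)
        cases = [e for e in self.memory
                 if e.situation == situation and e.behavior == behavior]
        learned = sum(e.valence * e.arousal for e in cases)
        return prior + learned

    def choose(self, situation, behaviors):
        """Pick the behavior the agent now prefers in this situation."""
        return max(behaviors, key=lambda b: self.preference(situation, b))
```

For example, a robot pet with a mild prior to approach things would, after one sufficiently frightening encounter with a vacuum cleaner, prefer to flee it — adapting its emotional response without any new hand-authored behaviors, which is the effect the abstract claims.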

So that this article is self-contained, here's the tired old description of the paper I've used a few times now:

"Emotional Memory and Adaptive Personalities" reports work on emotional agents supervised by my old professor Ashwin Ram at the Cognitive Computing Lab. He's been working on emotional robotics for over a decade, and it was in his lab that I developed my conviction that emotions serve a functional role in agents, and that to develop an emotional agent you should not start with trying to fake the desired behavior, but instead by analyzing psychological models of emotion and then using those findings to design models for agent control that will produce that behavior "naturally". This paper explains that approach and provides two examples of it in practice: the first was work done by myself on agents that learn from emotional events, and the second was work by Manish Mehta on making the personalities of more agents stay stable even after learning.

-the Centaur

Pictured is R1D1, one of the robot testbeds described in the article.