Episodic Memory Question Answering





1 Georgia Tech     2 Meta Reality Labs Research     3 Meta AI Research

Abstract


Egocentric augmented reality devices such as wearable glasses passively capture visual data as a human wearer tours a home environment. We envision a scenario wherein the human communicates with an AI agent powering such a device by asking questions (e.g., "where did you last see my keys?"). In order to succeed at this task, the egocentric AI assistant must (1) construct semantically rich and efficient scene memories that encode spatio-temporal information about objects seen during the tour, and (2) possess the ability to understand the question and ground its answer into the semantic memory representation. Towards that end, we introduce (1) a new task, Episodic Memory Question Answering (EMQA), wherein an egocentric AI assistant is provided with a video sequence (the tour) and a question as input and is asked to localize its answer to the question within the tour, (2) a dataset of grounded questions designed to probe the agent's spatio-temporal understanding of the tour, and (3) a model for the task that encodes the scene as an allocentric, top-down semantic feature map and grounds the question into the map to localize the answer. We show that our choice of episodic scene memory outperforms naive, off-the-shelf solutions for the task as well as a host of very competitive baselines, and is robust to noise in depth, pose, and camera jitter.
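
For readers who want a concrete picture of the pipeline described above, the following minimal sketch (in PyTorch, which the project builds on) illustrates the two pieces: accumulating per-frame features into an allocentric, top-down scene memory, and grounding a question embedding into that map to produce a localization heatmap. This is not the released model; all module names, feature sizes, and the simple dot-product grounding step are illustrative assumptions.

# Illustrative sketch of the EMQA pipeline described in the abstract.
# All names, dimensions, and the grounding mechanism are assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownSceneMemory(nn.Module):
    """Accumulates per-frame egocentric features into a top-down allocentric map."""
    def __init__(self, feat_dim=64, map_size=128):
        super().__init__()
        self.feat_dim = feat_dim
        self.map_size = map_size

    def forward(self, point_feats, world_xy):
        # point_feats: (N, feat_dim) semantic features for N observed points in the tour
        # world_xy:    (N, 2) allocentric ground-plane coordinates, normalized to [0, 1)
        mem = torch.zeros(self.feat_dim, self.map_size, self.map_size)
        count = torch.zeros(1, self.map_size, self.map_size)
        cells = (world_xy * self.map_size).long().clamp(0, self.map_size - 1)
        for f, (x, y) in zip(point_feats, cells):
            mem[:, y, x] += f          # scatter each point's features into its map cell
            count[:, y, x] += 1
        return mem / count.clamp(min=1)  # average features that fall into the same cell

class QuestionGrounder(nn.Module):
    """Scores every map cell against the question to localize the answer."""
    def __init__(self, feat_dim=64, q_dim=64):
        super().__init__()
        self.q_proj = nn.Linear(q_dim, feat_dim)

    def forward(self, scene_map, q_emb):
        # scene_map: (feat_dim, H, W), q_emb: (q_dim,)
        q = self.q_proj(q_emb)                               # project question into map feature space
        scores = torch.einsum("chw,c->hw", scene_map, q)     # per-cell similarity to the question
        return F.softmax(scores.flatten(), dim=0).view_as(scores)  # normalized localization heatmap

if __name__ == "__main__":
    torch.manual_seed(0)
    memory = TopDownSceneMemory()
    grounder = QuestionGrounder()
    feats = torch.randn(500, 64)   # stand-in for per-point semantic features from the tour
    coords = torch.rand(500, 2)    # stand-in for coordinates obtained via depth + camera pose
    q_emb = torch.randn(64)        # stand-in for an encoded question
    heatmap = grounder(memory(feats, coords), q_emb)
    print(heatmap.shape)           # torch.Size([128, 128])

In the actual task, the coordinates would come from projecting pixels into the world frame using depth and camera pose, and the heatmap would be read out as the answer localization over the top-down map.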




Video





Paper

Episodic Memory Question Answering

Samyak Datta, Sameer Dharur, Vince Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, Devi Parikh

CVPR 2022 (Oral)


@inproceedings{datta2022episodic,
  title={Episodic Memory Question Answering},
  author={Datta, Samyak and Dharur, Sameer and Cartillier, Vince and Desai, Ruta and Khanna, Mukul and Batra, Dhruv and Parikh, Devi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2022}
}




Code + Data


Coming soon!



Acknowledgements

We are grateful to the developers of PyTorch for building an excellent framework. The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor. The webpage template was borrowed from https://embodiedqa.org.