Large vision-language models have recently demonstrated impressive performance in planning and control tasks, driving interest in their application to real-world robotics. However, deploying these models for reasoning in embodied contexts is constrained by their limited ability to incorporate long-term experience collected across multiple days and represented by vast collections of images. Current VLMs typically struggle to process more than a few hundred images concurrently, highlighting the need for more efficient mechanisms to handle long-term memory in embodied settings. To effectively evaluate these models for long-horizon control, a benchmark must specifically target scenarios where memory is crucial for success. Existing long-video QA benchmarks overlook embodied challenges like object manipulation and navigation, which demand low-level skills and fine-grained reasoning over past interactions. Moreover, effective memory integration in embodied agents involves both recalling relevant historical information and executing actions based on that information, making it essential to study these aspects together rather than in isolation. In this work, we introduce FindingDory, a new benchmark for long-range embodied tasks in the Habitat simulator. The benchmark evaluates memory-based capabilities across 60 tasks requiring sustained engagement and contextual awareness in an environment. The tasks can also be procedurally extended to longer and more challenging versions, enabling scalable evaluation of memory and reasoning. We also present baselines that integrate state-of-the-art VLMs with low-level navigation policies, assessing their performance on these memory-intensive tasks and highlighting areas for improvement.
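To make the baseline setup concrete, below is a minimal sketch of such a hierarchical agent: a VLM reasons over the stored interaction video to select a goal frame, and a low-level image-goal policy executes the navigation. All names here (run_episode, VLM planner, nav policy, the env interface) are hypothetical stand-ins for illustration, not the benchmark's actual API.

def run_episode(env, vlm_planner, nav_policy, interaction_video, instruction):
    # Stage 1: the VLM reasons over the full interaction history (a long
    # sequence of frames) and selects the frame showing the target location.
    goal_frame_idx = vlm_planner.select_goal_frame(interaction_video, instruction)
    goal_image = interaction_video[goal_frame_idx]

    # Stage 2: an image-goal navigation policy drives the agent toward the
    # viewpoint captured in the selected frame.
    obs = env.reset()
    done = False
    while not done:
        action = nav_policy.act(obs, goal_image)
        obs, done = env.step(action)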
Proprietary VLMs such as Gemini-2.0-flash and GPT-4o perform poorly across most FindingDory task categories. Agents struggle most on multi-goal tasks, where multiple subgoals must be completed sequentially based on the interaction video. The Qwen-SFT baseline, which is fine-tuned on the FindingDory training set, performs best but still reaches only ≈50% success on the high-level tasks. This highlights a considerable gap in the spatio-temporal reasoning capabilities of frontier models.
We analyze how the number of input frames affects high-level performance by evaluating agents on subsampled interaction videos. This helps assess whether models benefit from longer histories or simply overfit to sparse visual cues. Interestingly, we observe that frozen VLMs, despite being able to accept long contexts, do not improve with more frames. In contrast, the fine-tuned model (Qwen-SFT) shows clear gains when trained with longer videos, leveraging the additional context to improve its reasoning.
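As a concrete illustration of this ablation, the helper below uniformly subsamples an interaction video down to a fixed frame budget before handing it to the VLM. This is a minimal sketch under our assumptions; the benchmark's exact subsampling scheme may differ.

import numpy as np

def subsample_frames(frames, num_frames):
    # Return `num_frames` evenly spaced frames from the full video.
    if len(frames) <= num_frames:
        return list(frames)
    idxs = np.linspace(0, len(frames) - 1, num=num_frames, dtype=int)
    return [frames[i] for i in idxs]

For example, subsample_frames(video, 32) keeps 32 evenly spaced frames regardless of the original video length, so the frame budget can be varied while the rest of the pipeline stays fixed.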
When Qwen is combined with low-level policies (ImageNav or the Mapping agent), the overall Success Rate (SR) and SPL drop significantly, as reflected in the Low-Level (LL) metrics.
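For reference, SR and SPL here follow the standard embodied-navigation definitions, with SPL (Success weighted by Path Length) from Anderson et al. (2018): SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where S_i indicates success, l_i is the shortest-path distance to the goal, and p_i is the length of the path the agent actually took. The sketch below computes both metrics; the episode field names are illustrative.

def success_rate(episodes):
    # Fraction of episodes the agent completed successfully.
    return sum(ep["success"] for ep in episodes) / len(episodes)

def spl(episodes):
    # Each episode dict is assumed to hold: success (0 or 1), shortest_path
    # (geodesic distance from start to goal), and agent_path (length of the
    # path actually taken). The max() guards against measurement noise
    # making agent_path shorter than shortest_path.
    total = 0.0
    for ep in episodes:
        total += ep["success"] * ep["shortest_path"] / max(ep["agent_path"], ep["shortest_path"])
    return total / len(episodes)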
Example video with 5 rearrangement sequences during the experience collection phase.
Bird's-eye view of the agent's trajectory, showing the path taken during the various pick-and-place interaction routines.
@article{yadav2025findingdory,
  title={FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author={Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal={arXiv preprint arXiv:2506.15635},
  year={2025},
  url={https://arxiv.org/abs/2506.15635}
}