FindingDory: A Benchmark to Evaluate Memory in Embodied Agents

¹Georgia Institute of Technology, ²University of Oxford
FindingDory Benchmark Overview Figure

Abstract

Large vision-language models have recently demonstrated impressive performance in planning and control tasks, driving interest in their application to real-world robotics. However, deploying these models for reasoning in embodied contexts is limited by their ability to incorporate long-term experience collected across multiple days and represented by vast collections of images. Current VLMs typically struggle to process more than a few hundred images concurrently, highlighting the need for more efficient mechanisms to handle long-term memory in embodied settings. To effectively evaluate these models for long-horizon control, a benchmark must specifically target scenarios where memory is crucial for success. Existing long-video QA benchmarks overlook embodied challenges like object manipulation and navigation, which demand low-level skills and fine-grained reasoning over past interactions. Moreover, effective memory integration in embodied agents involves both recalling relevant historical information and executing actions based on that information, making it essential to study these aspects together rather than in isolation. In this work, we introduce a new benchmark for long-range embodied tasks in the Habitat simulator. This benchmark evaluates memory-based capabilities across 60 tasks requiring sustained engagement and contextual awareness in an environment. The tasks can also be procedurally extended to longer and more challenging versions, enabling scalable evaluation of memory and reasoning. We also present baselines that integrate state-of-the-art VLMs with low-level navigation policies, assessing their performance on these memory-intensive tasks and highlighting areas for improvement.
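For concreteness, the sketch below shows one way such a hierarchical baseline could be wired together: a VLM reasons over the Phase 1 interaction video to select a goal frame, and a low-level policy navigates toward it. This is a minimal illustration, not the benchmark's actual API; the names `vlm.select_goal_frame`, `nav_policy.act`, and `env.episode_success` are hypothetical stand-ins.

```python
# Minimal sketch of a hierarchical VLM + low-level-policy baseline.
# All object interfaces below are hypothetical stand-ins.

def run_episode(env, vlm, nav_policy, interaction_frames, task_instruction, max_steps=500):
    """interaction_frames: list of RGB frames collected during Phase 1."""
    # High level: the VLM picks the frame index that identifies the goal.
    goal_idx = vlm.select_goal_frame(interaction_frames, task_instruction)
    goal_image = interaction_frames[goal_idx]

    # Low level: an image-goal navigation policy executes until it stops.
    obs = env.reset()
    for _ in range(max_steps):
        action = nav_policy.act(obs, goal_image)
        if action == "STOP":
            break
        obs = env.step(action)
    return env.episode_success()
```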

Benchmark Tasks

FindingDory Benchmark Tasks

Results

Main Results

Proprietary VLMs such as Gemini-2.0-Flash and GPT-4o perform poorly across most FindingDory task categories. Agents struggle significantly on multi-goal tasks, where multiple subgoals must be achieved sequentially (sometimes in a specific order) based on the interaction videos collected during Phase 1 of the task. The Qwen-SFT baseline (see Appendix D.5), which is trained to predict ground-truth frame indices, performs best but still reaches only ≈50% on the high-level tasks. This highlights a considerable gap in the spatio-temporal reasoning capabilities of frontier models.

Main results showing performance of different VLMs on FindingDory benchmark
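The exact scoring protocol is not reproduced here; as a rough illustration, the sketch below scores a frame-index prediction as correct when it falls within the set of ground-truth frame indices that are valid for that query. The function name `frame_index_accuracy` and the data layout are assumptions.

```python
# Illustrative scoring of high-level frame-index predictions (assumed protocol):
# a prediction counts as correct if it lies in the set of valid ground-truth
# frame indices for that query.

def frame_index_accuracy(predictions, ground_truth_sets):
    """predictions: list[int]; ground_truth_sets: list[set[int]], one per query."""
    correct = sum(
        int(pred in gt) for pred, gt in zip(predictions, ground_truth_sets)
    )
    return correct / max(len(predictions), 1)

# Example: two of three predictions land inside the valid frame ranges.
acc = frame_index_accuracy([12, 87, 403], [{10, 11, 12}, {80, 81}, {400, 401, 402, 403}])
print(f"high-level accuracy: {acc:.2f}")  # -> 0.67
```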

Video Subsampling Results

We analyze how the number of input frames affects high-level performance by evaluating agents on subsampled interaction videos of varying lengths (Fig. 4a). This helps assess whether models benefit from longer histories or simply overfit to sparse visual cues. Interestingly, we observe that frozen VLMs—despite being capable of accepting long contexts—do not improve with more frames. In fact, their performance often degrades at higher frame counts (minimal subsampling), suggesting that they struggle to extract relevant information from densely packed, unfiltered input. This highlights a key limitation: naïvely increasing context length does not help unless the model can effectively attend to localized events. In contrast, the fine-tuned model (Qwen SFT) demonstrates clear gains when trained with longer videos, leveraging additional context to improve reasoning. This indicates that with appropriate supervision, models can move beyond shallow matching and utilize richer temporal signals from the interaction history.

Video subsampling results comparing different sampling strategies
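As an illustration of this kind of subsampling, the following sketch selects evenly spaced frames from a long interaction video to produce a fixed-length context. The exact sampling scheme used in the benchmark may differ, and `subsample_frames` is a hypothetical helper.

```python
import numpy as np

# Uniform temporal subsampling: pick `num_frames` evenly spaced frames
# from a long interaction video (sampling scheme assumed for illustration).

def subsample_frames(frames, num_frames):
    if len(frames) <= num_frames:
        return list(frames)
    idxs = np.linspace(0, len(frames) - 1, num=num_frames).round().astype(int)
    return [frames[i] for i in idxs]

# e.g. reduce a 5000-frame interaction video to a 128-frame context window:
# short_video = subsample_frames(long_video, num_frames=128)
```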

Low Level Policy Results

When Qwen is combined with an ImageNav or Mapping agent, the overall SR and SPL drop significantly (see the LL-SR/LL-SPL metrics).

Low level policy results showing performance comparison
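For reference, SR is the fraction of successful episodes, and SPL weights each success by path efficiency (Anderson et al., 2018). Below is a minimal sketch of both metrics, assuming each episode record carries `success`, `shortest_path`, and `agent_path` fields (field names are hypothetical).

```python
# Standard SR/SPL computation (Anderson et al., 2018); episode fields assumed.

def success_rate(episodes):
    return sum(ep["success"] for ep in episodes) / len(episodes)

def spl(episodes):
    # SPL_i = S_i * l_i / max(p_i, l_i), where l_i is the shortest-path length
    # and p_i is the path length the agent actually traveled.
    total = 0.0
    for ep in episodes:
        if ep["success"]:
            total += ep["shortest_path"] / max(ep["agent_path"], ep["shortest_path"])
    return total / len(episodes)
```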

Qualitative Results

Example Video Sequence

Example video sequence collected during Phase 1 of the FindingDory task

Agent Trajectory

Bird's-eye view of the agent trajectory, showing the path taken during the various pick-and-place interaction routines.

Bird's eye view of agent trajectory during FindingDory task