
AI-Powered Memory Breakthrough: Wearables and Robotics Gain Visual Recall with New Infrastructure



By admin | Mar 16, 2026 | 3 min read



Shawn Shen is convinced that for AI to truly thrive in the physical world, it must be able to remember what it sees. His company, Memories.ai, is building the foundational infrastructure that lets wearables and robots store and retrieve visual memories, an ambition underscored by a collaboration with Nvidia announced Monday at the chipmaker's GTC conference. Through the partnership, Memories.ai is using Nvidia's Cosmos Reason 2, a reasoning vision language model, and the Nvidia Metropolis application for video search and summarization to advance its visual memory technology.

The team's early work on AI glasses raised a crucial question: how useful would such a device be if it couldn't recall the video it recorded? After searching for existing visual memory solutions for AI and finding none, they decided to spin out from Meta and build the capability themselves. "AI is already doing really well in the digital world; what about the physical world?" Shen remarked. "AI wearables and robotics need memories as well. … Ultimately, you need AI to have visual memories. We believe in that future."

The concept of memory for AI systems is a relatively recent development. OpenAI updated ChatGPT to begin remembering past chats in 2024 and refined that feature in 2025. Similarly, Elon Musk’s xAI and Google Gemini have launched their own memory tools within the past two years. However, Shen notes these advancements have primarily concentrated on text-based memory. While textual memory is more structured and easier to index, it is less practical for physical AI applications that interact with the world predominantly through sight and visuals.

Memories.ai was founded in 2024 and has secured $16 million in funding to date. This includes an $8 million seed round in July 2025, followed by an $8 million extension. The investment was led by Susa Ventures and included participation from Seedcamp, Fusion Fund, and Crane Venture Partners, among others.

Shen explained that successfully creating this visual memory layer required tackling two core challenges: building the infrastructure to embed and index videos into a storable, recallable data format, and capturing the necessary data to train the model. The company introduced its large visual memory model (LVMM) in July 2025. Shen described it as comparable to a smaller version of Gemini Embedding 2, a multimodal indexing and retrieval model released earlier this month.
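The embed-and-recall pipeline described above can be sketched in miniature. The following is a hypothetical illustration, not Memories.ai's actual LVMM: the `embed` function is a toy stand-in for a real vision encoder, and clips are recalled by cosine similarity over stored embedding vectors.

```python
import numpy as np

def embed(frames: np.ndarray) -> np.ndarray:
    """Toy embedding: flatten each frame, mean-pool across frames,
    and L2-normalize. A real system would use a vision-language model."""
    v = frames.reshape(frames.shape[0], -1).mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

class VisualMemory:
    """Minimal visual memory index: store embedded clips, recall by similarity."""

    def __init__(self):
        self.vectors = []  # embedded clips
        self.labels = []   # human-readable descriptions

    def store(self, frames: np.ndarray, label: str) -> None:
        self.vectors.append(embed(frames))
        self.labels.append(label)

    def recall(self, query_frames: np.ndarray) -> str:
        """Return the label of the most similar stored memory
        (cosine similarity; vectors are already unit-normalized)."""
        q = embed(query_frames)
        sims = np.array([v @ q for v in self.vectors])
        return self.labels[int(sims.argmax())]

# Usage: store two "clips" (random arrays standing in for video frames).
rng = np.random.default_rng(0)
clip_a = rng.random((8, 16, 16, 3))  # 8 frames of 16x16 RGB
clip_b = rng.random((8, 16, 16, 3))

memory = VisualMemory()
memory.store(clip_a, "kitchen, morning")
memory.store(clip_b, "office, afternoon")
print(memory.recall(clip_a))
```

A production system would face exactly the two challenges Shen names: a far more capable encoder, and an index that scales to continuous video rather than a Python list.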

For data collection, the company developed LUCI, a hardware device worn by its "data collectors" to record training videos. Shen clarified that they have no plans to become a hardware company or sell these devices; they built their own because commercial video recorders, focused on high-definition, battery-intensive formats, did not meet their specific needs.

The company has since released the second generation of its LVMM and has signed a partnership with Qualcomm to run its models on Qualcomm’s processors, starting later this year. Memories.ai is already collaborating with several major wearable companies, Shen added, though he declined to name them.

While there is existing demand, Shen envisions even greater future opportunities in wearables and robotics. "In terms of commercialization, we are more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it’s probably just not now," Shen stated.



