New AI model can index a month's worth of footage
Memories.ai just dropped LVMM 2.0, a new memory model optimized to run natively on Qualcomm processors and set to land in devices starting in 2026.
It processes and organizes captured video, audio, and images on-device for local, privacy-preserving search, reducing reliance on the cloud.
It combines all your media into one searchable memory
LVMM 2.0 combines all your media into one searchable memory, letting you find moments in seconds across phones, cameras, or wearables.
Unlike models constrained by short video-context windows, LVMM 2.0 handles footage of effectively unlimited length and can index a month's worth of video using modest device storage.
LVMM 2.0 outperforms Google Gemini, ChatGPT
LVMM 2.0 outperforms Google Gemini and OpenAI's ChatGPT on video search and question-answering tasks while handling far more context, and because it runs on your device, it delivers faster results, lower costs, and better privacy.
Its tech mimics how we remember things visually, but at machine speed.