Think of this as Netflix building its own very smart "taste brain" that understands movies, shows, images, and text, then wiring that brain into all the ways it personalizes what you see — rows, artwork, search, and more — instead of relying on a bunch of separate smaller brains.
Traditional recommendation and personalization systems at Netflix relied on many separate models (for ranking, images, search, etc.) that were harder to maintain, less consistent, and slower to adapt to new content and use cases. A single, unified foundation model for personalization lets Netflix reuse a common intelligence across many applications, improve recommendation quality, accelerate experimentation, and reduce engineering overhead across teams.
Proprietary, multimodal foundation model trained on Netflix’s unique viewing history, content, and metadata; deeply integrated into core personalization workflows and the experimentation platform, creating strong organizational and data moats.
Hybrid
Vector Search
High (Custom Models/Infra)
Inference latency and cost of applying a large multimodal foundation model across billions of recommendation and personalization requests, plus vector-index scaling and freshness for a rapidly changing catalog and evolving user behavior.
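To make the index-freshness concern concrete, here is a minimal, illustrative sketch of an in-memory vector index with brute-force cosine search. It is not Netflix's system; the `VectorIndex` class and the `title_*` identifiers are hypothetical, and a production deployment would use an approximate-nearest-neighbor index. The point it demonstrates is that a newly ingested title only becomes retrievable once it is written into the index, which is exactly the freshness pressure a fast-changing catalog creates.

```python
import numpy as np

class VectorIndex:
    """Toy in-memory vector index using brute-force cosine similarity.

    Illustrates the freshness problem: an item can only be surfaced
    in recommendations after its embedding is added to the index.
    """

    def __init__(self, dim: int):
        self.dim = dim
        self.ids: list[str] = []
        self.vecs = np.empty((0, dim))

    def add(self, item_id: str, vec) -> None:
        # Normalize so that a dot product equals cosine similarity.
        v = np.asarray(vec, dtype=float)
        v = v / np.linalg.norm(v)
        self.ids.append(item_id)
        self.vecs = np.vstack([self.vecs, v])

    def search(self, query, k: int = 2):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        scores = self.vecs @ q
        top = np.argsort(-scores)[:k]
        return [(self.ids[i], float(scores[i])) for i in top]

index = VectorIndex(dim=3)
index.add("title_a", [1.0, 0.0, 0.0])
index.add("title_b", [0.0, 1.0, 0.0])
results = index.search([0.9, 0.1, 0.0], k=1)        # best match: title_a

# Freshness: once a new title's embedding is ingested, it can
# immediately outrank older items for the same query.
index.add("title_c", [0.95, 0.05, 0.0])
fresh_results = index.search([0.9, 0.1, 0.0], k=1)  # best match: title_c
```

At scale, the same trade-off shows up as index rebuild vs. incremental update latency: brute force is always fresh but linear in catalog size, while approximate indexes are fast but must be refreshed as the catalog changes.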
Early Adopters
Unlike generic LLM wrappers, this is a domain-specific, multimodal foundation model tightly coupled to Netflix’s recommendation stack and experimentation platform, enabling shared representations across many personalization tasks (ranking, search, artwork selection, explanations) rather than a single narrow use case.
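The "shared representations across many personalization tasks" idea can be sketched in a few lines: one embedding space serves both personalized ranking and item-to-item similarity. This is an assumption-laden toy, not Netflix's architecture; the embeddings, the `title_*` identifiers, and the user-vector construction are all invented for illustration.

```python
import numpy as np

# Hypothetical item embeddings produced by one shared foundation model.
# In a multi-model world, ranking and "more like this" would each need
# their own separately trained representation.
item_ids = ["title_a", "title_b", "title_c"]
item_emb = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.8, 0.6, 0.0],   # already unit-norm
])

# Toy user taste vector in the same space (weighted mix of watched titles).
user_emb = 0.7 * item_emb[0] + 0.3 * item_emb[2]
user_emb = user_emb / np.linalg.norm(user_emb)

# Task 1: personalized ranking — score every item against the user vector.
scores = item_emb @ user_emb
ranking = [item_ids[i] for i in np.argsort(-scores)]

# Task 2: "more like this" — item-to-item similarity from the same vectors.
sims = item_emb @ item_emb[0]
more_like_a = item_ids[int(np.argsort(-sims)[1])]  # skip the item itself
```

Because both tasks read from the same embedding table, an improvement to the shared model propagates to every downstream surface at once, which is the reuse argument made above.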