Amazon has launched Nova Multimodal Embeddings, a state-of-the-art model designed for agentic retrieval-augmented generation (RAG) and semantic search. The model distinguishes itself by representing multiple content types (text, images, documents, video, and audio) in a single framework, allowing users to perform cross-modal searches with high accuracy.
Traditional embedding models generally handle a single content type, which limits their usefulness in scenarios where mixed-modality content is common. Nova overcomes this limitation by maintaining one unified semantic space for all supported content types, simplifying the extraction of insights from diverse unstructured data. The model supports a context length of up to 8K tokens, input in up to 200 languages, and both synchronous and asynchronous API calls, making it versatile across a range of application needs.
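To make the idea of a unified semantic space concrete, here is a minimal sketch of cross-modal retrieval: items of different modalities share one vector space, so a text query can rank an image above a video or a document. The file names and embedding vectors below are hypothetical stand-ins for what a model like Nova would return; no real API is called.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Mock index: in a unified embedding space, items of any modality
# live in the same vector space (vectors here are made up).
index = {
    "photo_of_dog.jpg": [0.9, 0.1, 0.0],  # image embedding (mock)
    "cat_video.mp4":    [0.1, 0.9, 0.1],  # video embedding (mock)
    "invoice.pdf":      [0.0, 0.2, 0.9],  # document embedding (mock)
}

# Mock embedding for the text query "a dog".
query_embedding = [0.8, 0.2, 0.1]

# Rank all items, regardless of modality, by similarity to the query.
results = sorted(
    index.items(),
    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
    reverse=True,
)
print(results[0][0])  # the image of a dog ranks first
```

In production the mock vectors would be replaced by embeddings returned from the model (for example via the Amazon Bedrock APIs mentioned above), and a vector database would typically replace the brute-force sort.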
Additionally, Nova includes built-in responsible-AI safeguards, such as content safety filters and fairness measures, encouraging responsible, bias-reduced use in applications. Available in Amazon Bedrock, Nova Multimodal Embeddings helps organizations unlock value from unstructured multimedia assets while streamlining search and retrieval processes.
👉 Read the original: AWS Blog