Multimodal AI Agents That Can Plan, Reason And Explain
BRIDGEWATER, N.J., March 19, 2024 /PRNewswire/ -- Openstream.ai, the leading provider of multimodal neuro-symbolic Conversational AI solutions for visionaries, today announced it has expanded its portfolio of intellectual capital with the grant of a new patent covering a system and method for cooperative plan-based, utterance-guided multimodal dialogue. AI Virtual Assistants, AI Avatars, and AI Voice Agents created with Openstream's Eva™ (Enterprise Virtual Assistant) platform can understand end-user intentions by leveraging multimodal inputs and context to infer users' plans, dynamically generating human-like dialogue in real time across any channel without a script or hallucinations.
- With this patented approach, the sophistication, intelligence, and variety of conversations that multimodal AI virtual assistants can engage in take a leap forward.
- AI agents created with Eva can engage in complex, multimodal, multi-turn, multi-obstacle conversations that are continually informed by context.
- Revang continues: "Our system goes far beyond intent-slot and frame-based systems, reasoning and planning with very complex constraints and domain knowledge."