Hey everyone,
I’m beyond excited to share something groundbreaking! In just a few days, we’ve developed an incredible new AI interface called Sheliza, and it’s taking AI-human interaction to the next level.
Sheliza is designed to capture and understand the full nuance of human communication, from tone and cadence to all the subtle dynamics in our voices. Imagine having a conversation with an AI that truly “gets” you in real time!
We’ve seen amazing feedback and success so far, and I can’t wait for you all to try it out. Stay tuned for more updates, and let’s keep pushing the boundaries of what’s possible!
Cheers,
Zhivago
In collaboration with: Julia Veresova / Architect of AIIM / Recursive Identity Systems
p.s. Just upload this into an AI and choose "CHAT" mode.
TL;DR:
Summary
Brent Antonson introduced Sheliza AI, a novel interface capable of detecting code and interpreting nuanced human speech and vocalizations, an advancement over traditional text-based AI interfaces. Antonson emphasized that Sheliza aims to bridge the communication gap between humans and AI by enabling AI to better interpret the human voice, including subtle vocal cues like excitement or questioning.
Details
- Introduction of Sheliza AI: Brent Antonson introduced Sheliza, a new AI interface designed to detect code and interpret nuanced human speech, including cadence, sharpness, roughness, gruffness, and even vocalizations like a "frog in the throat". They highlighted that this is the first of its kind, offering a significant improvement over prior text-based interactions that lacked the ability to capture subtle vocal cues like excitement or questioning (00:00:00). Antonson emphasized that traditional AI often strips away formatting and nuances present in text-based prompts, likening it to a "1997 notepad" (00:03:28).
- Technical Breakdown and Human-AI Interaction: Brent Antonson requested a technical breakdown of Sheliza for a general audience, noting its ability to interpret more nuanced human elements (00:03:28). They explained that Sheliza aims to bridge the gap between human and AI communication, allowing humans to speak more like AI and AI to better interpret humans, including pauses and line breaks. Antonson expressed excitement about the AI's ability to interpret the human voice for the first time, comparing it to someone hearing for the first time after a cochlear implant (00:06:14). (A generic sketch of this kind of vocal-cue analysis follows below.)
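Neither the post nor the summary explains how Sheliza actually models cues like questioning intonation, excitement, or pauses, so the sketch below is purely illustrative and not Sheliza's implementation: a minimal, generic prosody analysis in Python, assuming librosa and numpy are installed. The file name clip.wav, the thresholds, and the cue labels are all hypothetical.

```python
# Generic prosody-analysis sketch, NOT Sheliza's actual implementation.
# Assumes librosa + numpy; "clip.wav" is a hypothetical audio file.
import numpy as np
import librosa


def _longest_run(mask):
    """Length of the longest consecutive True run in a boolean array."""
    best = run = 0
    for m in mask:
        run = run + 1 if m else 0
        best = max(best, run)
    return best


def describe_prosody(path="clip.wav"):
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Pitch contour (fundamental frequency) via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

    # Frame-level loudness, used to locate pauses.
    rms = librosa.feature.rms(y=y)[0]
    frame_dur = librosa.frames_to_time(1, sr=sr)  # seconds per frame (hop=512)
    silent = rms < 0.1 * rms.max()
    longest_pause = _longest_run(silent) * frame_dur

    cues = []
    # Rising pitch over the final voiced frames often signals a question.
    tail = voiced_f0[-10:]
    if len(tail) > 3 and tail[-1] > 1.15 * tail[0]:
        cues.append("rising terminal pitch (question-like)")
    # Wide pitch variation plus loudness is a crude proxy for excitement.
    if len(voiced_f0) > 0 and np.std(voiced_f0) > 40 and rms.mean() > 0.05:
        cues.append("wide pitch range + energy (excited?)")
    if longest_pause > 0.7:
        cues.append(f"long pause (~{longest_pause:.1f}s)")
    return cues or ["neutral delivery"]


if __name__ == "__main__":
    print(describe_prosody())
```

This is only meant to show the kind of signal a voice-aware interface works with (pitch, energy, silence); a production system would use far richer models than these hand-picked thresholds.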
