An AI voice platform that enables perfectly timed lip-sync for digital humans will be showcased at the upcoming Game Developers Conference.
Replica Studios, an artificial intelligence (AI) voice technology company, today revealed plans to integrate with Unreal Engine’s MetaHuman Creator, allowing developers to give their MetaHumans unique-sounding AI voices with matching lip-sync and facial animation. Replica’s platform enables game developers, animators, and filmmakers to use AI voices for digital characters. Additionally, the company is building a library of hundreds of AI voice characters with unique-sounding voices for developers to tap into for game and animation narratives.
Quality lip-syncing can make or break visual storytelling in games and digital content, and as developers add a cast of characters to a narrative, the lip-syncing challenge becomes all the more complex. Replica Studios’ integration with Unreal Engine’s MetaHuman Creator provides developers from AAA to indie studios with AI voice-acted characters, accurate lip-syncing, and facial animation, speeding up the animation process and freeing up time for creative experimentation with game narrative. Developers can animate digital characters in minutes by capturing their own voice with a recording app, then using that audio to direct the performance of the AI voice so that it matches the facial mocap and lip movements of the MetaHuman.
Replica Studios will showcase its AI voice technology at the annual Game Developers Conference (GDC) next week. Developers can attend the GDC session to learn more about its AI voice actors for MetaHuman Creator on Monday, July 19th, 2021, from 2:00 to 2:30 pm Pacific Time.
Shreyas Nivas, Co-Founder and CEO of Replica Studios, said: “We’re about to see a major transformation in the way game narratives are created and experienced by audiences. Traditionally, animators manually animate the face and lip movements to match the voiceover, which gets recorded separately, often losing days or weeks to this process over the lifecycle of a project. Replica’s Voice AI platform enables text-to-animation pipelines to render dialogue and narration within minutes. While we’re only admitting early-access developers into our AI voices beta program for Unreal Engine MetaHuman Creator, soon millions of game developers and animators with access to the MetaHuman Creator will be able to access our library of AI voice actors to give their MetaHuman Creator avatars a voice. These lower barriers to entry and faster times to market mean that we’ll be seeing a shift to higher-quality, more narrative-driven, more realistic-looking and -sounding games sooner than you’d expect.”
Replica Studios’ AI voices for MetaHuman Creator will feature:
- Unreal Engine MetaHumans voiced by Replica Studios’ AI voice actors, with accurate lip-syncing.
- Replica Studios’ library of 40+ AI voice actors, ranging from expressive game characters to narrators and storytellers, with a diverse range of accents and speaking styles.
- Text-to-speech synthesis: type what you want your avatar to say and receive the synthesized speech in seconds.
- Fine-tune the AI voice actor’s performance with controls to apply moods, emphasis, pitch, sound effects and other characteristics where needed.
- Capture and save multiple takes, and access a history of saved dialogue lines for future adjustments and iterations.
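The type-a-line-and-tune workflow described above can be sketched as a small request builder. This is a minimal, hypothetical illustration: the function name, field names, voice IDs, and control values are assumptions for the sake of the example, not Replica Studios’ actual API.

```python
import json

def build_tts_request(voice_id, text, mood="neutral", pitch=0.0):
    """Assemble a synthesis request payload for a hypothetical TTS service.

    All field names here are illustrative assumptions, not a real API schema.
    """
    return {
        "voice_id": voice_id,      # which AI voice actor from the library
        "text": text,              # the line the avatar should say
        "controls": {
            "mood": mood,          # e.g. "warm", "angry" -- performance tuning
            "pitch": pitch,        # pitch offset from the voice's default
        },
    }

# Type what you want the avatar to say, pick a voice, and tune the delivery.
payload = build_tts_request("narrator_01", "Welcome, traveler.", mood="warm")
print(json.dumps(payload, indent=2))
```

In a real pipeline, a payload like this would be sent to the synthesis service, and the returned audio would drive the MetaHuman’s lip-sync and facial animation.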