• AutoTL;DR@lemmings.world · 1 year ago

    This is the best summary I could come up with:


    The patent itself spends a considerable amount of time documenting the technical features of how input data is processed, but I’ll be focusing on the game design implications here since they’re more interesting.

    Inworld Origins revolves around the player roleplaying as a detective and asking AI NPC characters questions, with answers generated on the spot based on that input.

    It’s plausible enough that, recently, official English anime voice actors were openly speculating that they had been dubbed over with AI by Namco Bandai in the latest Naruto Ultimate Ninja Storm game.

    Most video game playable characters already function primarily as self-inserts from a narrative perspective, but making that connection literal seems like a quick way to break immersion.

    Considering the sheer scope of Electronic Arts and the funding available to them, I think they should just stick to hiring actual people for video game voiceover, especially for plot-critical characters.

    Adopting technology like this seems like a pretty sharp step backward for an industry that’s been selling video games to us with face-scanned screen actors for the better part of the past decade.


    The original article contains 473 words, and the summary contains 180 words. Saved 62%. I’m a bot and I’m open source!