In the future, maybe. Right now, though, it's very infeasible: it takes a huge amount of computing power to run even semi-believably, and it still has problems handling unexpected inputs. Chatbots built on LLMs like ChatGPT have already failed in hilarious (and sometimes dangerous) ways, so those issues will have to be ironed out before the tech can properly go into games.
As for the demo itself, it's very compelling, but Replica Smart NPCs' page doesn't really explain how it works. Going by their pricing plan (which looks pay-as-you-go, billed by the amount of speech you generate), it's probably funneling the player's voice into a speech-to-text model, piping the transcript to ChatGPT or GPT-4, and then running the reply through a text-to-speech model before sending it back to the user. All of that is going to be happening in the cloud - there's no way it can run feasibly on players' computers right now - which also explains the two-tier pricing model ($36 for 4 hours of generated speech, then a vaguely-defined "Enterprise" plan). As such, it's going to have the same deficiencies as those LLMs do in normal applications.
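To make the guessed pipeline concrete, here's a minimal sketch of that loop. This is an assumption about how it works, not Replica's actual API: every function here is a placeholder stub standing in for a cloud service call.

```python
# Hypothetical sketch of the likely cloud pipeline behind a "smart NPC":
# player speech -> text -> LLM -> synthesized speech.
# All three stages are stubs; in reality each would be a network call.

def speech_to_text(audio: bytes) -> str:
    # Would call a cloud STT model; stubbed with a fixed transcript.
    return "Where can I find the blacksmith?"

def llm_reply(transcript: str, persona: str) -> str:
    # Would call a hosted LLM (ChatGPT/GPT-4-style) with a persona prompt.
    return f"[{persona}] You'll find him past the market square."

def text_to_speech(text: str) -> bytes:
    # Would call a cloud TTS model. Billing per amount of speech generated
    # would land at exactly this stage, matching pay-as-you-go pricing.
    return text.encode("utf-8")

def npc_turn(audio_in: bytes, persona: str = "Innkeeper") -> bytes:
    transcript = speech_to_text(audio_in)
    reply = llm_reply(transcript, persona)
    return text_to_speech(reply)

print(npc_turn(b"<player speech>").decode("utf-8"))
```

Note that each stage adds round-trip latency, which is another reason this runs as a hosted service rather than inside the game client.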
That is, it's entertaining as long as it's limited to NPCs that don't advance the storyline (since they're inconsequential). But if you applied the same thing to story-driving NPCs, you'd probably run into cases where the player has to word their request in specific ways for the NPC to interpret it as a request to advance the plot. On the flip side, the player could try to break the game by mounting attacks against the text interpretation so that the NPC bugs out. That doesn't apply here, since this pipeline seems to be ONLY for voice - it has no additional privileges with respect to the game itself - but it's worth keeping in mind if/when other games DO try implementing this themselves.
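The brittleness argument above can be illustrated with a toy example. Everything here is hypothetical: a simple exact-match lookup stands in for the LLM's interpretation, to show why plot progression gated on the model recognizing an intent forces the player into specific phrasings, and why keeping game-state changes in one narrow branch limits what an "attack" on the interpreter can do.

```python
# Hypothetical sketch: plot progression gated on intent recognition.
# Exact-match phrases stand in for the LLM's (fuzzier) interpretation.

PLOT_TRIGGERS = {"tell me about the amulet", "where is the amulet"}

def classify_intent(utterance: str) -> str:
    # A real system would ask the LLM to classify; the failure mode is
    # the same - phrasings outside what it recognizes go nowhere.
    if utterance.lower().strip().rstrip("?") in PLOT_TRIGGERS:
        return "advance_plot"
    return "smalltalk"

def npc_respond(utterance: str) -> str:
    if classify_intent(utterance) == "advance_plot":
        # Only this branch would touch game state. A voice-only pipeline
        # never reaches code like this, so breaking the interpreter
        # yields weird dialogue at worst, not broken quests.
        return "The amulet lies in the crypt. (quest updated)"
    return "Lovely weather we're having."

print(npc_respond("Where is the amulet?"))            # advances the plot
print(npc_respond("What about that amulet thing?"))   # smalltalk: no progress
```

The second call shows the player's problem: a perfectly reasonable rephrasing falls outside the recognized intents, so the plot silently fails to advance.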