Not really. It would take immense effort to train bots to play “like humans” rather than “as performantly as” humans, which is very different. And if you’re going to be optimizing game parameters, that means you’re assuming either that the AI doesn’t change its behavior even though the game is different, or that humans will adapt in the same way the bots do.
Like, if all the humans use the AK because it’s super overpowered, and your optimization algorithm sets the AK damage to 0, what are your “human” bots going to do? Because all the training data says to use the AK.
This approach only makes sense if you’re evaluating bot-optimal play outcomes.
It also takes away a lot of the design thinking behind balance. You probably don’t want to nerf the AK. You probably want to buff counterplay options (guns are not a great example but still)
> Not really. It would take immense effort to train bots to play “like humans” rather than “as performantly as” humans, which is very different.
There is precedent in Maia Chess, which does a good job of mimicking human chess players at various Elo ratings. Of course, it's a lot more difficult to extrapolate to games with significantly more state/movesets, but I imagine that this space will be further explored in the near future.
> And if you’re going to be optimizing game parameters, that means you’re assuming either that the AI doesn’t change its behavior even though the game is different, or that humans will adapt in the same way the bots do.
This could be addressed by including the game parameters of interest (what map, what character, the weapon stats at the time of gameplay, etc.) as part of the model's input in the training data.
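Roughly, a minimal sketch of what I mean (all field and parameter names here are hypothetical): each training example pairs the observed game state with the balance parameters that were live when it was recorded, so the behavior model is conditioned on those parameters instead of being baked to a single patch.

```python
# Sketch: condition training inputs on the live balance parameters (hypothetical names).
from dataclasses import dataclass
from typing import List

@dataclass
class BalanceParams:
    ak_damage: float
    ak_fire_rate: float
    armor_value: float

@dataclass
class GameState:
    player_health: float
    distance_to_enemy: float
    ammo_remaining: int

def build_training_input(state: GameState, params: BalanceParams) -> List[float]:
    """Concatenate the game state with the balance parameters into one feature vector."""
    return [
        state.player_health,
        state.distance_to_enemy,
        float(state.ammo_remaining),
        params.ak_damage,
        params.ak_fire_rate,
        params.armor_value,
    ]

# The same recorded state under two different patches yields two distinct inputs,
# so the model can (in principle) associate behavior with the parameters in effect.
state = GameState(player_health=80.0, distance_to_enemy=12.5, ammo_remaining=24)
x_patch_a = build_training_input(state, BalanceParams(36.0, 600.0, 100.0))
x_patch_b = build_training_input(state, BalanceParams(27.0, 600.0, 100.0))
```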
> It also takes away a lot of the design thinking behind balance. You probably don’t want to nerf the AK. You probably want to buff counterplay options (guns are not a great example but still)
Tool-assisted QA is nothing new. Using AI is a newer iteration of the concept. You still have to interpret the results it gives and make decisions based on that. The design thinking isn't replaced, it's augmented with additional insights. Are those insights potentially inaccurate? Sure, but you can account for that with sanity checks/manual intervention/play testing.
You’ve largely missed or ignored my point. If you change the game, your AI either fails to adapt or adapts in an AI way. But it can’t reliably adapt in a human way without data on how humans adapt to the change. That’s just not how it works. Maia won’t mimic humans’ hypothetical behavior if you make it so that bishops can also move like knights.
There’s nothing wrong with using bots in playtest data, but you shouldn’t expect an optimization algorithm to generate fun, balanced game mechanics based on bot behavior by tweaking stats until the bots are evenly matched.
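To make the objection concrete, here’s a toy sketch of the kind of loop I mean (everything here is hypothetical, not anyone’s actual tooling): tune a weapon stat until bot-vs-bot matches hit a 50% win rate. The objective only measures bot parity; nothing in it measures whether the result is fun or balanced for humans.

```python
# Toy sketch (hypothetical): nudge a weapon stat until simulated bot-vs-bot
# matches reach a 50% win rate. The objective is bot parity, nothing more.
import random

def simulate_bot_match(ak_damage: float) -> bool:
    """Stand-in for a real bot-vs-bot simulation: True if the AK side wins."""
    win_prob = min(max(ak_damage / 60.0, 0.05), 0.95)
    return random.random() < win_prob

def tune_ak_damage(initial: float = 45.0, rounds: int = 50, matches: int = 200) -> float:
    damage = initial
    for _ in range(rounds):
        wins = sum(simulate_bot_match(damage) for _ in range(matches))
        win_rate = wins / matches
        damage -= 10.0 * (win_rate - 0.5)  # push toward a 50/50 bot win rate
    return damage

print(tune_ak_damage())
```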