I mean, it's great that people are investigating LLM biases, but looking at each individual question and the spread of answers seems to support the theory that companies aren't deliberately biasing their models (or are at least failing to do so), given that different generations of models from the same company flip their "stance" on certain issues.
But at the same time, I don't think asking these models how they feel about constitutional republics or abortion is useful for much beyond research: someone with a reasonably unaligned model trained on recent internet dumps could use it as a kind of mirror into public discourse.