I'm taking one of the tests and the feedback seems strange: the answer I selected turns green, which suggests I got it right, but many of the other answers turn red (which usually indicates an incorrect answer) and some unselected answers also turn green. It's confusing.
I find it an interesting perspective on the current risks of AI, and on how the development of LLMs interacts with our own biases and our economic system, potentially creating serious problems.
It doesn't seem reasonable to me to assume that the main objective of any government is to protect its citizens.