Because that doesn't seem to make sense. It could make sense if we read O as an operator (so that f would be the upper bound on g, whereas the definition goes the other way around), but saying that a function is *equal to* an upper bound is just odd. That's even leaving aside the fact that the notation uses "f(x)", i.e., f applied to x, to stand for the function f itself.
Some textbooks use set membership notation instead: f(x) ∈ O(g(x)). This makes sense if you think of O(g(x)) as the set of all functions with that upper bound. However, the (abuse of) equality notation is often more convenient. For example, f(x) = x^2 + O(x) reads “f(x) is x^2 plus (something bounded by) a linear term”. Writing f(x) = x^2 + g(x) where g(x) ∈ O(x) is more of a hassle.
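For reference, this set interpretation can be made precise; the following is the standard definition of the class (a sketch, not spelled out in the answer above):

```latex
% Standard definition of the class O(g):
% f belongs to O(g) iff |f| is eventually bounded by a constant multiple of |g|.
O(g) = \{\, f \;:\; \exists C > 0,\ \exists x_0,\ \forall x > x_0,\ |f(x)| \le C\,|g(x)| \,\}
```

Under this reading, f(x) = x^2 + O(x) abbreviates "f is x^2 plus some member of O(x)", which is exactly the convenience the answer describes.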
Anything using = for a relation that is not symmetric is cursed, IMO. More generally, any relation denoted by a symbol that is visually symmetric about a vertical axis should itself be symmetric.
Naively, one would assume that O( f(n) ) = O(n^2) + O(n) implies O( f(n) ) - O(n) = O(n^2), which is not correct. It does, however, imply O( f(n) ) = O(n^2), which would have been obvious if proper notation had been used.
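One common way to make this precise (a sketch of the standard convention, not stated explicitly in this thread): read each side of an O-equation as a set of functions, and read "=" as the one-directional inclusion ⊆ from left to right.

```latex
% Set reading: "A = B" for O-expressions asserts the inclusion A \subseteq B.
O(f(n)) = O(n^2) + O(n)
\quad\text{means}\quad
O(f(n)) \subseteq \{\, g + h \,:\, g \in O(n^2),\ h \in O(n) \,\}.
% Since O(n^2) + O(n) \subseteq O(n^2), transitivity gives O(f(n)) \subseteq O(n^2),
% i.e. O(f(n)) = O(n^2) in the abused notation. Subtracting O(n) is invalid
% because the inclusion cannot be reversed: n = O(n^2) and n^2 = O(n^2),
% yet n and n^2 are not interchangeable.
```

This is why such equations can be chained left to right but never rearranged like ordinary algebra.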
It's a convention. The same happens with small-o and, e.g., Taylor polynomials in first-year calculus: we write sin(x) = sin(a) + (x - a)cos(a) + o(x - a) as x → a. We use an equals sign even though the meaning of this string of characters is "belongs to a class", not "is equal to". It's quite convenient for long calculations.
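For completeness, the small-o class used here has a standard definition too (a sketch, assuming the usual limit formulation as x → a):

```latex
% Standard definition: h(x) = o(x - a) as x \to a means h(x)/(x - a) \to 0.
h \in o(x - a) \iff \lim_{x \to a} \frac{h(x)}{x - a} = 0
% So sin(x) = sin(a) + (x - a)\cos(a) + o(x - a) asserts that the remainder
% r(x) = \sin(x) - \sin(a) - (x - a)\cos(a) satisfies r(x)/(x - a) \to 0,
% again "belonging to a class" written with an equals sign.
```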
Do people really write f(x) = O(g(x))?