Efficiency is not important at all when you talk about a mathematical proof. Do we really care whether a proof by induction takes n or n^2 operations?
What matters is whether a given statement holds true, not minimizing the complexity of checking it.
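To make the point concrete, here is a small sketch (in Python; the function names are my own, purely illustrative): the question "does this list contain a duplicate?" answered by an O(n^2) pairwise check and by an O(n) set-based check. The truth of the statement is identical either way; only the amount of work differs.

```python
def has_duplicate_quadratic(xs):
    # O(n^2): compare every pair of elements.
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicate_linear(xs):
    # O(n): remember every element seen so far.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

# Both procedures decide the same statement for every input.
for xs in ([1, 2, 3], [1, 2, 1], []):
    assert has_duplicate_quadratic(xs) == has_duplicate_linear(xs)
```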
> Typically CS cares about the process to find a result, and not just its value nor the possibility or impossibility to find it.
Typically, programmers care about the process used to find a result. Some programmers are computer scientists, but you cannot conclude that all computer scientists are programmers, so your statement rests on a logical fallacy.
Computer science still has computational complexity theory and the analysis of algorithms as subfields, with complexity classes and Big O notation. These care very much about the process of finding results, which was the basis for my statement :-) The fallacy lies in your interpretation of what I said.
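As a sketch of what that kind of analysis distinguishes (Python; names are illustrative): linear search and binary search return the same answer on the same input, yet complexity analysis separates them as O(n) versus O(log n) processes.

```python
from bisect import bisect_left

def linear_search(xs, target):
    # O(n): may examine every element in the worst case.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n): halves the (sorted) search space each step.
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = list(range(0, 100, 2))  # sorted even numbers 0..98
assert linear_search(xs, 42) == binary_search(xs, 42) == 21
```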
There are results in theoretical computer science that don't care about efficiency at all; the most essential are the computability theorems, which explore which parts of mathematics can be computed, and what we mean by computation in the first place.
But the most essential part, the halting problem, was resolved long ago, so the lion's share of computer science today lies in finding practical ways to use computers, which again must take efficiency into account.
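Turing's diagonal argument for the halting problem can itself be sketched as code. The `halts` function below is hypothetical; the whole point of the argument is that no such function can exist.

```python
def halts(f, x):
    # Hypothetical decider: would return True iff f(x) halts.
    # No implementation can exist; a stub stands in for it here.
    raise NotImplementedError("no halting decider exists")

def diagonal(f):
    # Do the opposite of whatever `halts` predicts about f(f).
    if halts(f, f):
        while True:
            pass  # loop forever
    else:
        return  # halt immediately

# If `halts` existed, diagonal(diagonal) would halt exactly when
# halts(diagonal, diagonal) says it doesn't -- a contradiction.
```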
> Efficiency is not important at all when you talk about a mathematical proof. Do we really care whether a proof by induction takes n or n^2 operations?
Efficiency and tractability of representation are also an issue, though, and mathematicians (and programmers) care about that a great deal. That's why the lambda calculus has been used as the basis for proof assistants such as Coq, whereas Turing machines have not.
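One way to see the connection is the Curry-Howard correspondence that such proof assistants build on: a proof is a lambda term, and checking the proof is type-checking. A minimal illustration in Lean syntax (Coq's term language is analogous; the theorem name here is my own):

```lean
-- A proof that conjunction commutes is literally a function
-- (a lambda term) that rearranges the two components of a pair.
theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

Representing proofs as terms like this is what makes them tractable to construct and check mechanically, in a way a Turing-machine encoding would not be.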