Depending on the hypothesis space, what you call the "uninformative prior" may not exist at all. If you are estimating a real-valued parameter, the uninformative prior is a uniform distribution over the entire real line. That distribution does not normalize and is therefore off-limits to Bayesians.
Ultimately, I think you are strawmanning frequentism here. Just because the maximum-likelihood estimate sometimes coincides with the MAP estimate does not mean the two have the same meaning. This is why the uncertainties computed by the two approaches often differ numerically and carry a not-so-subtle difference in interpretation: one quantifies uncertainty in belief, the other the imprecision of an experiment. You can't summarize that as "do you want to be explicit about assumptions".
Nobody normalizes an uninformative prior on its own. Normalization only happens once you compute your posterior.
I am not strawmanning anything: Bayesian methods are a generalization of frequentist methods. The equality / isomorphism to frequentist methods in the special case of the uninformative prior is commonly demonstrated in introductory (undergrad-level) Bayesian textbooks and is in fact trivial. One need not even invoke any infinities: if your hypothesis space is discrete, infinities never show up (and in that case the prior normalizes just fine). And if you're curve-fitting (i.e., using parametric methods), the "infinite" line goes away as soon as you multiply your uninformative prior by your likelihood.
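A minimal numerical sketch of that equivalence (a toy Gaussian-mean example; all names and numbers here are illustrative, not from the thread): with a flat prior, the log-posterior is the log-likelihood plus a constant, so the two are maximized at exactly the same parameter value.

```python
import numpy as np

# Toy data: 50 draws from a Gaussian with unknown mean (known sigma = 1).
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)

# Discrete grid of candidate means; a "uniform prior" over this finite
# grid normalizes just fine, as noted above.
mu_grid = np.linspace(-5.0, 5.0, 2001)

# Log-likelihood of the data at each candidate mean (up to a constant).
log_lik = np.array([-0.5 * np.sum((data - mu) ** 2) for mu in mu_grid])

# Flat prior: the log-prior is constant, so the log-posterior is the
# log-likelihood shifted by a constant -- the argmax cannot move.
log_prior = np.zeros_like(mu_grid)
log_post = log_lik + log_prior

mle = mu_grid[np.argmax(log_lik)]      # maximum-likelihood estimate
map_est = mu_grid[np.argmax(log_post)] # MAP estimate under the flat prior
assert mle == map_est                  # identical by construction
```

The point estimates coincide by construction; the disagreement upthread is about what the surrounding uncertainty means, not about this argmax.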