From the inside of a compiler, imperative and procedural have little meaning: C turns very easily into a purely functional form called static single assignment (SSA).
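A minimal sketch of the idea, written in Ruby since that's the topic here (real SSA also inserts "phi" nodes where control flow merges; this only shows the straight-line case):

    # A mutable, "imperative" snippet:
    x = 1
    x = x + 2
    x = x * 3

    # The same computation in single-assignment form: every name is
    # bound exactly once, so it reads as pure dataflow.
    x1 = 1
    x2 = x1 + 2
    x3 = x2 * 3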
Actually, when you start thinking about everything like your compiler does, they stop having meaning in real life too!
If you want to make a language easier to optimize, you need to reduce runtime decisions the program might have to make and incidental details about the program the compiler has to preserve[1]. That's "it". I think the closest languages for some problems would be Julia, array languages, OpenCL, etc. Not so much Ruby…
[1] Like function signatures, structure layout, error behavior on bad inputs, memory aliasing, dynamic method calls, integer overflows, array overflows, stuff like that.
Ruby is the language I like best to work in, but you're right. When I started with Ruby, I wanted everything to be as static as possible, and I still do. What seduced me about Ruby was how cleanly it reads to a human.
But as someone who writes compilers as a hobby, Ruby is also a massive challenge.
Even reasoning about a simple Ruby program as a human is hard unless you make a lot of assumptions about how a reasonable developer will behave.
What does "2 + 2" do? In Ruby you technically don't know without knowing what's lead up to that point. Some trickster might have overridden Fixnum#+ to return "42" on every addition. Or format your hard drive. And if there's a single "eval" with user controllable input before that point, you can't statically determine what it will do, because that trickster may or may not be present at the keyboard for any specific run.
Instead, a JIT that can de-optimize and re-optimize code on the fly is preferable if you want efficiency, or an AOT compiler will need to add all kinds of guards and fallback paths to handle the crazy alternatives wherever whole-program analysis is insufficient to rule out the tricksters.
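Conceptually, such a guard looks something like this hand-written sketch (not real JIT output; ORIGINAL_PLUS and guarded_add are made-up names for illustration):

    # Cache the method definition as it existed at "compile time".
    ORIGINAL_PLUS = Integer.instance_method(:+)

    def guarded_add(a, b)
      if a.is_a?(Integer) && b.is_a?(Integer) &&
         Integer.instance_method(:+) == ORIGINAL_PLUS
        # Fast path: the compiler may emit native integer addition here.
        a + b
      else
        # Fallback: full dynamic dispatch, whatever + means by now.
        a.send(:+, b)
      end
    end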
And here's the kicker: common Ruby libraries use eval() all over the place. Often for inconsequential things that a compiler could in theory figure out. But not without a whole lot of extra analysis.
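A typical case is generating accessors from a string, a simplified sketch of a pattern found in many gems (Record and attribute are made-up names):

    class Record
      def self.attribute(name)
        # The method bodies are assembled as a string and eval'd, so a
        # compiler sees an opaque eval rather than a trivial reader/writer.
        class_eval <<-RUBY
          def #{name}
            @#{name}
          end

          def #{name}=(value)
            @#{name} = value
          end
        RUBY
      end

      attribute :title
    end

    r = Record.new
    r.title = "hello"
    puts r.title   # => hello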
For ahead-of-time compilation there are a ton of further challenges, such as the practice of executing code to alter the load path, and executing code to determine exactly which "require" calls are made to pull in code. 99% of the time this results in a static set of "require" calls and you could just compile it as normal. The other 1% of the time it's a plugin mechanism, and the intent is for you to load and eval() code at runtime.
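For example, this sort of thing is perfectly ordinary Ruby, and the require set is only knowable by running it (the paths are made up for illustration):

    # Executed at load time: first mutate the load path...
    $LOAD_PATH.unshift(File.expand_path("../lib", __dir__))

    # ...then compute which files to require by scanning the filesystem.
    Dir.glob(File.expand_path("plugins/*.rb", __dir__)).each do |plugin|
      require plugin
    end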
Ultimately, very few mainstream languages have as fluid a distinction between read/compile time and runtime as Ruby, or are as hard to statically analyse.
That's part of what makes it fun to try to ahead-of-time compile Ruby, but also drives me crazy at times when trying to figure out how to do it semi-efficiently.
A lot of this can be resolved with relatively small tweaks, or even just with current language features that most people don't use. E.g. if you were to do "Fixnum.freeze", the class can no longer be modified.
Now we know what 2+2 means, assuming we know what Fixnum looked like at the point of freezing.
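Concretely (using Integer, since Fixnum was merged into it in Ruby 2.4; older versions raise RuntimeError rather than FrozenError):

    Integer.freeze

    class Integer
      def +(other)   # attempting to reopen the frozen class...
        42
      end
    end
    # => FrozenError (can't modify frozen class)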
One can make Ruby vastly easier to reason about, and to optimize, just with careful application of a few things like that. Unfortunately, because current Ruby implementations do not reward that kind of behaviour, nobody takes those steps.