You can already do that through the config file: it lets you define custom endpoints for any OpenAI-compatible API, so you can use OpenRouter, or even local models via vLLM or alternatives. I think someone even managed to get cheaper pay-as-you-go API usage by hitting their "bulk" API for tasks that run overnight (where there's no need for immediate responses).
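For illustration, here's a rough sketch of what pointing an OpenAI-compatible client at a different backend looks like. The exact config keys depend on the tool, so this just uses the plain OpenAI Python SDK; the OpenRouter and local vLLM base URLs are the commonly documented defaults, and the model name and API keys are placeholders:

```python
# Sketch: any OpenAI-compatible backend is reachable by swapping the base URL.
from openai import OpenAI

# OpenRouter (hosted, pay-as-you-go across many models)
openrouter = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key (placeholder)
)

# Local model served via vLLM's OpenAI-compatible server (default port assumed)
local = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="unused",  # local servers typically ignore the key
)

resp = local.chat.completions.create(
    model="my-local-model",  # hypothetical model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```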