I could see that working. Just depends on your use case and what you want to accomplish.
The author's approach has the advantage that once data lands in the load table, it can be moved into the real table via some transform in batches, running in the background.
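A minimal sketch of that load-table pattern, using SQLite and hypothetical table names (`load_events` for the raw staging table, `events` for the real one) — the transform here is just a stand-in:

```python
import sqlite3

# Hypothetical schema: "load_events" is the raw load/staging table,
# "events" is the real table the transform feeds.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE load_events (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT);
""")

# Step 1: bulk-load raw rows into the staging table (fast, no transform yet).
conn.executemany(
    "INSERT INTO load_events (payload) VALUES (?)",
    [(f"row-{i}",) for i in range(10)],
)
conn.commit()

def transform_batch(conn, batch_size=4):
    """Move up to batch_size rows from the load table into the real table,
    applying the transform (here just uppercasing) and deleting the source
    rows. Returns the number of rows processed."""
    rows = conn.execute(
        "SELECT id, payload FROM load_events ORDER BY id LIMIT ?",
        (batch_size,),
    ).fetchall()
    for row_id, payload in rows:
        conn.execute("INSERT INTO events (payload) VALUES (?)", (payload.upper(),))
        conn.execute("DELETE FROM load_events WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

# Step 2: a background worker calls this on a schedule until the load
# table drains; callers never wait on the transform.
while transform_batch(conn):
    pass

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```

The point is that the expensive transform is decoupled from ingestion: the loader only appends, and the batch size bounds how long any single transaction runs.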
GitLab is surprisingly good for that. I keep backup schedules in it, executed as CI pipelines: a GitLab runner runs my custom Ansible code in a Docker container, using an image prepared for that purpose.
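For anyone curious what that looks like, here is a hedged sketch of such a `.gitlab-ci.yml` — the image name, playbook path, and job name are assumptions for illustration, not the commenter's actual setup:

```yaml
# Hypothetical example: run an Ansible backup playbook only when the
# pipeline is triggered by a GitLab pipeline schedule.
backup:
  image: registry.example.com/ops/ansible-backup:latest  # purpose-built image
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - ansible-playbook playbooks/backup.yml
```

The schedule itself (e.g. nightly) is configured in the project's CI/CD "Pipeline schedules" UI; the `rules` clause keeps the job from running on ordinary pushes.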