Consider the problem you are trying to solve. From the CPU's standpoint, I/O devices are very slow and need service only infrequently, but when they do need it, they impose strict hard-real-time latency requirements. Interrupts are one way to get useful work done while waiting for I/O without burning CPU cycles on polling.
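To make the contrast concrete, here is a minimal C sketch of the two approaches for a receive path. All register names, addresses, and the RX_READY flag are hypothetical, purely for illustration; a real driver would use the device's actual register map.

```c
/* Minimal sketch: polling vs. interrupt-driven receive.
 * Register names/addresses below are hypothetical. */
#include <stdint.h>

#define UART_STATUS (*(volatile uint8_t *)0x4000A000u) /* hypothetical status register */
#define UART_DATA   (*(volatile uint8_t *)0x4000A004u) /* hypothetical data register   */
#define RX_READY    0x01u                              /* "byte available" flag        */

static volatile uint8_t rx_buf[256];
static volatile uint8_t rx_head;

/* Polling: the CPU spins until a byte arrives, doing no useful work meanwhile. */
uint8_t uart_read_polled(void)
{
    while ((UART_STATUS & RX_READY) == 0)
        ;                      /* burn cycles waiting */
    return UART_DATA;
}

/* Interrupt-driven: the CPU runs other work, and this handler fires only when
 * the device actually has a byte, so no cycles are spent waiting. */
void uart_rx_isr(void)
{
    rx_buf[rx_head++] = UART_DATA; /* head wraps at 256 via uint8_t overflow */
}
```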
Another approach is a processing hierarchy, like old mainframes had: off-load the CPU onto some kind of I/O processor or channel controller that handles the real-time data transfers, and "coalesce" low-level interrupts into a single larger interrupt that covers more work -- think one DMA-COMPLETE interrupt instead of a pile of GET-SINGLE-BYTE interrupts.
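A rough sketch of what that coalescing looks like from the driver's side, assuming a hypothetical DMA engine with a dma_start() helper and a fixed block size (all names here are invented for the example):

```c
/* Sketch of interrupt coalescing via DMA: one DMA-complete interrupt covers a
 * whole block instead of one interrupt per byte. dma_start(), process_block(),
 * and the block size are hypothetical. */
#include <stddef.h>
#include <stdint.h>

#define DMA_BLOCK_SIZE 512u

static uint8_t dma_block[DMA_BLOCK_SIZE];

/* Hypothetical: ask the I/O processor / DMA engine to transfer `len` bytes
 * into `dst` on its own, raising a single interrupt when the block is done. */
extern void dma_start(void *dst, size_t len);

/* Hypothetical upper-layer consumer of completed blocks. */
extern void process_block(const uint8_t *buf, size_t len);

/* Fires once per completed block, not once per byte: the per-byte real-time
 * work was done by the DMA engine / channel controller. */
void dma_complete_isr(void)
{
    process_block(dma_block, DMA_BLOCK_SIZE); /* hand off the whole block */
    dma_start(dma_block, DMA_BLOCK_SIZE);     /* queue the next transfer  */
}
```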
You can of course push the processing into hardware, but hardware is much harder to change than an I/O driver, so the interrupt-driven driver design pattern wins on software maintainability.