From an analog design perspective, I don't think that makes sense. Not that I'm an analog designer, but I worked closely with them as a digital designer on CMOS camera sensors.
You're already extracting as much information as you can from the analog signal on the least significant bits. It's not like designing a log-scale ADC lets you pull more information from the least significant bits, so you don't really have anything to gain. Why build a more complicated analog circuit to extract less information? It's generally better to let the digital side decide what to keep, how to compress the signal, etc.
And I should mention that CMOS camera sensors can often do a lot of digital processing right there on the chip. So you can do log-scale conversion or whatever you want before you send the data out of the CMOS camera chip.
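To make the "do it digitally" point concrete, here's a minimal sketch of log-scale compression applied after a linear ADC conversion: map a linear 12-bit code to an 8-bit log-scale code. The function name and bit widths are just illustrative assumptions, not anything from a real sensor pipeline.

```python
import math

def log_compress(sample, in_bits=12, out_bits=8):
    """Map a linear ADC code to a log-scale code in the digital domain.

    Illustrative sketch: log2(1 + x) keeps code 0 at 0 and compresses
    the top end, so small signals keep more output codes per input code.
    """
    max_in = (1 << in_bits) - 1    # 4095 for 12 bits
    max_out = (1 << out_bits) - 1  # 255 for 8 bits
    return round(max_out * math.log2(1 + sample) / math.log2(1 + max_in))

# Dark pixels keep fine gradations; bright pixels get coarser steps.
print(log_compress(0), log_compress(4095))
```

The same mapping (or any other companding curve you prefer) can run in the on-chip digital logic before the data ever leaves the sensor, which is exactly why there's little reason to bake a log response into the analog circuit.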
It might be possible to reduce the power consumption of a SAR (successive-approximation register) ADC by skipping conversion of the less significant bits once you've resolved signal on the more significant bits. But I doubt the power savings would be very significant.
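For context, a SAR ADC resolves one bit per comparison, MSB first, so truncating the search early saves one comparator/DAC cycle per skipped bit. Here's a behavioral sketch of that idea; the `msb_only` early-stop parameter is my own hypothetical knob, not a feature of any particular ADC.

```python
def sar_convert(vin, vref=1.0, bits=8, msb_only=None):
    """Behavioral model of a SAR ADC: binary search, one comparison per bit.

    If msb_only is set, stop after resolving that many MSBs -- the
    hypothetical power-saving truncation described above. The unresolved
    LSBs are simply left at zero.
    """
    code = 0
    steps = bits if msb_only is None else msb_only
    for i in range(bits - 1, bits - 1 - steps, -1):
        trial = code | (1 << i)
        # Internal DAC generates the trial voltage; comparator decides the bit.
        if vin >= (trial / (1 << bits)) * vref:
            code = trial
    return code

print(sar_convert(0.7))              # full 8-bit result
print(sar_convert(0.7, msb_only=4))  # top 4 bits only, 4 cycles saved
```

Since the comparator and DAC settling dominate only part of the total pixel-readout power budget, halving the number of bit cycles doesn't halve the chip's power, which is why the savings are probably modest.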