Why would they bother? That's a terrible way to approach it.
Just pass legislation requiring in-country datacenters that can be decrypted by thoughtcrime enforcers, like Russia and China are doing. Trying to get this done via a CSAM list that is audited absurdly closely would be a huge waste of time and provide no significant benefit, and if such a request were ever made public, it would likely result in severe political and economic sanctions.
That's what everyone's missing in this argument. There's no need to be underhanded and secretive when you can just pass laws and make military-backed demands of companies under those laws. Trying to exploit the CSAM process would be a horrifically bad idea; it would result in public exposure and humiliation rather than the far more useful outcome that simply passing a law would provide.
Without the technology deployed, Apple can (and did) say they don't have the ability to break into users' phones.
If Apple deploys on-phone scanning, governments can just tell Apple to support a new list. It won't be the NCMEC CSAM list. It will be a "public safety and security" list. I wouldn't rule out underhandedness either. [1]
Apple already has technology deployed on macOS and iOS that scans every file on disk against binary signatures, and it can release new signatures for those scans at any time, through updates that are very difficult for normal users to block. It has had that capability for years, maybe even a decade by now, and to date we have seen no abuse of that list.
How is Apple's new CSAM list somehow increasing the chances of Apple going rogue, given that we've all been living with that risk for the past X years?
Each system is closed source, provides a mechanism for checking content signatures against files on disk, and is thought to report telemetry to Apple when signatures are found.
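Concretely, that kind of exact-match signature scan boils down to something like the sketch below (the paths and signature list are hypothetical; the real scanners are closed source, so this only illustrates the mechanism):

    import hashlib
    from pathlib import Path

    # Hypothetical signature list: SHA-256 digests of known-bad files.
    KNOWN_SIGNATURES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def scan(root: Path) -> list[Path]:
        # Walk the tree and flag files whose exact bytes match a known signature.
        hits = []
        for path in root.rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_SIGNATURES:
                    hits.append(path)
        return hits

Note that changing a single byte of a file defeats an exact-match check like this.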
How is CSAM scanning new and different from those existing closed-source systems?
I'd say the primary differences are that the CSAM scan is a perceptual hash rather than a regular file hash, and that the technical infrastructure of the CSAM system is designed from the ground up to be used against (rather than for) the user, reporting them individually to the authorities for violations.
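To make that distinction concrete: a regular file hash changes completely if even one byte changes, while a perceptual hash is computed from the image content, so near-duplicate copies land close together. The toy average-hash sketch below (assuming Pillow for image decoding) illustrates the idea; it is not NeuralHash, the learned perceptual hash Apple's system actually uses:

    import hashlib
    from PIL import Image  # assumes Pillow is installed

    def file_hash(path: str) -> str:
        # Cryptographic hash: any single-byte change yields a completely different digest.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def average_hash(path: str, size: int = 8) -> int:
        # Toy perceptual hash (aHash): grayscale, downscale to 8x8, threshold each
        # pixel against the mean, so re-encoded or lightly edited copies of the same
        # picture produce similar bit patterns.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        # Perceptual hashes are compared by bit distance, not exact equality;
        # a small distance means "probably the same image".
        return bin(a ^ b).count("1")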
Do you have an alternate design in mind that is both "used for the user" and effective at reporting CSAM content being uploaded from the device, without allowing abusers to opt out of that reporting? I haven't been able to come up with anything myself, but maybe you've had better luck.
China forced Apple, by legislation, to implement new iCloud logic that assigns China-region user data to China-hosted datacenters. Most countries, unlike the US, are not constrained to only exercising previously built mechanisms, rather than creating new ones, in response to government demands. If China decides to require Apple to censor non-CSAM content on-device, it will do so whether or not CSAM content fingerprinting exists. That China has not done so is because it benefits greatly from Apple's manufacturing and sales and does not wish to create a diplomatic incident with Apple.
How long until a group of governments tells Apple to add Tank Man to the list?