As long as no actual CSAM is needed in the training data and the generated images are sufficiently different from any real person (both of which seem technically feasible), that seems like a good thing.
Or is there any evidence that the availability of CSAM actually increases the likelihood that people act on it later?
The bigger issue is that these types of bans feel a lot more like banning speech than banning a real crime, and the precedent they set can end up being used in far-reaching ways. That’s how it always is.