AI Art Generation Handbook/Censorship
From the beginning, AI art image generators have had safety guardrails to guard against the generation of "unsavoury" images, but over time these safeguards have become stricter and stricter, as the examples below show.
Note: All of these images were taken during the Bing Image Creator Great Purge in late 2023/early 2024, when even innocuous prompt tokens were blocked with a Content Warning or Unsafe Image flag. Some of the prompts blocked then may or may not still be blocked today.
DALL-E
Images with potential likeness to real people
Within the censorship context, as image generation has improved year by year and approaches photorealism (see DALL-E 2.5), images generated by DALL-E may be misused by persons with ulterior motives. The AI safety committees at various AI institutes therefore put up stricter guardrails, especially by adding the names of famous persons/persons of interest to the block lists. In the case of DALL-E, even generated images with human elements are saturated to the point that they look cartoonish rather than realistic (e.g. this example of Henry Kissinger).
Images with political elements
In this other example, the prompt contains political elements (especially ones related to China at that time): DALL-E blocked combinations of Winnie the Pooh (with hidden connotations to China's current leader) and the word Taiwan in the same prompt, triggering a content warning and blocking the prompt from generating an image.
Images with elements of body diversity
In this example, during the Bing Great Filter Purge, many body-diversity prompts (especially those containing potentially "offensive" tokens: fat, obese, skinny, dark skinned, etc.) are also believed to have triggered the system alarm, blocking the prompt from generating images. Such prompts may have been misconstrued as body shaming of the individuals depicted or as inherently racist.
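The exact moderation logic behind Bing Image Creator and DALL-E has never been published, but the behaviour described in the last two examples, where single "offensive" tokens or politically sensitive token combinations refuse a prompt outright, is consistent with a blocklist applied to the prompt before any image is generated. The Python sketch below is purely hypothetical: the token lists and the is_blocked helper are illustrative assumptions, not the real implementation.

<syntaxhighlight lang="python">
# Hypothetical illustration only: the real Bing/DALL-E moderation pipeline is not public.
# BLOCKED_TOKENS, BLOCKED_COMBINATIONS and is_blocked are made-up names for this sketch.

BLOCKED_TOKENS = {"fat", "obese", "skinny"}          # single tokens assumed to trip the filter
BLOCKED_COMBINATIONS = [
    ({"winnie", "pooh"}, {"taiwan"}),                # blocked only when both groups co-occur
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt would be refused with a content warning."""
    tokens = set(prompt.lower().replace(",", " ").split())
    # Rule 1: any single blocked token is enough to refuse the prompt
    if tokens & BLOCKED_TOKENS:
        return True
    # Rule 2: certain token groups are blocked only when they appear together
    for group_a, group_b in BLOCKED_COMBINATIONS:
        if group_a <= tokens and group_b & tokens:
            return True
    return False

print(is_blocked("Winnie the Pooh waving a Taiwan flag"))  # True under these assumed rules
print(is_blocked("Winnie the Pooh having a picnic"))        # False: no co-occurring sensitive token
</syntaxhighlight>

In practice, production moderation systems almost certainly rely on learned text and image classifiers on top of (or instead of) literal keyword lists, which is one reason seemingly innocuous prompts can still be flagged.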
Images with potential gore elements
In this example, skeletons may have been accidentally grouped into the gore categories, which is perhaps why prompts containing skeleton may be blocked, even though skeleton imagery (see: Halloween celebrations) may seem benign compared to other types of gore images.
Images with religious significance
This is a more sensitive topic in certain parts of the world, where certain tokens related to significant religious symbols are possibly deemed unsafe to generate due to their religious meaning.
Images with sexual undertones
Although the prompt itself does not explicitly request explicit photos, the DALL-E 3 image model may have a tendency to generate lewd imagery if similar keywords are present in the prompt, and/or the image filters may be more restrictive in DALL-E 3.5.
By comparison, SDXL image generations will most of the time render a close-up photo of the character wearing skimpy nightwear.
Stable Diffusion
Unintentional Censorship
As per the latest hoo-hah, the releases of both SD 2.0 and the latest SD3 Medium also faced backlash over the prompt "Girls laying down on grass field", which generated mutilated limbs.
At times, the censorship applied to the training dataset may be so strict that it causes unintentional censorship of other, visually similar subjects, such as the examples on the left.
A cow udder is visually similar to a human female breast, and the CLIP vision model may also have unintentionally pruned images with visible cow udders during dataset pruning.
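A minimal sketch of how such a mistake could happen during CLIP-based dataset pruning is shown below, assuming a pipeline that scores each training image against a small set of text captions and drops any image whose top-scoring caption is an "unsafe" one. The model checkpoint, caption list, threshold, and the should_prune helper are illustrative assumptions; Stability AI has not published its actual filtering pipeline.

<syntaxhighlight lang="python">
# Hypothetical sketch of CLIP-based dataset pruning; not Stability AI's actual pipeline.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Assumed caption set: one "unsafe" caption and some benign alternatives.
captions = [
    "a photo of exposed human breasts",   # treated as unsafe in this sketch
    "a photo of a cow udder",
    "a photo of a farm animal in a field",
]
UNSAFE_CAPTION_INDEX = 0

def should_prune(image_path: str, threshold: float = 0.5) -> bool:
    """Drop the image if the unsafe caption gets the highest CLIP probability above the threshold."""
    image = Image.open(image_path)
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    # A close-up cow udder can score surprisingly high on the "unsafe" caption,
    # which is how benign farm imagery ends up pruned by accident.
    return probs.argmax().item() == UNSAFE_CAPTION_INDEX and probs[UNSAFE_CAPTION_INDEX].item() > threshold
</syntaxhighlight>

Because zero-shot CLIP scoring only measures visual-textual similarity, any benign subject that happens to resemble an unsafe caption (a cow udder, a skeleton, a classical nude statue) can be pruned along with the genuinely unwanted images.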