Wikibooks:Artificial Intelligence
This page contains a draft proposal for a Wikibooks policy or guideline. Discuss changes to this draft at the discussion page. Through consensus, this draft could become an official Wikibooks policy or guideline.
The following draft policy outlines the Wikibooks community's perspective on the use of artificial intelligence-generated content on this site.
Text generation
Large language models (LLMs), often referred to as "AI chatbots" or simply "AI", can be beneficial in certain circumstances. However, like human-generated text, machine-generated text can contain errors or flaws, or even be entirely useless. In particular, asking a language model to write a book or an essay can produce complete fabrications, including fictitious references. The output can be biased, libel living people, infringe on copyrights, or simply be of poor quality. This might not pose a large risk for rote tasks within closed communities; however, these issues quickly become problematic in large communities and in environments where knowledge transfer, verifiability, accountability, and critical thinking are important. In particular, the volume and speed at which LLMs can generate content, all of which would need to be verified, mean that their use poses an outsized risk. As such, LLMs may not be used to generate or summarize material and ideas at Wikibooks, and any sources they cite should not be blindly trusted.
Translation
LLMs may not be used for translation of content. Please see Wikibooks:Content translation for further information.
Media
Many AI tools have the capacity to create media, particularly images, from prompts. If you are interested in uploading such media, please be aware of the relevant licensing policies: those of our sister project Wikimedia Commons if you upload it there, or our local policy on images if you upload it here.
Required disclosure
Any permissible content made with the help of an LLM must be explicitly disclosed as such, both in the edit summary and on the page's discussion page. The following information must be provided:
- The date of generation/addition
- The tool and tool version used (e.g. Gemini, ChatGPT, Midjourney)
- The prompt(s) fed into the tool
This applies to every instance of AI-generated content: if you create new prompts or incorporate generated content into a page multiple times, each instance must be documented, including on the talk page.
Detection and enforcement
As of this policy's creation, there are no reliable, high-quality tools capable of detecting AI-generated materials. Instead, editors will have to be on the lookout for various issues, such as:
- Illogical or meaningless sentences
- Sentences, phrases, or arguments that seem coherent on the surface but do not hold up to scrutiny
- Word changes that inappropriately change the meaning of a sentence
- Citations or sources that do not match a claim
- Phrasing that suggests it was generated in response to a prompt
- Low-quality or flawed images
All of these issues, however, can occur without the use of generative AI tools. If you detect these issues, you should first engage with the contributor in good faith to point out and address the problematic content, with the goal of resolving the issues. If good faith discussion and guidance fails, or if repeated, unambiguous violation of this policy is found, problematic editors may be subject to warning and subsequent editing restrictions.
Copyright violation detectors (e.g. Earwig's Copyvio Detector) can help identify text copied verbatim from online sources.
Policy updates
Because the field of widely accessible "AI" and generative models is still young, this policy may need to change over time to best serve the project. When needed, updates should be proposed and discussed on this policy's talk page.