5 Policy
Developments in generative AI are rapid: what does not seem to work today may already be implemented next week. It is therefore important to realize that policy based on what students and teachers are or are not allowed to do with generative AI is vulnerable to future developments. Such policy, however well structured today, may not prove robust in the future.
It is better to define policy that guides the use of generative AI on the basis of universal scientific and societal values, such as fairness, openness, transparency, accountability and responsibility.
5.1 Durably embedding generative AI in academia
The following policy suggestions ensure that generative AI is implemented in an academic setting by means of human moderation. They place the user in control of responsible interaction with the AI tool and safeguard conscientious downstream use of the output the tool generates. The suggestions address the input and the output of tools separately, since both are easily prone to misuse. Following the suggestions below minimizes the opportunity for unlawful, unethical and unfair use of AI tools in academia.
Suggestion 1: Minimize the use of AI tools as they are (currently) environmentally unfriendly
Many users are unaware of the impact that contemporary AI tools may have on the environment. Together with, e.g., cloud storage and e-mail traffic, AI tools contribute to a hidden carbon footprint that often escapes our awareness. While it is not as apparent as airline travel, the impact of using AI tools may be far greater than you think (Berthelot et al. 2024). Although the environmental impact may be significantly lowered by on-chip generative AI, there will always be a cost to using AI tools. Many people have thought about how AI will impact human life, and Hollywood has monetized its threat to human existence. Few may have realized that our lives may also be at risk through AI-induced global warming.
Suggestion 2: Don’t input confidential or personal information
This may seem intuitive, but the ability of AI tools to mimic human-like interaction may lull the user into disclosing more information than is allowed. It is not hard to imagine a scenario where a well-engineered prompt would allow for the identification of the person who interacts with the AI tool, or worse, of other people who are unaware that their personal ideas or identifying information are being disclosed. While the AI tool itself seems anonymous and may not be sentient, any user should be aware of potential information or attribute disclosure.
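One practical safeguard is to screen prompts for obviously identifying information before they leave your machine. The Python sketch below is a minimal illustration of this idea; the patterns and the `redact_prompt` helper are hypothetical examples, and genuine personal-data detection requires far more than simple pattern matching.

```python
import re

# Minimal, hypothetical patterns; real personal-data detection needs
# far more than regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{8,}\d"),
    # e.g. a 9-digit national identification number (hypothetical pattern)
    "ID_NUMBER": re.compile(r"\b\d{9}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely personal identifiers with placeholders before
    the prompt is sent to any external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the complaint by jane.doe@example.org, phone +31 6 12345678."
    print(redact_prompt(raw))
    # Summarize the complaint by [EMAIL REDACTED], phone [PHONE REDACTED].
```

Such a filter does not replace human judgment: it merely catches the most mechanical slips, and the responsibility for what is disclosed remains with the user.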
Suggestion 3: Don’t input information that violates IP or copyright
When interacting with AI tools, users should protect intellectual property rights and copyright. When submitting prompts to AI tools it is paramount to ensure that
- you are allowed to share the information in the prompt, or have explicit permission to do so
- you are not infringing on any right associated with the information in the prompt
Suggestion 4: Don’t violate IP when using output from the tool
Likewise, it is paramount to realize that intellectual property or copyright may be infringed when using the output of the AI tool. The AI tool was trained on a large set of data, and some of that data may have been used illegitimately. By using AI-generated output you may plagiarize existing work or otherwise infringe on intellectual property rights.
This is a tricky scenario, as the nontransparent training of AI tools makes it challenging to prove that no IP is infringed by the realized output. Provided the other suggestions are not violated, however, one could argue that embedding AI tools in a normal scientific knowledge discovery scenario would minimize the chance of any infringement. Such a route results in a process where information from multiple sources is processed and curated by an actual human.
Suggestion 5: Confirm the output accuracy
Never put all your eggs in one basket. AI tools have been demonstrated to yield inaccurate, incomplete or false information. This can happen when the AI tool perceives patterns or objects that are nonexistent, resulting in nonsensical or inaccurate output, often referred to as hallucinations (Ji et al. 2023; Alkaissi and McFarlane 2023; Athaluri et al. 2023), although some resistance against that term has emerged (Østergaard and Nielbo 2023). It is important to always use multiple sources to verify any piece of information. As a user of AI tools you should accept that you alone are responsible for using, interpreting and curating AI-generated output.
Suggestion 6: Check the tool output for bias
AI tools are prone to biased output due to the language and biases incorporated in their training data (Duin and Pedersen 2021; Bockting et al. 2023). AI tools may therefore produce output that impacts real humans and their rights, identities, characteristics or status. Be careful not to input prompts that are likely to generate biased output, and do not use AI-generated output when it may be biased.
Suggestion 7: Don’t do bad or illegal things with AI tools
Never use AI tools for illegal purposes or with malicious intent, such as creating viruses, hacking networks or committing any other form of computer crime or other crime.
Suggestion 8: Disclose the use of AI tools in your work
Always disclose the use of AI tools, even when the work you produce is derived from AI-generated output. Always be transparent about your work.
Suggestion 9: Ask the tool not to use input for training
As a safety precaution it is good practice to prevent further training of the models on your input data, as sketched below. This avoids the risk of accidentally disclosing information that should not have been sent as input.
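How such an opt-out is expressed differs per tool: some providers offer an account-level setting, others a request-level option. The sketch below is a purely illustrative Python example against a hypothetical HTTP API; the endpoint, the `allow_training` field and the `AI_API_TOKEN` variable are all invented for illustration, so consult your provider's documentation for the actual mechanism.

```python
import os

import requests  # third-party HTTP library

# Hypothetical endpoint and token; substitute your provider's actual API.
API_URL = "https://api.example-ai-provider.com/v1/generate"
API_TOKEN = os.environ["AI_API_TOKEN"]

def generate(prompt: str) -> str:
    """Send a prompt while explicitly opting out of model training.

    The 'allow_training' field is an invented example: check whether
    your provider supports a per-request or account-level opt-out.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "prompt": prompt,
            "allow_training": False,  # hypothetical opt-out flag
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]
```

Even with such a flag set, the safest assumption remains that anything sent as input may be retained, which is why this suggestion complements rather than replaces Suggestion 2.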
Suggestion 10: Open the content you produce using AI tools as much as possible
Publish content you produce with AI tools under permissive licenses wherever possible. Opening up your research is good practice in general.
5.2 Why are these suggestions necessary?
User interaction with AI tools may seem like an anonymous and private encounter, but it is far from that. The possibility of uploading materials to, and interacting with, a seemingly human-like tool in the privacy of one's own home may lull the user into a false sense of security. Kumar (2023) illustrates this wonderfully in a case study of a hypothetical professor. When your interaction with an AI tool feels secure, sharing private or rights-protected materials may seem inconsequential, but such practice is nonetheless unlawful. At the same time it is not realistic to forbid any generative AI interaction in academia: knowledge advancement and experimentation are foundations of scientific practice. For academic institutions it is therefore very important to have a policy in place that allows for human-moderated use of AI tools, such that these tools can be embedded in a normal scientific, evidence-based workflow. I believe that the same policy should apply to faculty and students, to ensure a common, community-driven and conscientious implementation and application of human-AI interaction, much in the same way as we implement and broadcast good scientific conduct.