Abigail Kensinger

COM 481: Online Communities

Wikipedia Advising Report

           Generative AI and large language models are becoming an issue in the Wikipedia community by creating unreliable original content and generating inaccurate sources. Although these models have useful applications, such as helping to outline ideas or improve writing, the damage they cause may outweigh these benefits and must be addressed in order to protect the integrity of the Wikipedia community and the information available to individuals. The goal of the Wikimedia Foundation (WMF) should be to manage the negative effects that generative AI has on the Wikipedia community, since new uses of AI are inevitable and will be better utilized if they are integrated into the existing system. By spreading information about how generative AI and large language models can be used in the Wikipedia community, engagement can rise as people gain the tools to use AI for better content creation rather than using it to do the work for them. The positive effects of generative AI are valuable for the facilitation of ideas, but its uses must be governed by guidelines in order to reap the benefits of engagement and reliable information.

           A major threat that generative AI and large language models pose to the Wikipedia community is that they can lower engagement from individual contributors. People engage with online communities when the benefits of doing so outweigh the costs. In the case of generative AI, the cost to participants is that they may feel their original work is not valued as much in the space. This cost is significantly more harmful than the benefit of using generative AI because it discredits original human work, which simultaneously pushes active participants away from the community and discourages new users from joining. As a solution, it would be beneficial to establish a system that encourages the use of generative AI in ways that do not take away from the engagement of human participants in the Wikipedia community. To do so, WMF should encourage using generative AI for producing content ideas or for grammar and spell checking. Because generative AI may provide unreliable information, its output should always be analyzed for accuracy, and WMF should emphasize this to Wikipedia users. Implementing guidelines that allow this type of use while prohibiting the use of generative AI to produce original content would encourage participants to create more content and contribute to the community in general. Wikipedia already prohibits the use of generative AI in creating original content, but encouraging its use to facilitate quality human-created content will help integrate the tool into the platform. This can also be monitored by gathering feedback from participants on how generative AI helps them contribute content to the community. By staying current on how generative AI is affecting the Wikipedia community, WMF will be able to assess engagement and modify guidelines in order to maintain high engagement levels from participants.

           Additionally, generative AI threatens the Wikipedia community by providing users with unreliable content and sources. Generative AI draws from the many data sources it was trained on in order to create content. These sources can be biased, reflecting biases present in media and society in general, which can lead to inaccurate representation in generated content. A major problem with using this tool to generate original content is that the content can be unreliable or inaccurate, and the tool may also supply a user with inaccurate sources for information. It is important to find quality sources to draw from when contributing to articles or spaces in the Wikipedia community, and quality articles incorporate many quality sources. Generative AI is not proficient enough to provide sources and references suitable for Wikipedia content, and therefore it should not be used to do so. To combat this problem, WMF should create a verification system that monitors new contributions to the platform by checking them against information already available on Wikipedia. This would help ensure that the standards of the community, such as upholding content quality and ensuring reliability, are met. Similar to AI-detection software, this verification system would scan new contributions for the use of AI while also checking their reliability, which will help keep information in the Wikipedia community up to high standards.

           In conclusion, generative AI is a tool that presents new opportunities while also posing challenges to the Wikipedia community. By prioritizing user engagement on the platform and maintaining important Wikipedia standards such as quality and reliable information, generative AI can be integrated as a resource for learning rather than a replacement for participants. WMF can navigate these challenges by emphasizing the use of generative AI as a tool to facilitate new content contributions and by scanning those contributions with the goal of maintaining the aforementioned standards. With these goals in mind, generative AI will not be a threat to content creation but will instead be beneficial to the Wikipedia community.