Since the release of ChatGPT in November 2022, one of the main concerns about the chatbot has been its impact on education. Assistant professors Dr. Laura Ripoll Gonzalez and Dr. Francisca Grommé are currently exploring how students use Artificial Intelligence (AI) in education. They consider, among other questions, how AI affects education, which ethical issues it raises, and what kind of guidance students need from the university. Students belong to a generation that grew up in the digital era and adopt the technology into their routines quite easily, but they do face some ethical concerns.
ChatGPT is a form of generative AI (gen AI). This artificial intelligence lets users enter prompts to create original content, such as texts, images, videos or audio. Generally, students use the chatbot to revise texts, generate ideas, create texts, or retrieve information. Even though technology like ChatGPT can be enriching, it also comes with constraints and challenges.
The potential of chatbots
Grommé explains that students like the possibility of using a chatbot to reflect on their work. “Usually, their idea is not to generate a final product but to involve it in many steps of the writing process. ChatGPT works as a mirror for their work.”
Another advantage is that AI can make work more efficient. Ripoll Gonzalez mentions: “It is possible to train a chatbot on a specific data set. You could then search by keywords to quickly scan for specific information, which saves time. This can assist with the more technical aspects of literature reviews, for instance.” Both agree, however, that this also has disadvantages. Grommé points out: “Working like this attunes to the idea that we should work faster and put less effort into our work. I doubt whether we can get satisfaction from working like this.”
Plagiarism is the biggest concern
Many students wonder how they can use it with integrity and safely. Plagiarism is one of their main concerns. “The ethical use of Generative AI is not only a concern for students, but also for academics”, Ripoll Gonzalez says. “I might use ChatGPT for suggestions to fine-tune my writing, but not to generate ideas. Students may do the same. However, when they submit their work to us and the plagiarism software detects AI use, it is hard to discern whether it is still the student’s original idea.”
AI echo chambers
Another disadvantage, according to Ripoll Gonzalez, is that ChatGPT can act as an echo chamber. Chatbots are inclined to echo the views of their users, so they can feed our own biases back to us. People tend to ask chatbots full questions, whereas on traditional search engines they are more likely to use keywords. A complete question may unintentionally contain a prejudice: since chatbots are trained to extract clues from questions, you can expect an answer that reflects your own point of view. “In this case, language can be used against you”, Ripoll Gonzalez says.
Grommé adds that “these chatbots can also be biased because human beings have trained them. They learn from data, and if the data is biased, their outcomes will be too.” Ripoll Gonzalez elaborates further on this topic: “There is the additional issue of which information is fed to the language model, and whether we are respecting data privacy and copyright, for instance, when students use ChatGPT to process data from interviews as part of their thesis projects. This is an epistemological issue, redefining our relationship with technology as both users and creators of knowledge.”
Not all knowledge is available on the internet
Another disadvantage, according to Ripoll Gonzalez, is the fallacy that everything is available on the internet. “But that’s not true. Indigenous knowledge, for example, is not on the internet. Our students should not just trust that what is on the internet is enough.” Most information still comes from the Global North. “This also makes me think of the ethical issue of the paid environment of ChatGPT”, says Grommé. “Paid access gives you better resources than your fellow students in other countries or other parts of the world who don’t pay for it. Generally, universities in the Global North have more money. This results in disparities in knowledge between universities and countries.” Inequalities can thus develop among students, because some can afford paid access while others cannot, and among universities, because wealthier institutions can afford licenses for tailored AI applications.
Creating guidelines
Grommé explains that students are generally aware of the ethical issues regarding AI in education. “They are aware of the possible copyright violations, the bias, that sources can be scrambled up, that private companies are trying to profit from what we are doing, and so on.” She sees that students are willing to take responsibility for it if we keep having open and transparent conversations about it. “In this way, we can set up guidelines that we can use and keep developing together. We think having open conversations with students about using AI in education is important.” In any case, the university has already drawn up a basic user guideline for Generative AI, which holds that plagiarism occurs when a student uses AI software without the examiner’s permission.
Regulating AI is difficult (Collingridge dilemma)
Every technological change brings progress as well as problems. Hybrid working arrangements and video calling, for example, changed our norms for how we collaborate and what we expect from each other. However, regulation and management still need to catch up with these changes. The same applies to almost all innovations, including AI. It is possible to steer or regulate new technologies, but it is hard to know in advance what their effects will be. Once the effects are apparent and the technology is embedded in society, it becomes harder to change. This is called the Collingridge Dilemma.
Regulating AI is very difficult, Grommé explains. “Our norms are already changing with the technology before we make the rules. And by the time we make the rules, they no longer fit the norms”, she says. A technology can be successfully regulated while it is still young and unpopular. “Therefore, we must act quickly and keep talking to each other.”
Having listed all these disadvantages, Grommé emphasizes that open conversations are essential: “Together with students, but also among universities, and maybe even with chatbot developers. In this way, we can guide our students on how to use it responsibly. ChatGPT is already embedded in society. Therefore, it is essential for the university to regulate it and create clear guidelines.”
More information
This interview is part of Spark. With these interviews, we aim to draw attention to the positive impact of the faculty's education and research on society. The stories in Spark give an insight into what makes ESSB students, alumni, staff and researchers tick.
Contact: Britt van Sloun, editorial and communications ESSB, vansloun@essb.eur.nl