More about generative AI
Knowledge of how generative language models such as ChatGPT and other generative AI work is an important part of AI literacy and digital competence, and helps you understand the possibilities and limitations of the tools. Read more here.
How does generative AI work?
Simply put, large language models are trained to predict likely combinations of words, sentences, or images, with no regard for what is true or correct. Language models are not trained to produce factually correct content or to present facts; they are trained only to create text that reads well. Simply put, they work in the same way as predictive text on a mobile phone or computer. The University of Gothenburg has produced a film that gives a good explanation of how a language model like ChatGPT works. In itself, a language model is only good at putting together sentences that sound plausible; it is not trained to account for facts, and the text may well contain inaccuracies. This is sometimes referred to as "hallucinating". Some language models are connected to the internet or allow the use of uploaded material, which can make them considerably more reliable.
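The idea of "predicting the likely next word" can be illustrated with a toy sketch. Note that this is only an illustration of the principle: real language models use neural networks trained on enormous token collections, not simple word counts, and the tiny corpus below is invented for the example.

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a small
# invented corpus, then suggest the most frequent follower - the same
# basic idea as predictive text on a phone, at a much smaller scale.
corpus = (
    "the model predicts the next word "
    "the model generates likely text "
    "the text sounds good but may be wrong"
).split()

# Count bigram frequencies: how often each word follows each word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# In this corpus, "the" is followed by "model" twice, so that wins.
print(most_likely_next("the"))  # prints "model"
```

Notice that the predictor only knows what *usually* follows a word; it has no notion of whether the resulting sentence is true, which is exactly why factual errors can appear in fluent-sounding output.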
The language models have been trained on very large amounts of data from many different types of sources. Exactly which sources were used in the training is not known. Given the data used in training and how generative language models work, source-critical use becomes extremely important.
Generative AI is more than just language models: it also includes the creation of images, video, music, and more. Development is moving quickly and new variants are released continuously. Different AI tools have different strengths and abilities.
What is generative AI?
Perhaps without knowing it, we all encounter artificial intelligence (AI) in different forms every day; AI is nothing new. When you unlock your phone with facial recognition, that is AI. When you take photos with your phone, AI is used to find the best focus and settings, and when you use voice assistants like Google Home or Siri, AI is used to work out what you want. You get recommendations for films or series on streaming services, suggestions for answers to emails, and you (usually) avoid spam in your mailbox thanks to AI. Recently, however, the focus has shifted to what is known as generative AI, and above all to language models such as ChatGPT.
Generative AI is a form of artificial intelligence that has been trained on massive amounts of data to generate output in response to prompts or questions from a user. Examples of generative AI tools include ChatGPT, Dall-E, Elicit, GitHub Copilot, and Perplexity. AI in various forms, including generative AI, is and will remain a part of our lives, and we all need to relate to this. This includes being able to make wise and informed decisions about how and when AI can be used, which requires each of us to learn what AI is and how it affects us in our various roles. Being able to handle this is part of what is called AI literacy, but it is also part of digital competence and digitalization competence. Exploring and using generative AI yourself is a good way to develop both your digital skills and your digitalization skills.
For those of you who meet university students in teaching, this means, among other things, that you need insight into and knowledge about how your students will encounter AI in their future professional roles, and that you should update course syllabi and teaching accordingly. You also need to think through how you talk about generative AI services in your course, how students are, or are not, expected to use them, and how you can guide them in that use. This applies both to their own studies and during examinations. Clarify your stance on this and on academic honesty in general, but also think about how you and your students can benefit from AI. It is easy to fall into the trap of only thinking about students using generative AI services to write assignments and take-home exams, but this is about much more than that. Be clear about your expectations regarding the use of AI. Below you will find material that you can share with your students.
Source criticism and bias
For students, it is especially important to remember to be critical of sources and aware that biases may have been reproduced in what is created with AI. Given the data used in training and how generative language models work, source-critical use becomes extremely important, because the models are only trained to generate text that sounds good, or an image that looks good. The emphasis is not on the correctness of the text or image, which means that the content can be distorted and misrepresent what is true. For example, AI tends to attribute certain occupations to people of certain genders, or certain characteristics to people of a certain nationality or skin color. By far the majority of the data that the models have been trained on comes from the English-speaking part of the world, and an anglicized view of the world is thus the dominant one in what AI tools are creating right now. This is due to the data the AI has been trained on, which mirrors what exists in our world; since discrimination and biases exist, they will be found in, and in some respects even amplified by, the AI.
Mid Sweden University library has produced material to help students with source criticism. You can find this in the learning platform Moodle and in MiunSkills, and you are also welcome to come to the library with questions and thoughts about source criticism.
Important questions to ask yourself when using language bots in relation to teaching:
- Is it good pedagogy to use AI in this way?
- Is any learning lost by using the tool?
- Does this follow the applicable governing documents?
- Am I about to send sensitive information?
- Are there language errors?
- Are there factual errors?
- Are there skewed perspectives or bias?
Don't lose sight of the purpose of what you're doing. Ask yourself regularly whether what you are exploring actually leads to better teaching and learning. Not everything has to improve teaching right away, but exploring technology should not be an end in itself. A tip when deciding whether an AI tool can be used is the acronym ROBOT: Reliability, Objective, Bias, Ownership, Type. Ask yourself whether you can trust what you are generating, whether it is objective or contains distorted information, who is responsible for the tool and what that means, and what kind of tool it is - it may not be suitable for what you want to do.
Accounting for the use of generative AI
When an AI tool is used, this should be accounted for. How this should be handled in different contexts is something every institution or department should discuss and decide on - and also communicate to students. However, an AI tool cannot be designated as a co-author of a publication, since a person, natural or legal, is always responsible for the content of the publication. When using a generative AI service in your work, it can be useful to think of the service as a colleague and, to some extent, compare it to peer review procedures. You are always responsible for the content you generate. You must also be aware that text generated with AI services can contain biases and inaccuracies. If students are not trained and made aware of this, the risk increases that they will use AI in an unethical way. Read more about academic integrity and AI in the ENAI Recommendations on the ethical use of AI in education. If you let your students use generative AI, tell them how. Reflect on and formulate your attitude towards students using AI in your course, for example in the study guide, in the course room, and during lessons.
Are you a student and considering using AI? First, read more about generative AI, cheating and academic honesty and much more.
Some tips for keeping up with AI developments
- Try out some AI tools. By testing for yourself, you can form your own idea of the possibilities and limitations. It is a good idea to try several different tools in addition to ChatGPT.
- Book a workshop or consultation, alone or together with colleagues, to get guidance on how you can work with AI in your particular subject, or on what type of tools suit what you want to do. Book by contacting Educational development at PUkontakt@miun.se or www.miun.se/boka-pu
- Have collegial discussions about how you, within the subject or programme, can think about AI as a resource and in examinations, how AI will affect your particular subject or field, and how you view permitted and unauthorized use of AI tools. Also talk about how you will handle students who refuse to use AI (perhaps for ethical reasons, or something else).
- Test your exam assignments, questions, and take-home exams with text-generating AI tools and discuss the results with colleagues. How can you change your examinations to ensure that they are legally secure and relevant in relation to course objectives and learning activities? Some hints on how to design examinations in a way that strengthens academic integrity can be found below.
- Think about what is relevant knowledge in your subject and area, both today and tomorrow. What do students need to know?
- Teach source criticism. Students need to be aware of the risks of using AI services uncritically. Feel free to discuss and experiment together to educate and de-dramatize. One way to work source-critically is to let students compare information they find in different types of sources - AI tools, web searches, and literature - and then discuss their experience.
Contact the group for educational development if you have questions or concerns about generative artificial intelligence and higher education.