New technology always has its positives and negatives. From the introduction of computers in the classroom to the use of all kinds of devices to support learning, it has a big impact and sometimes raises difficult questions. Simple solutions are often adopted: websites are blocked, smartphones banned. But is that really a solution? All this new technology brings new ethical issues with it, and those certainly deserve a place in the classroom.
Almost every secondary school has an example of a photo or video that blew up on social media. The fallout is huge and often completely unpredictable, both for the perpetrators and the victims. Or what about arguments that get completely derailed online? Without seeing each other, it is much easier to say something than when you are face to face. Also awkward: as a teacher, there is something about you online that you would rather others not see, and it ends up affecting your ability to do your job.
These are all tricky situations in which technology plays a central role and which force us to think about the impact it has on our lives. At the same time, these are only the "small" ethical questions, because if you look a little further, much bigger issues are at play, issues that our students can certainly relate to.
Technology is often presented as something positive, and in many cases it certainly does make life easier and more enjoyable. But it sometimes has consequences that you don't immediately foresee. Facebook, for example, is very convenient for staying in touch with the whole world, until it turned out that all your data is being used. A dishwasher is essentially a kind of robot that takes work off our hands, but what if a robot really does take over? There are quite a few great topics here for asking challenging questions and talking with students:
Online surveillance: big data and social media have made it possible to build a profile of almost anyone, containing an awful lot of information. That information is currently used mainly to make money from ads. That is already questionable: do you still have any privacy online? And if you think it through further: what if your search history were linked to your health insurance? Whom do you entrust your information to, and whom not? Who actually has access to your information? And how do you know, and how can you control it?
AI (artificial intelligence): AI is getting ever stronger: computers are slowly but surely learning to learn on their own, without human intervention. This means that at some point there will be true artificial intelligence. What if that AI becomes smarter than humans? And if we fear that now, shouldn't we make certain agreements? What kind of agreements, then? Could an AI actually offer solutions, for example as a judge or as a minister? And if an AI makes a wrong decision, who is responsible?
(In)dependence on technology: we are becoming ever more dependent on technology, and maybe that is going too far now. Are our lives actually still getting better, or are we too glued to our screens? Don't technology (and therefore the companies behind it) determine too much of what we do and how we live? If this technology were to fall away, could we still manage? Doesn't technology cause more problems than it solves? And will there still be work when more and more jobs are taken over by robots?
Big themes like these are actually important for everyone: thinking about them helps you make better choices. They are also good to discuss with students, preparing them to ask the right questions when it matters. We don't yet know exactly how all this technology is going to affect students' lives, but that it will have an impact is certain.
So as a teacher it is good to get your students thinking about these topics too. It may sound like "yet another thing to do": talking about something vague that sounds like the distant future. In practice it is not that bad, because you are probably already more engaged with these themes than you think. What it mainly comes down to is opening up the conversation with open-ended questions to which you don't know the answers yourself either. Below are a few tips for getting started with ethical issues surrounding (the influence of) technology:
Don't set "digital ethics" as a separate topic in your curriculum: it's just about the additional questions that certain topics or assignments already raise. It's actually a line that runs through everything.
Digital ethics is also not confined to subjects that use computers. Many questions fit well in social studies; data and algorithm issues can find a place in mathematics. It concerns something that affects everyone all the time, and in that way it can be woven throughout a school's curriculum. And what about discussions about copyright? Or online privacy?
Ethics is not about "you must always" or "you must never": it is about complex issues in which nuance matters. Show students how you struggle with certain questions yourself: do you have a Facebook account, for example? And what considerations went into that?
Start small: ask questions that make students think. For example: why do you use one app and not another? What kind of photo would you never share? What kind of photo would you never forward? Who would you want to hack, and what do you think you would find?
The difficult thing about the ethical side of new technology is that it is new to us as teachers too. Usually you teach something you are thoroughly familiar with; when it comes to the impact of new technology on our lives, we are all still learning. By thinking about it together with students, we not only prepare them better for society, we also get a clearer picture ourselves, as teachers, of what we find important. A great challenge, but that is what education is about, right?
source: Innovative