Since the release of ChatGPT, we’ve seen the renewal of a long-running debate about the impact of new information technology on education. Some argue that interactive large language models will encourage students to cheat or subvert the development of their critical thinking. Others advocate the benefits of LLMs and are creating innovative ways to engage students in their education.
While both sides of the debate have some merit, the winner has already been decided. Historically and technically speaking, any fight to prevent the use of these tools is a losing battle. The only productive response is to embrace a future that includes them.
Fears Of The Past
This article is centered on a historical cliché. New technology emerges. New technology makes something easier. People worry that it will make kids less capable. Radio, TV, calculators, computers, the internet, and now interactive large language models have all led to widespread concern about the attention spans and educational outcomes of children.
Nearly 100 years ago, some worried that radio would lead to a decline in literacy. For decades, educators and psychologists have studied and debated the impacts of TV on cognitive outcomes. Calculators were going to undo math education. Personal computers faced a similar concern. And the internet has long been blamed for many societal ills.
The common thread here is being wrong. While there is some science behind the claim that too much media can have various negative impacts on kids, in most of the cases above the overall impact was improved educational outcomes.
Radio and TV exposed whole generations of kids to educational programming, news, and mature themes to which kids in prior generations had never been widely exposed. Calculators very quickly became a fixture in STEM classes, allowing students to do basic math quickly and freeing their attention for more complex ideas. Computers made typing a universal skill and made writing more accessible. And the internet has simply been its own educational revolution.
Truth vs Fiction
The internet is perhaps the best example here. Over the past 30 years there have been endless articles, op-eds, and theses written about the benefits and problems of the internet. Did the internet lead to a decline in educational outcomes? Are students easily fooled by misinformation, or are they taking advantage of the greatest reference in history?
In fact, what has happened is that the internet is both the greatest reference in history and a motivation to teach kids about the nature of factuality. It is now common for students to learn about the importance of sources and the likelihood of intentional misinformation. And that’s a good thing. Also common is discussion of the dead end of plagiarism and how doing your own work is the only path to a positive educational outcome.
The same can and will be said about ChatGPT and similar LLMs. Like a calculator or the internet, LLMs can do work for you. They can help you cheat. And students likely to cheat will cheat. But that’s not the story. The big picture is ultimately that like disruptive tools before them LLMs will create new opportunities for learning.
ChatGPT: The Next Big Thing
One of the least understood elements of this moment, as LLMs permeate our culture, is that a Pandora’s box has been opened. Once the internet was in everyone’s home, no one could really prevent students from using it. The only path left was adaptation. In the same way, those concerned about the spread of LLMs for economic or ethical reasons just aren’t going to be able to slow it down.
A Big Hit
ChatGPT is now the fastest-adopted software of all time. It reached 1 million users in 5 days and now has more than 100 million. It’s free and perhaps the easiest-to-use software ever created. It can accomplish an incredible number of tasks. It’s immediately useful and accessible to anyone old enough to type a question and read an answer. It can speak multiple languages and adjust its output to match specific education levels.
In other words, it’s really compelling. Many quickly find it more productive than online search. Depending on the topic, it can generate one correct answer instead of leading you to many links you have to dig through. And the natural language interface makes it feel more like a digital assistant than an online reference.
Many industries are already beginning to replace workers who do writing and basic administrative tasks. The list of jobs threatened by LLMs is long and has many people worried about their careers. While projections of this disruption have varied wildly, it’s likely that at least 1% of jobs will be replaced by LLMs in the next year or two.
In addition to replacing jobs, LLMs are threatening to upend the search industry. For example, since the debut of ChatGPT, traffic to the coding reference website Stack Overflow has dropped 14%. That site is most often reached through Google searches. And the buzz around ChatGPT is that it’s a ‘Google killer’ because many who use it find it far more productive to generate specific answers than to dig through search results of varying quality.
The surprisingly immediate result has been the development or introduction of LLM tools by nearly every major player in tech. Microsoft has integrated GPT into its office suite and browser. Facebook and Amazon have LLMs in development. And Google is working on an LLM search assistant.
Whether for clicks, ethical concerns, industrial stability, or simply fear of the future, we’ve seen a high tide of resistance to the growth and use of interactive large language models. We’ve also seen an incredible amount of misunderstanding, lack of curiosity, and commercial agendas behind this resistance. And so for many reasons a majority of mainstream coverage of LLMs has been inaccurate, biased, or deceptive.
A common concern expressed about LLMs like ChatGPT is that they are trained on data that has been stolen from writers and artists. Large language models running generative AI software require a huge amount of data and many companies have simply scraped the internet for content with no concern at all for copyright.
While this concern is legitimate, as in the case of Stable Diffusion generating images bearing the Getty Images watermark, it is not inherent to the technology. Other companies are creating generative AI applications trained on legally acquired data. Shutterstock, for example, offers an image generator trained on its own licensed library.
The ethical concerns around LLMs are important issues being hotly debated by many, from the academy to social media.
LLMs can demonstrate biases based on the data used to train them. And these biases can have negative impacts on people depending on how they are used.
There are also significant concerns about privacy and accountability when it comes to the data used to train LLMs. If the data is not acquired with respect for the interests of those included in it, then LLMs trained on that data may violate privacy or damage reputations and businesses.
Most likely due to the potential financial impact of this technology on entire industries, we are seeing a huge swell of negative press that seems clearly intent on slowing the development and use of LLMs. While there are many legitimate reasons, stated above, to be concerned, much of this coverage is inaccurate to the point that bias is clear.
When an article faults ChatGPT for something it said, even though ChatGPT itself would note that the question wasn’t an appropriate prompt given its abilities, some kind of agenda is pretty clear. And unfortunately we’ve seen a flood of these fairly anti-intellectual articles recently.
It’s certainly not a stretch to think that businesses likely to lose market share would be running negative PR campaigns.
Maybe the most widely expressed concern is that LLMs can be used by students to do their schoolwork. Concerns about cheating are being widely discussed around this technology. Anxiety is strongest around English instruction and written essays, as it has become nearly impossible to prevent the use of services like ChatGPT.
Teachers are struggling to come up with an effective response to this new challenge. Some are doubling down on attempts to detect cheating using various AI detectors. Others are embracing LLMs and creatively engaging the technology in the service of their instruction.
But … no response to the concerns above can stop the advancement and widespread use of LLMs. Game over.
Slowing down the development and use of LLMs is about as unlikely as it was that we could have slowed down adoption of the internet. It’s like arguing that people should still go to the library to look things up in physical encyclopedias when they could just search for answers on Google. Some technological innovations just change the game overnight. This is one of those moments.
The short-term outcome of all this growth and innovation is the near impossibility of slowing it down. Any concerns we have about its use or potential for harm can only be mitigated by creative responses. For example, if we are concerned about students using it to generate their schoolwork, we must accept that there is no way to prevent that. The genie is out of the bottle. The landscape has simply changed, and the only logical response is adaptation.
So how do we respond to such a huge disruption to traditional methods of instruction?
The public school system certainly isn’t known for being agile. And we are seeing many school systems attempt to prevent the use of LLMs, even as that’s just not possible. It would seem that teachers are essentially on their own to adapt to this changing landscape.
When the internet was new, there was very little information available about how it could facilitate instruction. Even the idea of a website wasn’t fully formed. This was the ‘wild west’ of digital interaction, and the people creating that early internet had only conventions from print media, computer software, and their own creativity to draw upon.
We are in a similar moment with LLMs. While there is no end of instant ‘experts’ with something to say about it, very few can claim expertise in the field of AI-assisted education. As a result, this is a time of uncertainty, anxiety, excitement, and experimentation. A paradigm shift is happening.
There is currently an exciting burst of creative energy and production going on in this space. In online forums like the ChatGPT for Teachers group on Facebook you can find a steady stream of new ideas, projects, websites, and businesses that are offering teachers innovative options for the use of LLMs in the classroom.
It makes sense that most would find all this change a bit stressful. Teaching is already challenging enough without having to reinvent your approach, especially without qualified guidance. And yet there really are no experts to turn to at the moment. So instead we need to embrace this moment of change, connect with others doing the same, and try new things to see what works.
Here are some recommendations about how to facilitate constructive use of LLMs for students.
- Think of LLMs as a tool students can use to facilitate their own learning process and make it more fun. Help them see it as a digital assistant.
- Consider the educational value of students learning how to prompt an LLM to get a desired result.
- Have students generate content using an LLM and then have them assess its validity or quality.
- Have students use LLMs and assess their ability to generate different kinds of information.
- Have students generate writing with an LLM and then have them rewrite that in their own words.
- Create big assignments centered around the editing process that would be challenging to accomplish without the help of an LLM.
- Ask ChatGPT what it can do. Be specific about your needs and concerns. Ask follow-up questions. Really pick its “brain.”
- Join groups and follow people on social media who clearly have educational experience and don’t seem to be overly commercial in their focus.
- Don’t pay for anything. None of this should cost you a dime. There are some services that may be worth paying for, but you shouldn’t make that decision until you’ve seen what’s available for free.
- Don’t go down the prompting rabbit hole. You don’t need a list of 500 prompts. LLMs are a natural language interface. You really should be able to simply ask it for what you need.
- Be ready for errors. LLMs can be wrong. They can get confused. If things get off track start a new chat and ask a different way.