October 14, 2023
I probably watch too many movies. That being said, a lot of movies have predicted terrible or near-terrible things happening to humanity because of Artificial Intelligence: HAL 9000 in 2001: A Space Odyssey, the WOPR in WarGames, Skynet in the Terminator movies, and the machines of The Matrix. In all of these films, humans invent the very machines that end up trying to destroy them.
And now . . . it’s here.
I think the official date when the world uttered a Keanu Reeves-esque “Whoa!” was when ChatGPT was announced back on November 30, 2022. But in reality, AI has been making its way into our lives for decades. Unlike Google and other search engines, which find websites related to your search, ChatGPT is a chatbot that answers your questions or responds to your requests in well-crafted prose, drawing on the enormous trove of Internet text it was trained on. It’s incredible, really.
I’ve used ChatGPT for work a few times. I’ve asked it to give me ten arguments for or against an issue I’m researching, such as charter schools. Fifteen seconds later, I am reading a well-researched and probably accurate response that would have taken me well over an hour to research and write myself. It’s not a finished product, but it’s a great starting point.
It usually takes me about 10-12 hours to write one of these posts. I asked ChatGPT to write a 1,000-word post on artificial intelligence, using my writing style on www.drmdmatthews.com. Fifteen seconds later, I had a post. I’m not sure it used my style, and unlike me, it kept the post to well under 800 words, but it wasn’t bad at all. I have a link to it at the end of this post, should you want to compare. I fully expect some of my “friends” to tell me to keep using ChatGPT – adding that reading the AI-generated post was way more enjoyable than my 12-hour effort. I have wonderful friends.
Bill Gates, who over the years has morphed into someone even wiser and more intelligent than he was when he dropped out of Harvard to change the world, sees a whole lot of good coming out of OpenAI (the company behind ChatGPT, in which Microsoft is a major investor) and artificial intelligence in general. In a recent article, he looks forward to AI being a “co-pilot” or a “digital personal assistant” to those who take advantage of its power. He is confident that AI will personalize education to a higher degree than ever before. He does believe that we need to be careful and that regulation is needed, but he is far more excited about the potential benefits than he is about the danger to humanity.
Going back to the big screen, movies also present artificial intelligence models that showcase this benevolent side of AI. The most famous is C-3PO in the Star Wars saga, built by (spoiler alert) the child who would become Darth Vader. But my favorite is Jarvis, Tony Stark’s AI unit that plays a major role in the Avengers films. In one of them, Tony Stark (Iron Man) predicted Bill Gates’ terminology with his “Jarvis is my Co-Pilot” bumper sticker. It makes sense. Superheroes’ vision should be way ahead of mere mortals like Bill Gates. C-3PO and Jarvis represent the AI we all want: the brilliant and lightning-fast co-pilot that can improve our lives.
If I were a new teacher, I would love to use ChatGPT. I would ask my teaching co-pilot to give me five different lesson plans for teaching the causes of the Civil War, with reading options at higher and lower levels for students who would benefit from them. Boom! It’s there. What a great starting point for lesson planning!
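For readers who like to tinker, that same request (or the ten-arguments prompt I described earlier) can even be scripted. Here is a minimal sketch using OpenAI’s Python client, assuming you have installed the openai package and set an API key in your environment; the model name and prompt wording are just placeholders, and the ChatGPT website does the same thing with no code at all.

# A minimal sketch of scripting the lesson-plan request above.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are an experienced U.S. history teacher."},
        {
            "role": "user",
            "content": (
                "Give me five different lesson plans for teaching the causes "
                "of the Civil War, with reading options for students at "
                "higher and lower reading levels."
            ),
        },
    ],
)

print(response.choices[0].message.content)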
The US Department of Education has published its first report on the potential and risks of artificial intelligence. It agrees with Gates that there is potential for amazing good for students and teachers. Personalized tutoring possibilities hold great promise. But the report warns of potential risks to privacy and the danger of unexpected and unintended consequences.
This is no longer a question of whether or not we should use AI. The cat is out of the bag. Some superintendents are estimating that at least 75% of high school students are already using it. Motivated students (and even adults) can have a personal tutor helping them to learn anything they want. Teachers are worried (with good reason) that students will use AI to do all of their homework. As homework is a highly overrated tool for learning, I’m not too worried about that. But as a teacher, I believe that one of the most important skills that I taught was analytical writing, using research and evaluation of data. Show me a student who can write a well-crafted argument on a historical issue, and I’ll show you a person who has the skills to thrive in this world. Going forward, it will be very difficult for teachers to determine whether students or AI wrote an essay. One solution is to have all essays written in class, so teachers know that students actually wrote them. I look forward to seeing how teachers and educational leaders address this issue in the coming years.
But beware of letting AI solve our all-too-human work challenges. Back when I was working long days as an educator, one of my least favorite things was receiving an email that was wayyyyyyy too long. I didn’t have time for a long email, and often the tone of the email was harsh, unkind, and hard to get through. If I wanted, I could now copy the email into ChatGPT, ask it to give me a 100-word summary, and tell it to draft a 100-word response. Super time saver, right? But it’s a bad idea. Those privacy statements we all AGREE to without reading the gazillion words can allow ChatGPT and other AI tools to take everything you enter, every bit of information you provide, and use it to train future models. There is no privacy. We all need to be extraordinarily careful.
My son Dawson is a computer science major. AI can now write code far faster than he ever could. He’s really smart, so maybe he could keep up, but I doubt that any human can. Here’s the thing: it is essential for humans to understand what that computer code does, because when something goes wrong, smart human beings have to fix it. Many believe that AI will not only be able to program, but will eventually adapt and improve its own code, and when that happens, humans will have no idea what’s going on. While Dawson thinks that is a far-fetched idea, I still say that it is a potential problem.
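To make that concrete, here is a small, invented example of the kind of plausible-looking mistake an AI assistant could produce. The function is hypothetical, but the failure mode (Python’s floor-division operator used where true division was intended) is exactly the sort of thing only a human who actually reads and understands the code will catch.

def average_rating(ratings):
    # Intended: return the mean of a list of numeric ratings.
    if not ratings:
        return 0.0
    # Bug: `//` is floor division in Python, so the fraction is silently dropped.
    # average_rating([4, 5]) returns 4 instead of 4.5. The fix is to use `/`.
    return sum(ratings) // len(ratings)

The code runs without any error message, which is what makes it dangerous: a human has to understand the logic to notice the answer is quietly wrong.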
AI is too powerful not to use. But . . . danger lurks. Last week’s 60 Minutes episode began with a segment on Geoffrey Hinton, widely considered one of the godfathers of artificial intelligence. He is incredibly excited about the breakthroughs that AI will achieve in the areas of medicine, clean energy, and so much more. Yet, he is worried. He believes that AI is already capable of more intelligence than human beings, at least in terms of how fast it can learn and the unique strategies it can develop, and that the day will come when it becomes self-aware (think HAL and Skynet). 60 Minutes made the comparison to J. Robert Oppenheimer, who led the development of the atomic bomb, in that Hinton is now warning against the improper and unregulated use of his invention, saying that the risks may far outweigh the benefits.
He’s not alone. I listened to an outstanding episode of Ezra Klein’s New York Times podcast in which he interviewed Demis Hassabis, the 46-year-old chief executive of Google DeepMind and the lead on a project called AlphaFold, which has predicted the structure of virtually every protein known to science. The medical possibilities are incredible. It’s a spectacular interview, where both explain the research in terms that even I can understand. I was also fascinated by Hassabis’s path, from gamer, to game inventor, to AI world leader, and the commonality of games in all of it.
For all of its potential, some of it already realized, Hassabis is also issuing warnings about AI. He said, “I would advocate not moving fast and breaking things,” a direct rebuttal of Mark Zuckerberg’s “Move fast and break things” motto at Facebook. More ominously, in a recent one-sentence statement signed with other tech leaders, he warned, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Yeah, we should definitely try to avoid human extinction.
All of this makes teaching more important than ever before. We need to develop highly educated citizens who can double check what AI produces. We need human beings who we can rely on for truth and know-how. We need fact-checkers who can counter the false or fake information that AI can and will create in written, photo, and video form. And we need human beings who still pursue learning how to think, learning how to create, and learning how to collaboratively problem-solve. The human side of teaching matters more than ever.
There’s so much that needs to happen. Students need to use AI to learn, not just to complete assignments. Privacy needs to be regulated more than ever before. Our youth who are studying computer science, like my son Dawson, need to stay ahead of AI and be able to truly understand what it is doing. Companies need regulation on how they are using AI. And most of all, we humans need to stay in charge. A co-pilot or digital personal assistant could be helpful to all of us, and the science breakthroughs could make life better for everyone on earth.
But let’s be careful. None of us want to be in a movie with a tragic ending.
To get updates on when my next post comes out, please click here.
Post #93 on www.drmdmatthews.com