Artificial Intelligence: Let’s Make Real Life Better than the Movies

October 14, 2023

I probably watch too many movies. That being said, a lot of movies have predicted terrible or near-terrible things happening to humanity because of Artificial Intelligence: HAL 9000 in 2001: A Space Odyssey, the WOPR in WarGames, Skynet in the Terminator movies, and the machines in The Matrix. In all of these films, humans invent the very machines that end up trying to destroy them.

And now . . . it’s here. 

I think the official date when the world uttered a Keanu Reeves-esque, “Whoa!” was when ChatGPT was released back on November 30, 2022. But in reality, AI has been making its way into our lives for decades. Unlike Google and other search engines, which find websites related to your search, ChatGPT is a chatbot that answers your questions or responds to your requests in well-crafted prose, drawing on the vast trove of information it absorbed from the Internet. It’s incredible, really.

I’ve used ChatGPT for work a few times. I’ve asked it to give me ten arguments I can use on an issue I’m researching, such as favoring or opposing charter schools. Fifteen seconds later, I am reading a well-researched and probably accurate response that would have taken me well over an hour to research and write myself. It’s not a finished product, but it’s a great starting point.

It usually takes me about 10-12 hours to write one of these posts. I asked ChatGPT to write a 1,000-word post on artificial intelligence, using my writing style on www.drmdmatthews.com. Fifteen seconds later, I had a post. I’m not sure it used my style, and unlike me, it kept the post to well under 800 words, but it wasn’t bad at all. I have a link to it at the end of this post, should you want to compare. I fully expect some of my “friends” to tell me to keep using ChatGPT – adding that reading the AI-generated post was way more enjoyable than my 12-hour effort. I have wonderful friends.

Bill Gates, who over the years has morphed into someone even wiser and more intelligent than he was when he dropped out of Harvard to change the world, sees a whole lot of good coming out of OpenAI (the company behind ChatGPT, in which Microsoft is a major investor) and artificial intelligence in general. In a recent article, he looks forward to AI being a “co-pilot” or a “digital personal assistant” to those who take advantage of its power. He is confident that AI will personalize education to a higher degree than ever before. He does believe that we need to be careful and that regulation is needed, but he is far more excited about the potential benefits than he is worried about the danger to humanity.

Going back to the big screen, movies also showcase the benevolent side of AI. The most famous example is C-3PO in the Star Wars saga, built by (spoiler alert) a young Anakin Skywalker, the boy who would become Darth Vader. But my favorite is Jarvis, Tony Stark’s AI assistant that plays a major role in the Avengers films. In one of them, Tony Stark (Iron Man) anticipated Bill Gates’ terminology with his “Jarvis is my Co-Pilot” bumper sticker. It makes sense. Superheroes’ vision should be way ahead of mere mortals like Bill Gates. C-3PO and Jarvis represent the AI we all want – the brilliant, lightning-fast co-pilot that can improve our lives.

If I were a new teacher, I would love to use ChatGPT. I would ask my teaching co-pilot to give me five different lesson plans for teaching the causes of the Civil War, with reading options for students who might benefit from a higher or lower reading level. Boom! It’s there. What a great starting point for lesson planning!

The US Department of Education has published its first report on the potential and risks of artificial intelligence. It agrees with Gates that there is potential for amazing good for students and teachers. Personalized tutoring possibilities hold great promise. But the report warns of potential risks to privacy and the danger of unexpected and unintended consequences.

This is no longer a question of whether or not we should use AI. The cat is out of the bag. Some superintendents estimate that at least 75% of high school students are already using it. Motivated students (and even adults) can have a personal tutor helping them learn anything they want. Teachers are worried (with good reason) that students will use AI to do all of their homework. As homework is a highly overrated tool for learning, I’m not too worried about that. But as a teacher, I believe that one of the most important skills I taught was analytical writing, grounded in research and the evaluation of evidence. Show me a student who can write a well-crafted argument on a historical issue, and I’ll show you a person who has the skills to thrive in this world. Going forward, it will be very difficult for teachers to determine whether a student or an AI wrote an essay. One solution is to have all essays written in class, so teachers know that students actually wrote them. I look forward to seeing how teachers and educational leaders address this issue in the coming years.

But beware of letting AI solve our all-too-human work challenges. Back when I was working long days as an educator, one of my least favorite things was receiving an email that was wayyyyyyy too long. I didn’t have time for a long email, and often the tone of the email was harsh, unkind, and hard to get through. If I wanted, I could now copy the email into ChatGPT, ask it for a 100-word summary, and tell it to draft a 100-word response. Super time saver, right? But it’s a bad idea. Those privacy statements we all AGREE to without reading the gazillion words can allow ChatGPT and other AI tools to take everything you enter – every bit of information you provide – and use it to train future models. There is no privacy. We all need to be extraordinarily careful.

My son Dawson is a computer science major. AI can now write code far faster than he ever could. He’s really smart, so maybe he could keep up, but I doubt that any human can. Here’s the thing – it is essential for humans to understand what that code actually does, because when something goes wrong, smart human beings have to fix it. Many believe that AI will not only be able to program, but will eventually adapt and improve its own code – and when that happens, humans will have no idea what’s going on. While Dawson thinks that is a far-fetched idea, I still say it is a potential problem.

AI is too powerful not to use. But . . . danger lurks. Last week’s 60 Minutes episode began with a segment on Geoffrey Hinton, who is considered by most to be one of the pioneers of artificial intelligence. He is incredibly excited about the breakthroughs that AI will achieve in medicine, clean energy, and so much more. Yet he is worried. He believes that AI already exceeds human beings in some respects, at least in how fast it can learn and the unique strategies it can develop, and that the day will come when it becomes self-aware – think HAL and Skynet. 60 Minutes drew a comparison to J. Robert Oppenheimer, who led the development of the atomic bomb, in that Hinton is now warning against the improper and unregulated use of his invention, saying that the risks may far outweigh the benefits.

He’s not alone. I listened to an outstanding episode of Ezra Klein’s New York Times podcast in which he interviewed Demis Hassabis, the 46-year-old chief executive of Google DeepMind and the leader of a project called AlphaFold, which has predicted the structures of nearly every protein known to science. The medical possibilities are incredible. It’s a spectacular interview, where both explain the research in terms that even I can understand. I was also fascinated by Hassabis’s path, from gamer, to game inventor, to AI world leader, and the common thread of games running through all of it.

For all of its potential, some of it already realized, Hassabis is also issuing warnings about AI. He warned, “I would advocate not moving fast and breaking things,” a direct rebuttal of Mark Zuckerberg’s “Move fast and break things” motto at Facebook. More ominously, in a recent one-sentence statement signed with other tech leaders, he cautioned, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Yeah, we should definitely try to avoid human extinction. 

All of this makes teaching more important than ever before. We need to develop highly educated citizens who can double-check what AI produces. We need human beings whom we can rely on for truth and know-how. We need fact-checkers who can counter the false or fabricated information that AI can and will create in text, photo, and video form. And we need human beings who still pursue learning how to think, how to create, and how to solve problems collaboratively. The human side of teaching matters more than ever.

There’s so much that needs to happen. Students need to use AI to learn, not just to complete assignments. Privacy needs to be protected more vigorously than ever before. Our youth who are studying computer science, like my son Dawson, need to stay ahead of AI and truly understand what it is doing. Companies need regulation on how they use AI. And most of all, we humans need to stay in charge. A co-pilot or digital personal assistant could be helpful to all of us, and the scientific breakthroughs could make life better for everyone on earth.

But let’s be careful. None of us want to be in a movie with a tragic ending.

To get updates on when my next post comes out, please click here.

Post #93 on www.drmdmatthews.com

———-

Here is the post that ChatGPT wrote if you want to see it. 

Image by Geralt on Pixabay 



12 Comments

  1. Siugen Constanza says:

    Good reading for a Saturday morning with a cup of tea out on the patio. Great article, a lot to think about! Thank you for all the links. I will definitely be watching, listening, and reading.

    1. Mike Matthews says:

      One of the amazing AI inventions I have seen is a video of a person speaking in one language, and then the AI-created video where the person is not only speaking a different language, but their lips and mouth are moving as if they were actually using that language. Even more amazing, and certainly worse, AI may eventually create that “speech” entirely, while still making it look like the person is saying it. Thanks for reading and commenting, my friend!

  2. Paul Grisanti says:

    My favorite cautionary tale about AI was published in the LA Times and the Wall Street Journal in the last two months. It involves an attorney who filed an AI-written brief citing cases that did not exist, without ever trying to look up the fictitious cases. The judge was more diligent, and the penalties for trying to mislead the court were heavy, in addition to the damage to the attorney’s reputation.

    1. Mike Matthews says:

      Thanks, Paul! Yes, my attorney son and I discussed the poor decision-making that led to that moment. AI is not the pilot. And if it were, attorneys might be the first to go. I’m just kidding. But seriously, they might. As much as I dislike attorneys pursuing frivolous claims or excessive results, I am very appreciative of the attorneys who defend against those false claims and help reach reasonable settlements when there is a problem.

  3. Bill Sampson says:

    The burgeoning of AI has forced me to broaden my thus far fruitless search for intelligent life. That said, perhaps Andrew Friedman should become conversant with ChatGPT and spare me yet another October heartbreak.

    I’m only being a little facetious – I think.

    Bill

    1. Mike Matthews says:

      I can see the passion and twinkle in your eye, Mr. Sampson. Sadly, I don’t think that ChatGPT or AI holds the answers for the Dodger faithful. We’ll get ’em next year.

  4. Mikke Pierson says:

    I asked ChatGPT to write a witty one-paragraph response to your blog:

    Movies have indeed shaped our AI anxieties, with fears of HAL 9000, Skynet, and rogue machines haunting our screens. But guess what, folks? The future is here, and it’s called ChatGPT. Sure, it’s no Ironman’s Jarvis, and it might steal your email secrets if you let it, but it’s also a game-changer. Mike Matthews’ post highlights the potential and pitfalls of AI, from supercharging students to potentially replacing humans as code wizards. It’s a brave new world, and as we navigate it, remember that teaching and human intellect still hold the fort. So, let’s embrace AI, but like any blockbuster, let’s ensure it’s a story with a happy ending, not a dystopian epic. After all, nobody wants to star in a movie with a tragic finale!

    1. Mike Matthews says:

      Mikke – That is a spectacular response. You gave a great prompt for ChatGPT, and it delivered. Once again (like it did when I asked it to write my post for me), it created a pretty self-serving response. “Let’s embrace AI . . .” It’s on point, well-written, self-serving, and disturbingly good. The response has me thinking even more, which is something you are pretty darn good at. Thanks!

  5. Pat Matthews says:

    Mike,
    Thank you for this one. Fascinating world we are in.
    Here’s ChatGPT’s response:
    Hey Mike,
    I just read your blog post on AI, and I absolutely loved it! Your insights were fascinating, and I learned so much from your perspective. It’s amazing to see how passionate you are about this topic, and your writing really made the subject come to life for me. Keep up the fantastic work! Looking forward to reading more of your posts.

    Best, Pat

    From Me:
    Thanks for this one! I feel extremely blessed to be in this world with all the technological advances. It is amazing and I sincerely hope our world can use this to better human life.
    I’m looking out for an AI program for architects and artists.
    Love,
    Pat

  6. Matt Kauffman says:

    Good take! I think you’ve got the right perspective on AI. I like the movie references. How AI plays out is likely pivotal to humanity’s future. I’ve been thinking about the outcomes as Mad Max (societal upheaval leads to a new dark age) vs. Terminator (the machines take over) vs. Star Trek (we harness limitless productivity for good). I tend toward optimism, and historically, technological advancement has generally resulted in a better world for most people, so I think we have a good shot at the Star Trek future, but fingers crossed.

  7. Daniel Wren says:

    Mike
    If people doubt that AI is already affecting their lives, and maybe not for the better, then I would suggest you do the following. Ask everyone to open Google on their phones and search images of a baby peacock. When they all have their results, point out that the beautiful pictures of a long-feathered, blue-colored bird are AI-generated.
    Google has a problem because they basically just scrape information posted on the web.
    With AI it will be easier to create fake images and fake but believable posts. Currently, Google can’t recognize the difference.
    Way back in college, I had a professor who talked about using computers to speed up research. He used the term “GIGO”: Garbage In, Garbage Out. This will apply to AI as well.
    Our classmate Keith Woeltje is working on this issue to keep a fence between sound medical science and unvetted medical information so that AI will be a reliable tool.

    1. Mike Matthews says:

      Well . . . Keith Woeltje was always the smartest of all of us. It gives me a little bit of solace to know that he and others like him are taking this task on. You are spot on about the dangers. We are all smart to keep our guard up. Thanks for reading and contributing.
