If I ask you to think of a philosopher, who comes to mind? Is it Confucius? Plato? Socrates? Or do you think of someone a little closer to the present - Kant? Emerson? Nietzsche? What about the students and faculty of Central Michigan University? Your neighbor? Your boss?
Central Michigan University professor Matt Katz says that’s one of the things he wishes more people understood about philosophy: we’re all doing it every day, and the questions professional philosophers consider affect us all.
“One thing we want people to know is that the questions we ask are relevant to everybody. People are doing philosophy all day in their own lives,” says Katz.
“No, we’re not teaching students how to program a computer or something like that, but the things we talk about are important to everyday life and how we interact with the people around us. In philosophy, what students gain from class goes beyond raw knowledge to portable skills.” Katz says his students learn more effective communication strategies, how to weigh ideas and information, and how to analyze theories for their value to everyday life.
Along those lines, Katz recently asked his students to consider an application that has gained a lot of press in the past few years for both its benefits and its consequences: artificial intelligence (AI).
Each year the college chooses a critical engagement theme that allows students and faculty to take up a single question or topic and attack it from all angles through different classes and student groups. For the 2018-19 session that topic was the end of the world as we know it, and Katz says he tied in AI by asking students to consider whether or not it could spell the end of humanity. That it could be the greatest danger humans have faced yet is, after all, an opinion held by some of today’s most famous tech minds.
The famously outspoken Tesla CEO Elon Musk has long been one of the loudest and most critical voices in that crowd. At last year’s massive SXSW event he warned that the technology could be more dangerous than nuclear warheads, and called for tight regulation. Just last week, the non-profit AI research firm he backs, OpenAI, announced that it had built a system that can produce text-based deepfakes - news articles and works of fiction so convincingly composed that readers could mistake them for having been penned by humans. In the announcement, the firm took the extraordinary step of saying it won’t release the research publicly because of the danger it poses to humans.
Meanwhile, here at home, Katz and his students took up the topic during the final segment of his Philosophy of Psychology class. And they were rigorous in their approach, delving into everything from what we mean when we talk about AI, to whether or not it is realistic to expect AI to pose a threat to us — or, alternatively, to offer any great benefit.
The result was an enthusiastic foray into algorithms, computers that have the capacity to learn, and the various consequences and benefits that arise when humans try to get a technology to behave as a human would.
“Right now, we don’t have devices that can do all the things humans can do altogether; they can only do individual tasks - like Alexa,” he explained, citing Amazon’s well-known home assistant and listening speaker. So Katz wanted his students to consider not just what AI is now, but what it might be in the future.
Will we ever have one device that can do everything? What benefits and dangers would come with that? If there are dangers, are there ways we can limit those dangers without forfeiting the benefits?
At the end of the semester, Katz says, the class came away with a healthy mix of perceptions about AI and its role in the human world. “Some came away more worried, some less worried, but they were all interested and engaged. They had fun, and when they’re having fun, I’m having fun,” says Katz. “They go hand-in-hand.”
Fun may not be the most common measure of success for a college course, but it’s important to Katz and his students, who are more likely to dig deeper and keep coming back to consider more questions when they’re engaged and enjoying their work.
“Every new technology comes with questions about whether or not we should use it and how,” says Katz. Whether it’s a self-driving car or a speaker you can control with your voice, there are both practical and ethical questions that arise with the use of any technology. And that has been the case for as long as humans have been inventing new ways to do things.
Katz says when we think about our answers to those questions, that’s philosophy. Some of us do it only personally, others professionally, but the act of contemplating answers to life’s questions - big and small - is important. It’s how we figure out how to develop things in a way that’s safe and beneficial.
There’s a viral tweet making the rounds this month that asks, “Does philosophy make progress?” And then answers, cheekily, “Of course! We don’t understand far more than the Greeks ever could have imagined not understanding.”
Maybe the punchline is a bit on the nose, but it’s a sentiment we can all appreciate. No matter what questions life and innovation throw our way, the answers we arrive at through careful consideration will always be complex and often conflicting. That’s okay. The important part is that we keep asking.