AI, The Future, and What it Means to be Human

Someone recently forwarded me this podcast episode on music and the emergence of AI, reflecting on it from a philosophical perspective. It hit on some things I’ve been observing and thinking about lately.

The discussion relating to music is, of course, not dissimilar to discussions going on in film. A first point of interest from the podcast episode is its reflection on an increasingly AI-saturated future driven not by concerns about artistic merit, but by copyrights and lawsuits. What is currently not owned will become owned, and what is currently a benign playground for select groups (think podcasters who need intro music, or social media influencers needing material) will gradually shift to corporations as copyright law adjusts to cover distinctives like tonal inflection and appearance.

And here is what is interesting. The future is being paved by generations who have been trained to see art itself as content. The questions that concern those driving today’s discussions about AI will, in theory, continue to shift. It’s worth asking in what ways, which questions are worth preserving, and which are no longer, or will no longer be, relevant. And we should recognize that we ask this in the midst of some undefinable fears and anxieties that are both uniform in their expression and culturally divisive (think non-religious and religious responses).

It is not unusual for such fears and anxieties to surface in relation to a new technology. In some ways the emergence of ChatGPT simply repeats the past: a new technology we don’t quite understand. But if the rise of the smartphone has taught us anything, it’s that this is more than simply a matter of generational change. It’s a fundamental change in how we relate to information, to change, and to human function. The line between humans as the driving force of technology and technology driving human function is, and has been, blurred.

What I have noticed from this latest iteration is the following:
People having these conversations tend to be quickly dismissive of AI as not being human but rather dependent on human activity and the technology’s creators. I think this remains ignorant, though, of just how entrenched technology is within the human experience. It reeks of an appeal to blind human exceptionalism, and what’s ironic is that so many of these voices would readily dismiss something like religion on the basis of its view of humanity being made in the image of God. Yet here we find responses to AI being made on the basis of this same concern for image. It cannot be taken seriously, they say, because it is not human.

The fear, then, seems to be the implications of AI being more human than we are comfortable with. But let’s ask the question: what makes AI less than human, or what makes humans more human? That might be boiled down to the following:
– the ability to respond to one’s environment. Biologically speaking, human distinction emerges from this responsiveness on a cognitive level, but if consciousness is understood to be a product of material function, there is no reason to believe that AI could not eventually replicate this.

– the ability to experience suffering. Although all life forms suffer, humanity is understood to experience suffering on the level of awareness. And yet what this again bypasses is the biological root of suffering. If suffering is merely about awareness of adversity (we are aware, therefore we feel pain), then awareness can be contextualized within the experience of AI. Also completely bypassed in these discussions is the fact that so much of this technology (and medication) is geared towards eliminating suffering by affecting our awareness of it.

– togetherness. This is a point the podcast brings up, and it remains the most compelling one. It’s not about whether AI can create genuinely interesting art (it can, it will, and it already has been doing so within popular art); it is whether it can, or can even desire to, participate in community. In truth, people aren’t dismissive of AI-made art based on what it can and will be able to contribute. They are dismissive because they are aware that something is AI.

But here is the thing. Many people know the mechanics of a materialistic approach to human function. On that basis there is no reason to fear AI. And yet these same people would be deeply affected by losing their sense of human exceptionalism. Could it be that the same sort of willful ignorance that allows them to go on living in the face of the facts about what being human actually is and means is the willful ignorance they are employing to deal with AI? And what if we applied that to a world shaped by smartphones? What happens when they can no longer ignore the facts about AI’s continued emergence? Will this result in another crisis of meaning, as we’ve seen become so pervasive in the realms of philosophy and psychology over the last 20 years?

Published by davetcourt

I am a 40-something Canadian with a passion for theology, film, reading, writing and travel.
