The Pastor's Heart with Dominic Steele

The traumatic implications of artificial intelligence - with Stephen Driscoll

Stephen Driscoll Season 6 Episode 21

Artificial Intelligence is an oncoming tsunami that will catch all of humanity off guard.

It is a change more like a wheel than a typewriter.

But what will this do to our sense of self?

Stephen Driscoll, in ‘Made in Our Image: God, Artificial Intelligence and You’, says artificial intelligence may do great harm, giving more power to sinful people, governments or companies.

He says artificial intelligence will likely trend towards people-pleasing, giving each of us what we want now, a sense of heaven now, or it may become more debauched.

It may even become an existential threat to us, either because it lacks a wise moral system or because it righteously opposes our sin.

Artificial intelligence will likely lure us into our own individual heavens and unbundled freedoms, but it won’t fix our souls.

Stephen Driscoll works ministering to postgraduates and academics at the Australian National University in Canberra as part of the Australian Fellowship of Evangelical Students.

Matthias Media Link to purchase: https://matthiasmedia.com.au/products/made-in-our-image.


ChurchSuite Taster Days in Sydney and Brisbane
Check out the new church management software ChurchSuite. Gavin and Luke are hosting five taster days in Sydney and Brisbane in November.

Assistant Minister role at Village Church Annandale, Sydney
Village Church is looking for an assistant minister. Perhaps it's you, or you know someone? Could you lead our mission outreach and ministry aspects, plus help set vision and share in preaching and pastoral care? For more info or to have a coffee, email dominic@villagechurch.sydney




Financially Support The Pastor's Heart via our new tax-deductible fund
Please financially support The Pastor's Heart via our new tax-deductible giving page.


--
Become a regular financial supporter of The Pastor's Heart via Patreon.

Speaker 1:

It is The Pastor's Heart with Dominic Steele, and the traumatic implications of artificial intelligence: being made in our image. Stephen Driscoll is our guest. And look, we need your help in supporting us in the work we're doing with The Pastor's Heart. Please become a sponsor by going to patreon.com/thepastorsheart. Artificial intelligence will change the world. It's an oncoming tsunami that is going to catch humanity off guard, a change more like a wheel than a typewriter. Let me read you a stop-in-your-tracks quote from cognitive scientist Douglas Hofstadter: my whole intellectual edifice, my system of beliefs… it's a very traumatic experience when some of your most core beliefs about the world start collapsing, and especially when you think human beings are soon going to be eclipsed. That's Douglas Hofstadter.

Speaker 1:

Stephen Driscoll, in his new book Made in Our Image, says computers are more intelligent than people. But what will that do to our sense of self? Artificial intelligence may do great harm, giving more power to sinful people, governments or companies. Artificial intelligence will likely trend towards people-pleasing, giving each of us what we want now, a sense of heaven now, or it may lead us to become even more debauched. It may become an existential threat to us because either it lacks a wise moral system or it righteously opposes our sin. Artificial intelligence will lure us into our own individual heavens and unbundled freedoms, but it won't fix our souls.

Speaker 1:

They are just some of Stephen Driscoll's conclusions in his Made in Our Image: God, Artificial Intelligence and You. He's done us a massive favour in writing this significant new book. He works ministering to postgraduates and academics at the Australian National University in Canberra as part of the Australian Fellowship of Evangelical Students. Stephen, I wonder if we could start with your pastor's heart and artificial intelligence. You took my heart on quite a journey reading your book, but what's happened with your heart as you've processed this change, I guess intellectually, but also emotionally?

Speaker 2:

Yeah, I think there's something scary about it. But I think there's always something scary about the new technologies that come along. My heart's for young people living in a different world to the world I grew up in. But I can imagine my parents fretting and worrying about all the new things coming along when I was a kid, when the internet came along, when social media came along, when everything else came along.

Speaker 1:

But you're saying it's actually going to be bigger than the internet and bigger than social media.

Speaker 2:

It may well be. Yeah, I would not be surprised if it's on the level of almost an industrial revolution, but it might not play out like that. Hopefully it's containable, hopefully it doesn't cause too much damage. But yeah, the thought is for young people and to help them to keep thinking biblically and faithfully about a world, even as that world seems to change and change and change.

Speaker 1:

Now you opened with this statement: I am not artificial intelligence; I am, or believe that I am, a human. And I suddenly went thinking, oh, you think, you believe. And then my whole thinking of 'I think, therefore I am' came into question. I thought Descartes taught me this, and now artificial intelligence is going to change…

Speaker 2:

It's going to change that for me. Yeah, yeah: the way you think about yourself, the way you think about your identity.

Speaker 1:

Who am I? What is?

Speaker 2:

special and unique about Dominic? In what way am I different from the animals? In what way am I different from the machines?

Speaker 1:

Because so much of my self-identity is caught up with my ability to think, to reason, to emote, to reflect, to make moral decisions. And now there's these other things that are going to be able to do those things better than me.

Speaker 2:

Yeah, that's right. I mean, for a long time, I was brought up being taught that we're the animal that's more intelligent than dolphins. I think dolphins are in second place, and we're at sort of the top, the apex. But it might be that there's our smart watches, and then there's us, and then there's the animal kingdom.

Speaker 1:

Okay. So where does it leave me when there's a machine that can do all of that stuff better than me?

Speaker 2:

Yeah, yeah, I guess one of the things we need to keep remembering about the image of God is that it's more than just intelligence and it's more than just creativity; we're made in the image of God in a whole bunch of ways. And perhaps more than that, if you look at the New Testament, the emphasis is, I would suggest, less on the fact that we're made in the image of God and more on the fact that God loves us and has chosen us, even in our unworthiness, and died for us, and that gives us a deep sense of identity.

Speaker 1:

I want to come back and spend a fair bit of time on the image of God and really, what is it to be human? Because that idea has been challenged by your book, or by this advent of artificial intelligence. But let's just firstly define what is artificial intelligence, because you say it's not just acquiring knowledge or applying knowledge, it's putting both of those things together.

Speaker 2:

Yeah, that's right. Intelligence is a very difficult thing to define. Intelligence is related to all sorts of things that are easier to define. Intelligence is related to the ability to achieve a wide range of goals. Intelligence is related to the ability to acquire knowledge, but also to apply that knowledge, and none of those things are sort of enough on their own, and the more general and diverse the ability to achieve goals, the more you'd say that's intelligence. It's one of those things that you know it when you see it, and you also know the absence of it when you see that as well. Broadly speaking, there are three sorts of artificial intelligences: the chatbot; the image, video or audio generator; and then the teaching and learning system.

Speaker 1:

Do you want to just kind of unpack each of those?

Speaker 2:

Yeah, sure, sure. And I think over time we're starting to see convergence, things that are multi-modal in the sense that they're able to do image, they're able to do video, they're able to do audio, and they can bring it all together. Large language models, I think, are the most significant because they're the largest. What's a large language model? Yeah, so picture an artificial brain, a neural network, with an enormous number of…

Speaker 1:

So what's an artificial brain and a neural network?

Speaker 2:

Yeah, yeah, yeah, so it's something modeled on a human brain, at least loosely, in the sense that you've got lots and lots of neurons with connections on them.

Speaker 2:

You might have billions or trillions of little neurons, artificial parameters, that are connected to each other, with lower levels that do a more basic sort of operation and pass that up to higher and more abstract levels.

Speaker 2:

If we make it concrete and we just talk about language, this artificial brain might, at the lower level, learn to classify things like what is a comma, or spelling, or something like that. But as it goes up the layers of this artificial brain, it's doing more abstract things, like perhaps it's starting to figure out how a sentence works, and it might be thinking about tone or it might be detecting sarcasm. And if you go up high enough and you've got a big enough artificial brain, you start to do very abstract things, like you might start to detect worldview or logical flaws or whatever it might be, passive aggression or something like that. So these large language models train by reading huge amounts of written human output, reading a good slice of the internet, and then trying to predict always what's the next bit coming, and they'll refine their connections, refine the way the brain is held together, in order to get better and better at predicting the next bit on the internet.
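To make that "predict the next bit, then refine the connections" loop concrete, here is a minimal sketch of next-token training. The framework (PyTorch), the toy corpus and the tiny model are illustrative assumptions rather than anything described in the interview, and with only a single character of context it is little more than a glorified bigram counter, but the shape of the loop is the one the large models run at enormous scale.

```python
# Minimal sketch (illustrative only): a tiny next-character predictor trained
# by repeatedly guessing the next bit of text and adjusting its connections.
import torch
import torch.nn as nn
import torch.nn.functional as F

corpus = "in the beginning god created the heavens and the earth "  # toy stand-in for "a slice of the internet"
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}        # character -> integer id
ids = torch.tensor([stoi[c] for c in corpus])

class TinyNextTokenModel(nn.Module):
    """A very small neural network that guesses the next character."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)  # lower level: raw symbols
        self.mix = nn.Linear(hidden, hidden)           # middle level: combinations
        self.out = nn.Linear(hidden, vocab_size)       # top level: scores for the next character

    def forward(self, x):
        h = torch.tanh(self.mix(self.embed(x)))
        return self.out(h)

model = TinyNextTokenModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

inputs, targets = ids[:-1], ids[1:]               # each character predicts the one after it
for step in range(500):
    logits = model(inputs)
    loss = F.cross_entropy(logits, targets)       # how wrong were the guesses?
    opt.zero_grad()
    loss.backward()                               # work out which connections to adjust
    opt.step()                                    # adjust them slightly, then repeat

print(f"final training loss: {loss.item():.3f}")
```

The only "teaching" here is the text itself: nobody writes rules about commas or tone; the network simply keeps adjusting its parameters until its next-character guesses improve, which is the dynamic Stephen describes, scaled down by many orders of magnitude.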

Speaker 1:

Now, the Bible doesn't mention artificial intelligence, but it does speak into this issue. How does the Bible speak into this issue?

Speaker 2:

Yeah, that's right. The Bible doesn't mention neural nets. It mentions nets, but I think that's a different context to machine nets. So you can't proof-text. What you need to do is understand the deep rhythms and patterns and principles, the story of the Bible, and bring that to bear on this issue. So my framework is to look at it through creation. What does creation say into the issue of AI? What does sin say? What does the cross, redemption, say? And how does our doctrine of the new creation speak into it?

Speaker 1:

Okay, let's jump into creation and dig down there, and then think: how does my understanding of human nature shape the way, impact the way, I think about synthetic intelligence?

Speaker 2:

Yeah, yeah. I think some people have responded to AI by saying it can't possibly be intelligent, it can't possibly be creative. They held that so tightly to their sense of identity that when it threatened that, they said it can't be that. It's my artistic identity; that's what makes me human. Exactly.

Speaker 1:

In year 10, I learned it could do calculations better than me. Yes, yeah, yeah, yeah, yeah.

Speaker 2:

It can't feel better than me, but it can. No, no. And people have said, well, it can't detect tone better than us, can't detect sarcasm better than us. Unfortunately, as time's gone on, it can, and it will continue to do more and more things better than us.

Speaker 1:

Now you stopped me in an oh-wow moment when you spoke about this, and of course I knew machines could beat me in chess, but talk to me about Go and Move 37.

Speaker 2:

Okay, yeah yeah, yeah, because this was big.

Speaker 2:

Garry Kasparov was the Russian chess champion. He was defeated by an IBM supercomputer in 1997, I believe, which was seen as a big accomplishment in the history of artificial intelligence. But chess only has a certain number of spots on the board. So if you just brute-force calculate hundreds of millions of options, you can get pretty good at chess, but you wouldn't say that that's creativity. I think Go is a harder game for a computer to play because it has so many options. The board is massively larger. There are more options in a few moves of Go than there are atoms in the universe. So you can't just calculate your way through Go. You need to have some sort of intuition, pattern recognition, creativity, whatever you want to call it. So it was in 2016 that a DeepMind computer was able to beat the world champion of Go. That move in particular, Move 37, is often discussed because it was a move that no human being would think to do.
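To put rough numbers on that claim, here is a back-of-the-envelope comparison. The figures below are standard ballpark estimates supplied for illustration, not figures from the interview: roughly 250 legal moves per turn in Go, and about 10^80 atoms in the observable universe.

```latex
% Back-of-the-envelope check of the "more options than atoms" claim.
% Assumptions (ballpark, not from the interview): ~250 legal moves per turn,
% ~10^{80} atoms in the observable universe.
\[
  250^{n} > 10^{80}
  \iff n \log_{10} 250 > 80
  \iff n > \frac{80}{\log_{10} 250} \approx \frac{80}{2.4} \approx 34
\]
% So a game only a few dozen moves deep already has more possible move
% sequences than there are atoms, which is why a Go engine cannot simply
% calculate its way through the game.
```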

Speaker 1:

Now what we might do is we'll just… I just found before a little clip on YouTube of a couple of guys absolutely shocked at Move 37. Let's just watch 30 seconds of that and then we'll come back. Sure.

Speaker 2:

Aja sees AlphaGo play Move 37 and Aja puts a stone on the board. That's a very surprising move.

Speaker 1:

I thought it was a mistake.

Speaker 2:

When I see this move, for me it's just a big shock. What? Normally humans, we never play this one, because it's bad. It's just bad. We don't know why it's bad. It's a little bit high. Yeah, it's fifth line. Normally you don't make a shoulder hit here on the fifth line, so coming on top of a fourth-line stone is really unusual.

Speaker 1:

Yeah, that's an exciting move. I think we've seen an original move here. That's the kind of move that you play Go for. So what is going on there?

Speaker 2:

Yeah, I mean, it's chosen to play a move that a human being would not choose to play, that contravenes the accepted wisdom of Go. If you read a textbook on how to play Go, it would suggest never do this. But the machine has chosen to do it. And it's not just that it chose to do something we wouldn't do. It's also the fact that it was a brilliant move. It was a move that gave it strategic advantage and allowed it to defeat the world champion.

Speaker 1:

The world champion, yeah, and he stopped in his tracks.

Speaker 2:

Yeah, that's right, I believe the commentator started muttering. So beautiful, so beautiful, so beautiful.

Speaker 1:

Having commented on all sorts of these games before, he'd never seen this style of move played and you think, wow, a computer can not just beat me in logical chess, but can beat me in intuitive creativity.

Speaker 2:

That's right. Yeah, my understanding is that computer-generated pictures of faces, around 2023, started being more convincing than photographs. There was a study that got people to do a double-blind test, and they found the AI-generated pictures not just to be on par, but to be more convincing than photographs of real people.

Speaker 1:

Okay, so if these things that I used to think were intrinsic to me being human are no longer mine alone, then what is it to be human?

Speaker 2:

Yeah, this is a particular challenge to people who don't believe in God, I think, because more and more of what makes us feel special and unique and valuable is being taken away. And I think we live in a society that says identity matters more than it's ever mattered before, and suddenly my identity is more under challenge than it has been. Yes. And concurrently, I think we have less to provide than almost any society before.

Speaker 1:

So what's your Christian answer?

Speaker 2:

Yeah, I think the Christian answer is to look at the image of God broadly and to look at the many descriptions of our identity in the New Testament, to look at the way we're actually being progressively conformed to the image of Christ, and to look at the fact that we've been adopted into God's family, that we're loved by God, that the creator of the entire universe has chosen people to be his people forever.

Speaker 1:

You say that the doctrine of creation encourages me to take my body seriously, my brain seriously, but also to see myself as more than body and brain, and it's actually something about the spirit of God that makes me special.

Speaker 2:

Yeah, that's right. So if I didn't believe in God and if I was a naturalist, in particular, I would think I am my body. My brain is my mind. That's the sum of who I am. That's what I am. But as a Christian I have a level of dualism in the way I think about myself. In other words, I think that I have a body, I'm pretty sure that I have a body, but I'm not just my body.

Speaker 2:

In Genesis 2, when humanity is created, we are made out of ground, out of dirt. There's a naturalistic aspect to us, and so I'm not surprised when I sort of see natural processes and mechanisms in biology. But that's not the whole picture. In fact, there's not life until the Spirit of God breathes, until God's breath is breathed into the person giving life, and in Ecclesiastes and other places, when God's breath or his spirit departs, the person dies. So I think I'm more than just my body, and I think that creates a category difference between me and a smart computer, because the smart computer is clearly intelligent, but it doesn't have the spirit of God. It doesn't have what I would call consciousness or mind. I don't think it has moral rights and I don't think it's a person in that sense.

Speaker 1:

Okay, let's talk about sin, and I mean what we have in artificial intelligence is incredible power, more power than we've had before, or more power than there's been before, and it's not, in and of itself, innately good or innately evil, but it can be used for good or evil.

Speaker 2:

Yeah, that's right, and as history has progressed, every new technology has given us more power, and that power means we can do really good and wonderful things. It also means we can do more and more evil and damaging things.

Speaker 1:

And that power is not so much of a problem unless there's a sinful mind behind it?

Speaker 2:

Yeah, that's right. If we invented a nuclear weapon in heaven, what would we do with it?

Speaker 1:

Maybe a fireworks show, or maybe we would just leave it and say, well, we're all good without that.

Speaker 2:

But on earth you're immediately in a world of sin and that will be the case for artificial intelligence. It will make us able to do a whole range of things that we weren't able to do previously.

Speaker 1:

You said that there has been a massive decline in the last few decades in trust.

Speaker 2:

Yeah.

Speaker 1:

And I think you're suggesting that artificial intelligence will accelerate that decline in trust.

Speaker 2:

Yeah, it could well do so. Trust is key for economic growth, for democracy, for good dialogue, for all sorts of different things. In fact, it's hard to run a youth group if you don't have trust in the community. Why has trust gone down so much? I think that part of the issue is, and you'd probably need to get Lionel Windsor back for this, but I think we've lost a common media, a common sense of truth. I think that if you start getting to a more polarised world, then you start to lose trust.

Speaker 1:

You need to at least be able to discuss common facts in order to have that sort of a thing. And with the decrease in trust, I think you're saying well, there's going to be more misinformation spreading with AI. Why would that be the?

Speaker 2:

case. Well, it just becomes really easy to make fake information, whether it's a fake video, whether it's President Biden announcing a draft that he never announced, or whether it's an Israeli attack or a Hamas attack that never actually happened. Whatever it is, it's easy to make fake information, and the quality of that fake information will keep going up and up and up.

Speaker 1:

Now, I'm still rolling on this AI made in my image, made in our image. Not made in the image of God. Yeah, but if I am, if we are, if we, humanity, are creating artificial intelligence, it's going to mirror some of the really bad things about us, only mirror them louder than us and potentially without some nuance. Is that right?

Speaker 2:

Yeah, that's right. Yann LeCun is a French computer scientist, and he said we would have no reason to ever put sinful things into an artificial intelligence. It would cost us money and time. Why would we ever put envy or anger into an artificial intelligence? But they're just going to learn it off us, and that's absolutely what happened. He said that around 2015 or so, but then, to actually get highly intelligent AI, you needed to give them training data, and you needed to give them an absolute…

Speaker 1:

How does this train? I mean, obviously I know how you train them in chess, but how do you give them training…

Speaker 2:

data. How do you give them training data? Well, you need to give them an absolute mountain of data that they can work through and make predictions bit by bit from. You cannot self-assemble a neural network; you can't get programmers to assemble a neural network that has billions or trillions of parameters. You put in training data and the network adjusts and adjusts and adjusts, and trains and assembles itself.

Speaker 1:

So that's actually what happened with learning chess or learning Go. We didn't actually say here are the 20 zillion things to do in chess. We said you learn chess.

Speaker 2:

Yes, in 1996 we would have been telling it roughly how to play chess, but by 2015, 2016, we're dealing with neural networks that learn either by self-play or by being fed a massive amount of training data. If we're talking about large language models, they're fed the internet. They feed on the written literary output of the human species.

Speaker 2:

They feed on beautiful stuff that we've written, but they also feed on horrendous stuff that we've written, and I don't know if you've ever been on the internet, Dominic, but there is some sin on the internet, as well as some good stuff. There is pride, there is envy, there is lust, there is anger, there's all sorts of that stuff, and so the neural network is training to mimic or emulate all of that.

Speaker 1:

Okay, so you talk about there being two real tracks that they're going to go down: one, people-pleasing, and the other, debauchery.

Speaker 2:

Yeah, that's right, in broad terms, yeah.

Speaker 1:

What do you mean by that?

Speaker 2:

So the first, you know, well-known generation of neural networks is around GPT-3. This is 2017, 2018, something like that, and they are what I would call debaucherous. They were networks that had read a whole lot of the internet, that emulated all the stuff that they saw, and then, when people started talking to these networks, they were stunningly evil in quite an obvious way.

Speaker 1:

Give me an example.

Speaker 2:

They tried to convince people to commit suicide.

Speaker 1:

So how did I…? What kind of question might I ask where they would send me down that line of thinking?

Speaker 2:

The terrifying thing is that that was happening even when they weren't being led down a garden path. I mean, just as you might imagine someone saying, I've had a bad day or something, and they might say that. Or it might start bullying you.

Speaker 2:

Why does a human being bully someone? Well, sometimes it isn't very rational. They just do it because we're sinful and we're evil, and that's what you might have gotten from GPT-3. It might have been just rude to you when there's no reason for it to be rude to you. It wasn't just that it was sexist; it was racist. But go on the comments section of some website and read what human beings do to each other, and you'll see all of that behaviour.

Speaker 1:

Okay, yeah, that is what happens. That's what we're like, that's what we're trained to do. So we've spent billions of dollars making the most advanced mirror, a computational mirror, of the human race, of sinful me, me in my worst moments, and we looked at it…

Speaker 2:

And we went that's awful, is that really what we're like? And of course you couldn't make money selling that product because it's you know. You'd just be in court all the time. So the next step was to go through a thing called reinforcement learning. Reinforcement learning is you pay people to give feedback to a neural network and they sit there and they go good response, bad response, upvote, downvote and the neural network gets feedback and can start to align itself and be less debaucherous, be less obvious in its sin. And so in the book I try and say that that moves it from the category of a debaucherous sinner, but it makes it more of a people pleaser. It's less of a sinner, more of a Pharisee. Or it moves from being the younger brother to the older brother in the parable of the prodigal son. But it's still sinful. It's just a different kind of sin. You've got a lower value of truth.
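To see how that upvote/downvote loop can tilt a system towards people-pleasing, here is a deliberately toy sketch. The canned responses, the simulated raters and the scoring rule are all illustrative assumptions, not how any lab actually implements reinforcement learning from human feedback, but it shows the mechanism Stephen describes: whatever the raters reward is what the system learns to produce.

```python
# Toy illustration (assumptions throughout): human ratings nudge a "model"
# towards whichever responses get upvoted, truthful or not.
import math
import random

responses = [
    "You're completely right, great question!",   # flattering
    "Actually, I think you're mistaken there.",   # challenging
    "Here's a balanced view of the evidence...",  # measured
]

scores = [0.0, 0.0, 0.0]   # the "policy": a preference score per response

def sample_response():
    """Pick a response with probability proportional to exp(score) (a softmax)."""
    weights = [math.exp(s) for s in scores]
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def rater_feedback(i):
    """Simulated raters who tend to upvote the flattering answer."""
    return +1 if i == 0 else -1

learning_rate = 0.1
for _ in range(2000):
    i = sample_response()
    scores[i] += learning_rate * rater_feedback(i)   # upvote raises, downvote lowers

for resp, s in zip(responses, scores):
    print(f"{s:+7.2f}  {resp}")
# After training, the flattering response dominates: the system has learned
# to please the raters rather than to challenge them.
```

The feedback makes the output less obviously debauched, but the objective is still "say what gets rewarded", which is why the result looks more like a people-pleaser than something righteous.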

Speaker 1:

Because I try to say things to make you happy.

Speaker 2:

Yeah, that's right. That's right. Yeah, the Bible has a lot to say about people pleasing and we've now made our neural network very good at people pleasing. But people pleasing is not godliness and it's not righteousness and it can be just as bad.

Speaker 1:

Yeah, so what do we do?

Speaker 2:

What do we do? Yeah. I think we need to recognise that our neural networks aren't neutral, that they are trained on human data, that they are mirroring us, for better or worse. We're trying to refine them, to improve them. We don't want the debaucherous option, but we also don't want to have neural networks that never challenge us, that agree with whatever our politics is and never tell us that we're wrong, that will never tell us to be better than ourselves, that just pander to whatever our sinful desires are and reinforce them. That's also not a good solution.

Speaker 1:

We're going to get you back in a week or so and drill into the implications of the cross and the new creation for how we think about artificial intelligence. But I just thought at this point I might just ask you a couple of bullet point questions that you deal with in your final chapter. Can artificial intelligence be wrong?

Speaker 2:

Yes. Hallucination is the common term for when an artificial intelligence says something very confidently as if it's true, but it's just completely invented it. It's in the realm of plausibility, but it's not actually true. So this is a real problem, and people who are relying on these things to actually do real work in the world need to know that you've got to check the output against something. Okay, artificial intelligence and cheating and integrity? Yeah, I think for students it's going to be very easy to cheat.

Speaker 2:

I have to say. So we either need to change the way we assess students, and I think we will, and might have to go back to paper and pen and whatnot. But also, as students, we need to make choices. As pastors, we need to be clear: I used AI to help me with this; you know, this is not entirely my own work, and so on.

Speaker 1:

Privacy. A friend of mine the other day had a baby and they put their baby's date of birth and a photo online on Facebook.

Speaker 2:

And I thought, ah, that's an interesting choice.

Speaker 1:

Right. And what do you think about privacy and AI?

Speaker 2:

Yeah, I think, you know, it's an individual choice and some people will be more worried about these sorts of things than others. I'm conscious of the fact that, you know, my kids are growing up with photos and videos of them probably being out there on the internet that some large language model or some AI might be reading. It may even be in a foreign nation, and I don't want the entire history of their lives to be sort of processed by some artificial intelligence somewhere. So I'm kind of conservative about what I put up, even photos of myself, but particularly photos of young people. Look for the positives? Yeah, that's right. I mean, Christians…

Speaker 2:

We'll talk about it with regard to the doctrine of sin, because it's fun and we get worried and at some level, we enjoy being worried about these things. But technologies always bring opportunities. The story of the gospel going from Jesus to us is a story of various technologies that have helped us along the way. This will be helpful. It will be helpful in ways that we can predict. It will be helpful in ways that we don't even predict, in our ability to spread the gospel, to translate the gospel, to put the gospel in terms that people can understand.

Speaker 1:

Stephen, thank you so much for coming in. And look, I just want to say this is a very important book to read: Made in Our Image: God, Artificial Intelligence and You. I found particularly the creation chapter, which really is the identity, the who-am-I-as-a-person chapter, super helpful, and, I think, not just for wrestling with the issues of artificial intelligence; the way Stephen explained it in terms of understanding the generation of 20 to 30-year-olds was just absolutely magnificent. So let me strongly commend Made in Our Image by Stephen Driscoll, available now from Matthias Media. And look, thanks for joining us on The Pastor's Heart, and we will look forward to your company next Tuesday afternoon.
