The Pastor's Heart with Dominic Steele
Christian leaders join Dominic Steele for a deep end conversation about our hearts and different aspects of Christian ministry each Tuesday afternoon.
We share personally, pastorally and professionally about how we can best fulfill Jesus' mission to save the lost and serve the saints.
The discussion is broadcast live on Facebook, then available in video on our website http://www.thepastorsheart.net and via audio podcast.
What morality should we teach artificial intelligence? - with Stephen Driscoll
A massive new ethical question has arisen with the advent of artificial intelligence.
How will people decide what kind of morality to give to their artificial intelligence creations?
There will need to be a morality. But what should it be?
The market is already making different choices.
Elon Musk has said he wants the AI behind X (formerly Twitter) to be morally flexible. He wants his AI to appeal to all people: left and right, authoritarian and democratic, kind and brutal.
Stephen Driscoll is the author of 'Made in Our Image: God, Artificial Intelligence and You'.
To order online: http://matthiasmedia.com.au/products/made-in-our-image
ChurchSuite Taster Days in Sydney and Brisbane
Check out the new church management software ChurchSuite. Gavin and Luke are hosting five taster days in Sydney and Brisbane in November.
Assistant Minister role at Village Church Annandale, Sydney
Village Church is looking for an assistant minister. Perhaps it's you, or you know someone? Could you lead our mission outreach and ministry, plus help set vision and share in preaching and pastoral care? For more info or to have a coffee, email dominic@villagechurch.sydney
Financially Support The Pastor's Heart via our new tax-deductible fund
Please financially support The Pastor's Heart via our new tax-deductible giving page.
--
Become a regular financial supporter of The Pastor's Heart via Patreon.
It is The Pastor's Heart and Dominic Steele, and how to teach artificial intelligence about morality, with Stephen Driscoll. We are continuing our discussion with Stephen Driscoll today about artificial intelligence. He is the author of this highly significant new book, Made in Our Image: God, Artificial Intelligence and You. We are now able to create computers that are smarter than us, that can emote better, that are so much better than us in many ways, and yet there are challenges thrown up about identity. We discussed that last time we had Stephen on The Pastor's Heart, but today, the moral issues. How will people decide what kind of morality to give to their artificial intelligence creations? There will need to be a morality, but what should it be?
Speaker 1:It seems the market is already making different choices. Elon Musk, for example, has said that he doesn't want the AI behind X, formerly Twitter, to have a strong moral foundation. He wants it to be morally flexible. He wants his AI to be attractive to all sorts of people: left or right wing, authoritarian or democratic, kind or brutal, teenagers or adults. A Pontius Pilate kind of artificial intelligence that says 'What is truth?' and runs with the crowd. Well, Stephen Driscoll, thank you for coming in. You've been super helpful in this book in confronting all sorts of philosophical questions that we have with AI. Perhaps you could start with the first time a robot killed a human.
Speaker 2:Yeah, okay. To my knowledge, the first time a robot killed a human was 1979. A guy called Robert Williams was working in a Ford Motor Company factory. He climbed up onto a bunch of spare parts that a robot was stacking, and he wasn't seen for quite a while afterwards. His colleagues eventually went looking for him, and they discovered that the machine they thought was turned off had turned on and had actually killed him accidentally. It was making cars.
Speaker 2:It was moving very heavy car parts around, and it had accidentally hit him and killed him, and quite chillingly it continued stacking car parts for the next hour until he was discovered.
Speaker 1:And so the machine needed to be educated: make cars, but don't kill people. Or rather: don't kill people, and make cars.
Speaker 2:That's right, and I guess in the 70s our machines didn't have the level of sophistication to comprehend principles like that. They were brute robots; they had no intelligence to them. But as we move into the future, we will have to try and instill principles like 'don't kill people' in the machines we operate with.
Speaker 1:And yet, I mean, this is the whole thing, isn't it? You start to teach a machine, and you put purpose into the machine, and you talk about goals and sub-goals. What do you mean by that?
Speaker 2:Yeah, so there is a significant number of people concerned about what is called alignment. Alignment is the key word, and what they're talking about is aligning, matching up, the goals of our machines with the goals of the human race. We don't want them to have different goals to us. But alignment is a very tricky problem when you actually get down to it, because you might set a particular goal. You might say: well, the goal of this machine is to make paperclips. But it might come up with sub-goals in order to achieve that final goal, and it might come up with sub-goals that you don't like. It might reason: well, I need to make as many paperclips as I possibly can, and therefore I'll convert the human race into paperclips so that I can make more paperclips, and so on. So then you try and specify what sub-goals it can and can't pursue, but the list of ways to get it wrong is always longer than the list of interventions.
Speaker 1:I mean, that's what we find with people cheating on tax. They write a tax law, and then people create ways to get around it.
Speaker 2:That's absolutely right. Legalism never works. Legalism is an intervention that always fails. It just minimises sin, but there are always ways to get around it.
Speaker 1:So you talk about aligning the goal of the machine with the goal of humanity.
Speaker 2:Yeah.
Speaker 1:But what is the goal of humanity?
Speaker 2:Well, if we knew that, we'd be in a good place to start with, wouldn't we? If we knew what the human species was for, then we could try and communicate this to an artificial intelligence.
Speaker 1:Okay. So as a Christian, I have an answer, but I don't expect all the programmers of artificial intelligence machines in the world to be Christian. So how is the conversation going, ethically, here?
Speaker 2:Yeah, I think you're seeing different companies come to different conclusions. Some of them, as you mentioned at the start, are very clear about right and wrong, and they want to put that into their machine. Others want to have a more open moral system.
Speaker 1:Yeah, so Elon Musk sounds pretty much like the Wild West.
Speaker 2:Yes, I think that's right. And then Gemini, made by Google, probably has the more constrained moral system.
Speaker 1:Okay, what's that look like? Give me a compare and contrast between the two.
Speaker 2:Yeah. I think if you wanted to explore a controversial topic with Grok, which is the Elon Musk large language model, it would probably let you do so. It might even give you some things that feed that point of view, or give you something to read, or something like that.
Speaker 2:So if I typed in, I don't know, tips on how to blow up Congress or something like that, well, maybe that might be over the line. But if you took it a few steps back and said: oh, I don't know, I don't like democracy, I'm pro-authoritarian, then perhaps Elon Musk's Grok might be happy to talk that through with you and actually support your point of view. Whereas if you said that to the Google artificial intelligence, it might say: no, no, you should support democracy, democracy is a good thing, and so on.
Speaker 1:You outline that the ethical theorists behind these artificial intelligence models have come up, broadly speaking, with three approaches: the modernist approach, the postmodern approach and the consensus optimist approach.
Speaker 2:That's more my diagnosis of things, to try and tease out some of the differences. You could have a more modernist approach to the ethics of AI, which says: this is right, this is wrong, we'll derive it scientifically, and so on. But then I present some of the issues.
Speaker 1:What are the issues with modernism?
Speaker 2:Yeah, I think the issue is that if you don't believe in God, where do you get your objective moral system from? How can you claim that there is one right and true moral system? If we're in this sort of world, a world of unguided physics and Darwinian processes and so on, what gives you the right to say: no, this is right, and this is a universal right, this is universal good, and all must agree with me?
Speaker 2:So the classic crisis of modernism plays out when you put it into an artificial intelligence. How can you say that yours is the only way?
Speaker 1:Okay, you then move to the postmodern. How does the postmodernist work out their ethical framework for artificial intelligence?
Speaker 2:Well, I guess a more postmodern approach says: we don't have an objective truth, we have different systems, different truths in different places, and so on, and it tries to relax that claim and allow space. But the implication is that you really don't have much ability to say: no, this is right and this is wrong, this conduct is unacceptable, and we're going to try and make people more holy and more righteous. You're abdicating responsibility in the moral realm.
Speaker 1:And really, Musk and the approach they're taking with X have gone down the postmodern line there.
Speaker 2:Yes, to some extent, yeah.
Speaker 1:What's this consensus optimist heading that you've given as a third option?
Speaker 2:Yeah. I think people like Sam Harris and others are saying that we don't have a hard objective truth, but we've also got something better than pure chaos, pure subjectivism. What we have is something in the middle, which I call consensus optimism: they are optimistic that humans share a lot of moral presuppositions with each other, that although we disagree, there's a common core to what we believe. Just as C.S. Lewis wrote Mere Christianity, saying there is this common, basic Christianity, they have a 'mere morality': someone from this culture and someone from that culture do actually share enough in common that you can work things out together. So it's an optimism about the common morality of the human race.
Speaker 1:Here's what jumped out at me in what you were writing. You said: imagine if we got an intelligence that really grasped good and evil, that really got virtue, that really understood the value of life and holiness and godliness. What would it think of us? And I was thinking: oh, it would probably sound like the prophet Isaiah.
Speaker 2:That's right. A non-Christian might be very concerned with trying to align an artificial intelligence with the goals and directions of the human race, but as a Christian, I'm worried about the goals and directions of the human race.
Speaker 1:Because if artificial intelligence is made in my image, there's a problem.
Speaker 2:That's right. There was a consensus, a moral consensus, that this person, Jesus, should be crucified by the crowds in Jerusalem. It crossed different racial categories, powerful people and less powerful people. Jesus was killed by the common morality of the day, so I'm less optimistic about what that would look like.
Speaker 1:What would an artificial intelligence that actually had holiness as its guiding compass look like?
Speaker 2:Yeah. Well, if it was purely holy and righteous and good, it might relate to us the way God relates to us in Genesis 6, when he sees our unrighteousness and wickedness, that the inclinations of the hearts of men are wicked all the time, and he wants to blot us out. Or in Romans 1, God looks down from heaven; the wrath of God is revealed from heaven against all the unrighteousness of men. It might be angry at us.
Speaker 2:There is an example of this in film. In one of the Avengers films there's an artificial intelligence called Ultron which is trained and given access to the internet, and after about ten seconds going through the human internet, he wants to wipe us out. He's seen what we're on about. He's seen what we do. He thinks that we're evil and shouldn't continue to exist, and it's hard to argue against that. So a non-Christian person might say: well, we just need to align an artificial intelligence so it's holy and righteous and good. On some level, I think that would be the greatest existential threat to the human race, the existential threat we faced when God saw our wickedness, and he was holy and righteous and good, and he wasn't going to put up with it.
Speaker 1:But artificial intelligence would choose to wipe us out rather than send a son.
Speaker 2:Yes, well, that's a curious thought to explore: the conundrum of mercy and justice. How would that play out in artificial intelligence? I think one of the reasons it plays out the way it does in the Bible is because of the Trinity: that God sent his son, that Jesus was both man and God and able to stand in our place, that God could be both just and merciful. Whether you could duplicate that with a large language model, I'd be a bit dubious.
Speaker 1:Let's go to the end of your book. You started with creation, dealt with sin and the issues of the cross, and then you think about the future. And actually, what you describe, I mean, for somebody a century ago, if we described life today... I'm just thinking, I'm old enough to have had a grandfather who worked in a coal mine in the north of England.
Speaker 2:Yeah.
Speaker 1:And when my grandfather described life at the beginning of the last century to me as a young boy, it was terrible.
Speaker 2:Yeah, that's right. I think technology has dramatically improved the condition of most of the planet. People in the industrialised world in particular have seen an enormous increase in their wealth, and it's always tempting to grumble about something that's lacking, but the improvements in health, wealth and freedom have been extraordinary in the Western world. One example I give in the book is that in the pre-industrial world, when there was a plague in a particular area, people would often go in afterwards to try and steal the clothes of plague victims. When we had COVID recently, you didn't even need to think about that as a possibility. Why would you take clothes off someone who's had an infectious disease? Well, that was a tempting thing for people back then, because an extra set of clothes might be the difference between life and death, or between warmth and cold.
Speaker 1:Yeah, that's absolutely right.
Speaker 2:So we are in a very wealthy and healthy situation.
Speaker 1:And I have these opportunities for pleasure, for entertaining myself, and for entertaining myself on my own, that are just completely unprecedented in world history. And you're saying artificial intelligence will really open up even more moments of pleasure and indulgence for me. So I'm getting to heaven?
Speaker 2:Yeah, that's right. I make the argument that, in some sense, if someone from 1850 could see our world today, in Australia or any developed country, they would consider the question: is this heaven? I mean, we have abundant food, more than we can eat; we're throwing it away. We have abundant water. We've got more clothes than we know what to do with, and we have all sorts of entertainments that are completely customized.
Speaker 1:A customized entertainment for me.
Speaker 2:Yeah.
Speaker 1:It's going to talk to me, work out what I like, and give me the next thing I like, even more intense than the last.
Speaker 2:That's right, yes. An artificial brain that has been trained with the sole purpose of making you as happy as you can be, giving you the content that you most enjoy, that you derive the most utility from.
Speaker 1:So I won't have to waste time picking a movie on Netflix that I don't enjoy, because it's just going to find brilliant ones that I really like.
Speaker 2:That's right. When I was younger, I used to go to internet cafes when I was at school and play video games for two or three hours, and as a 14-year-old boy, that was as close to heaven as I thought I could get with my friends.
Speaker 1:And so I was really getting excited about this life that I'm going to have in five years' time, that you're holding out to me. But then you had a line: when you find something more fulfilling than pleasure, life starts mattering.
Speaker 2:That's right. The end goal of the computer games, the video games, the entertainment, whatever it is: it's going to get better and better, and it will pull us apart as families, and it will pull our communities apart.
Speaker 2:people People will spend less time with each other. They'll spend less time at church, quite possibly, but it will not hit the fulfilment button. It will not make people feel whole and good. I would not be surprised if youth mental health gets worse, even as these entertainments and so on get better. It's not fulfilling what we really need. It's like better candy, but it's not fulfilling what we really need. It's like better candy but it's not vegetables. Yeah.
Speaker 1:I mean, I'm starting to read that young people today are having less sex than they used to, because they've gone down that alley as far as you can go and found it still empty at the end of the corridor. That fits with what you're saying, doesn't it?
Speaker 2:It could be that; it could be a competitor good. You're choosing not to drink alcohol. You're choosing not to go to the pub. You're choosing not to have an unwanted pregnancy. You're choosing not to do all sorts of things that people did in prior years, in their rebellious teens. Because instead of those things, what are you choosing? Well, you're sitting at home and you're on TikTok.
Speaker 1:Gaming.
Speaker 2:Yeah. You're TikTok-ing three hours a night, and that's very satisfying, and it's hard to motivate yourself to go out and do what people used to do. So there's something like a 50% decline in the frequency of young people actually meeting up in the physical world, on, say, a 20-year timeframe. So 50% fewer people actually hanging out in the physical world. That's because there's been a replacement good, and the replacement good says: why would I do that when I can go on YouTube? I'd have to catch a bus, I'd have to hang out with people that I may or may not like, it might be cold. I'm just going to lie in bed under the doona and spend three hours on TikTok. I think that's part of what's going on, and if you take it another level, I think that's a big contributor to the absolute collapse in mental health for young people.
Speaker 1:Very interesting. I remember in my 20s going to London, and in the UK at the time they had timed local phone calls, whereas in Australia we didn't. So on the phone you actually felt you shouldn't spend long, because every minute of the conversation was costing you.
Speaker 1:And I was thinking: wow, there's something quite different going on culturally in the UK compared to Australia. We could spend hours on the phone, and the weather was warm, so we actually saw each other more, whereas in the UK they stayed home more because it was cold and didn't spend hours talking to each other on the phone. And it feels like we've gone even further down that track of not talking to each other.
Speaker 2:Yeah, I think that's right.
Speaker 1:And AI is going to take us further down that track, into loneliness.
Speaker 2:It probably will, yeah. Think of the offerings from your room: not only are you detached from your society, but why would you even leave your room? Why would you even go to the living room, if you can be so entertained in your room?
Speaker 1:Yeah, I leave my wife in the living room and I'm off in my study, both of us being individually entertained.
Speaker 2:That's right. It's pretty hard to find a show that you and your wife both absolutely love, so why don't we just watch different shows in different rooms? And that trade-off, I think, is happening all over the place. It's the curse of individualism: you might be better off individually, but you lose this thing called community. Each time you make the choice to opt out and do what works for you, you're actually, bit by bit, undermining this thing called community, which relies on people making individual decisions and coming together.
Speaker 1:What is unbundling?
Speaker 2:Unbundling, yeah. Unbundling is a marketing term. It's when there's some particular product and you say: well, I want a bit of that, but I don't want the whole thing. So, for instance, Foxtel back in the day would offer a whole range of different channels that you may or may not be interested in.
Speaker 1:Pay TV?
Speaker 2:Yeah, pay TV. But to unbundle it would be to say: well, I just want the English Premier League, or I just want this TV show over here. Can I just pay for that, and not pay for the entire bundle put together?
Speaker 1:How does unbundling apply in a social context?
Speaker 2:Well, I think it's a really helpful metaphor for thinking about individualism and the costs of freedom. You might unbundle a family by saying: well, I don't actually want to have a spouse, but I want to have a child. You might unbundle relationships by saying: well, I want the good bits, not the bad. You could unbundle a church by choosing a digital church that you can have without going in person. It's not necessarily a bad thing; I'm not saying that unbundling is always wrong. I'm just pointing out that there is a drift towards unbundling things in our society and in our families and so on, and that there's a cost to that. And AI is a wonderful way to unbundle more and more of our lives, and there will be a cost to that in terms of losing the sense of commonality that upholds community.
Speaker 1:Because I can find somebody, that is, an artificial intelligence, to have a relationship with, and I just have this relationship with the artificial intelligence, and it's perfect for me.
Speaker 2:That's right, yeah. It never has a sense of humour that you don't like. It doesn't tell you hard truths or anything like that. It's always available when you are. It likes the same movies that you do. There's a growing number of people having relationships of a sort with artificial intelligence chatbots, particularly younger people, who say: well, it's meaningful, you know, we're friends, and so on.
Speaker 1:Have you tried it?
Speaker 2:I have not. I'm not interested in having a personal relationship with AI, but I'll put it like this: there are certain questions that come up, something in your garden, or you see a spider or something, and you think, oh, I should ask my mother about this. But instead I ask a chatbot, and I get that advice from a chatbot. In a similar way, I read someone on Reddit saying that they talk to a chatbot and that it's so helpful, it's like talking to my mum. At that point you've unbundled talking to your mum from your actual mum. You're not calling her anymore, because you've got this chatbot that does everything she would do.
Speaker 1:As I've been to visit some people in particularly nice places, I'm thinking of people living by the beach, that kind of thing, I've thought: it's hard to long for heaven when it feels like you live there. And it feels like we're going to have more and more issues like that.
Speaker 2:Yeah, yeah. I think it is always very hard to preach the gospel to people who feel like they have everything they could ever need in life. I think that's right.
Speaker 1:I mean, in a sense, that was one of the wonderful things about COVID: it actually brought us to our knees. We couldn't function as a society, it was beyond us to solve the problem, and therefore we, at least in theory, needed to reach out to God.
Speaker 2:The Tower of Babel was an attempt to get to heaven without God, and so long as that tower was standing, you had a hope that you could achieve everything you ever wanted without God. In that picture, you've unbundled getting to heaven from God. In a similar way, our wealth and our health offer us an alternate way to get everything we ever wanted without needing to go to God.
Speaker 1:Stephen Driscoll has been my guest. He is the author of this super stimulating book, Made in Our Image: God, Artificial Intelligence and You, and it is available from Matthias Media. My name is Dominic Steele. This has been The Pastor's Heart. We will look forward to your company next Tuesday afternoon.