Ethics of AI with Peter Judge


Summary

A conversation on the AI ecosystem and the forces shaping its ethical questions.

Transcript

Raymond Hawkins: Well, welcome, listeners, to another edition of Not Your Father’s Data Center. I am Raymond Hawkins, recording here on the… I think the 11th of August… no, it’s the 12th. Sometimes I lose track of the days. Here on the 12th of August in our Dallas headquarters, we are joined today by my friend and now repeat guest, the global editor of Data Center Dynamics, Peter Judge. Peter, last time we did this, I didn’t get to see your face, so this is a thousand times better. Peter, thanks for joining me again.

Peter Judge: Yeah, okay. So when you ask for the next one to be audio only, I’ll understand.

Raymond Hawkins: That’s right. I know I qualify as having a face for radio. You, I think, are a little better off. But no, we’re super happy to get to do this, and our listeners are too, I think. We definitely see higher engagement now with these video recordings, so good stuff, and we’re glad to do it. So Peter, for folks that weren’t with us, I think you and I recorded our first session together almost a year ago, last summer. I should have looked it up, but I think it’s been about a year. Do you remember the date? I don’t.

Peter Judge: Yeah, that’s right, we were in the middle of a pandemic and sitting at our desks and so much has changed since then.

Raymond Hawkins: Yes, yeah, a lot. I was just going to say, a lot has changed, except that not much has. The world understands the pandemic a lot better, but man, I never thought the world would look this way. It’s been an incredible change. But if you don’t mind, Peter, as we talk about things that you know really well and areas of your expertise, I’d love for folks to get to hear a little bit about you, especially the folks that didn’t hear the first time. Do you mind giving us two or three minutes on just you? How you got into the publishing business, how you got into the writing business, how you got into the business of writing about technology, and then how you got into the data center business. And then a little bit about where you’re from. I think our listeners always enjoy hearing the who before they hear the what. So do you mind giving us a little Peter Judge background?

Peter Judge: Yeah, that’s fine. I studied science at university, and I also did a little side degree in art after that. But I found myself afterwards just falling into a job as a technical editor and writer, round about the time that PCs and then the internet were appearing. And essentially from then till now, it’s been a continuous process of seeing everything coming, cycle after cycle of change happening, so there’s always more to write about and understand. So networks, security, and now the data center and digital infrastructure revolution. It’s kept me busy, interested, and engaged so far, and I don’t see that changing anytime soon.

Raymond Hawkins: As we do in every episode of Not Your Father’s Data Center, we have trivia questions, so please feel free to email your answers. You can email me directly at rhawkins@compassdatacenters.com. Everyone who gets the answers correct will win Amazon gift cards, so mail in your answers. All of them this time are multiple choice. The first one: “In what year was the Turing test proposed? A) 1948, B) 1949, C) 1950, or D) 1951.” Question number two: “What was the name of the first computer system to beat the world chess champion? Was it A) Big Red, B) Yellow Dog, C) Green Machine, or D) Deep Blue?” And for our third question: “According to Statista, how big will the AI market be by 2025, just four years from now? A) 126 billion, B) 221 billion, C) 301 billion, or D) 350 billion?”

Raymond Hawkins: “According to Statista, how big will the AI market be by 2025? A) 126 billion, B) 221 billion, C) 301 billion, or D) 350 billion?” Email your answers to me at rhawkins@compassdatacenters.com. We look forward to hearing from you, and as always, we are super grateful for Peter Judge joining us. So Peter, let’s take a minute. You recently wrote an article about AI… I’m jumping subjects here for a second, but if you’re willing to talk a little bit about AI with us. You hear this term ethical AI, and I think there are questions about what we get. There are certainly lots of movie images that come to mind. I think of maybe some classic examples. What was it? The Precogs? I’m trying to think of the movie with Tom Cruise, where they tried to anticipate people who were going to commit crimes. That was all built around some sort of artificial-

Peter Judge: Yeah, Minority Report.

Raymond Hawkins: Minority Report, that’s it. Thank you for helping me remember. You have these movie images, and then I, Robot, with all kinds of intelligence inside the robots that Will Smith battled. So there are these movie images of what AI can be. But can you give us a few minutes around the idea of what is ethical AI, and what do we think of it? I think of Deep Blue winning chess games, but that dates me a little bit. What are we really thinking about when we talk about ethical AI?

Peter Judge: The first thing I would say is there are people who are much more expert in this than me. However, I think the first thought is, when you have systems that can to some extent calculate and think, and they do it fast, you reach a slightly undetermined area where you’re not sure what response you’re going to get back and where it’s coming from. So it’s really important to understand the algorithms that are behind it, to understand why they’re coming to the conclusions they are. And I would have thought that ethical AI is as much about the people that are using it as about the AI itself.

Peter Judge: So we’re not so much worried that AI systems are going to come to wrong conclusions; what we are worried about is whether we put them into positions of responsibility that they aren’t ready for. It’s more about the uses of it, rather than the technology itself. I suppose it’s a bit like when people say, “Guns don’t kill people. It’s people using them.” It’s the same with AI. It’s not even important to us to know whether it’s thinking or not. If you’ve got a technology that can do something, you need to be sure that what it’s doing is what you want it to do.

Raymond Hawkins: So, Peter, I’m in complete agreement; I think you’re absolutely right. Could you give us a minute on this? I think there are two ethical questions in AI. To your point, we program the algorithms, right? “We” meaning humans. There’s some assumption or some thought that is written into the algorithm, some hypothesis, right? And that is an opportunity for an ethical question. And then there’s how that AI gets deployed. Are we using it to help us figure out how to cure cancer? Are we using it to find criminals’ faces as they go through the airports, or are we using it for something nefarious? So I think there are two different ends of that ethical question: who built the algorithm and why, what’s their hypothesis behind it, and then how do they intend to deploy it? Those are the two I hear you describing. [inaudible 00:08:42] Did I not do that accurately, Peter?

Peter Judge: Yeah. I mean, with what we design the algorithm to do, the problem there is really our understanding of what we’ve asked it to do. A lot of algorithms are just so poorly designed and thought out that they don’t actually do what we think they’re going to do. So earlier this year… there’s been a lot of talk, it’s pretty topical, that we could use AI maybe to diagnose COVID well, or to evaluate treatments well. Because [inaudible 00:09:19] even at the quite simple sort of machine learning level, just to examine all the data that you’ve got and pick out patterns that you haven’t otherwise thought of, or to examine a whole lot of chest x-rays and see which people have got COVID or not.

Peter Judge: I mean, there was a professor who thought, “Well, how well is this going?” and looked into all the AI projects that have been tried for helping with COVID, and found that literally none of them had helped. Some of them were even unhelpful. And a lot of it is just down to thinking things through before you start. One AI project, they thought they were really onto something: they were showing it chest x-rays, some of them from people that they definitely knew had COVID and others from people who definitely didn’t. The only thing is that the people that didn’t have it were younger than the people that had it. And within the training data they’d got, their system was really good at picking out the ones that had got COVID.

Peter Judge: Give it a completely different set of data, and when they looked, it couldn’t tell [inaudible 00:10:53] COVID or not COVID; all it could tell was the difference between young and old. So if you’re not sure of the data you’re feeding it, and the way in which the algorithm is looking at it, you may think you’ve got an answer, and you’re not getting the answer that you think you are. It’s the same way that a lot of AI systems dealing with facial recognition, et cetera, are inherently biased by race, because they’re trained on data coming from white people and just don’t know what they’re doing when they’re outside of that. And if the researchers building those systems don’t spot that, because it’s not something that they’re aware of, it’ll just get baked into the system and you’ll end up with racist AI. Not because AI is a bad thing, but because people didn’t realize that that was a possibility when they were making it.
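
To make that failure mode concrete, here is a minimal, synthetic sketch (illustrative, not from the episode) of the confound Peter describes: when the COVID-positive patients in the training set are systematically older, a classifier can score almost perfectly in training while mostly learning “old vs. young.” The data, the “x-ray features,” and the scikit-learn model below are all illustrative assumptions.

```python
# Synthetic illustration of a confounded dataset -- not real medical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, covid, mean_age):
    """Fake 'x-ray features': one column is a proxy for age, one is a
    weak genuine disease signal, and the rest is noise."""
    age = rng.normal(mean_age, 5.0, size=(n, 1))
    disease = rng.normal(0.3 if covid else 0.0, 1.0, size=(n, 1))
    noise = rng.normal(0.0, 1.0, size=(n, 3))
    X = np.hstack([age, disease, noise])
    y = np.full(n, covid, dtype=int)
    return X, y

# Confounded training set: the COVID patients are much older.
X_pos, y_pos = make_cohort(500, covid=1, mean_age=70)
X_neg, y_neg = make_cohort(500, covid=0, mean_age=30)
X_train = np.vstack([X_pos, X_neg])
y_train = np.hstack([y_pos, y_neg])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Age-matched test set: both groups have the same age distribution,
# so age no longer predicts the label.
Xt_pos, yt_pos = make_cohort(500, covid=1, mean_age=50)
Xt_neg, yt_neg = make_cohort(500, covid=0, mean_age=50)
X_test = np.vstack([Xt_pos, Xt_neg])
y_test = np.hstack([yt_pos, yt_neg])

print("confounded training accuracy:", model.score(X_train, y_train))
print("age-matched test accuracy:   ", model.score(X_test, y_test))
```

On the confounded data the model typically scores close to 100%; on the age-matched test set it drops toward chance, because the signal it actually learned was age, not disease.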

Peter Judge: Yeah, and that’s something that happens with all sorts of technology. I mean, it doesn’t have to be as complicated as AI. If you’ve got a pulse oximeter, something that measures how much oxygen is in your blood, that could be a really good early warning sign of whether you’ve got COVID, because your blood oxygen level goes right down. However, these things have been on sale for 20 years, and they’re really relied on. And it’s only now that people are realizing that they don’t work properly if your skin’s dark. So it’s, again, like any technology. If you don’t understand the problem you’re trying to solve, and understand the setup and the limitations that you’ve put around your system, you get something that’s not going to do the job. It’ll answer the question you asked, but the question you asked may not be the question you thought you asked.

Raymond Hawkins: Right. Yeah, I was reading your article in June about AI, and you talked about, “Hey, there’s this promise we can churn through these mountains and mountains of data.” Well, churning through mountains of data doesn’t inherently change your hypothesis. It doesn’t inherently change your understanding. If you start with the wrong assumptions, you’re just going to end up with the wrong conclusions, or with the same inherent mistakes, is what I think I hear you describing, right?

Peter Judge: Yeah, yes. I mean, simply piling more data into something, increasing the size of the haystack, doesn’t make it that much more likely that you’re going to find the needle. [crosstalk 00:13:53].

Raymond Hawkins: Or doesn’t change that it’s hay. You’re right.

Peter Judge: Yeah, that’s right.

Raymond Hawkins: That’s right. Yeah. “Look, it’s a big mountain of hay, so it’s something different now.” No, no. It’s still a lot of hay, and we’re still looking for an answer. Yeah.

Peter Judge: Yeah. So maybe, like other technologies, we’re still at a stage where it’s been oversold and we’re having to properly evaluate how useful it is without junking it altogether.

Raymond Hawkins: Well, I was not aware of the two examples you gave. Those are great: the oximeter that we clip on our finger, and also the chest x-rays. Just assuming, “Hey, we’ve got mountains and mountains of information, we’re going to come to a better conclusion,” doesn’t necessarily make it so.

Peter Judge: There may be patterns in the information you didn’t know about, yeah. And those are the accidental misapplications. But you also get applications which are questionable by their nature. It’s possible to use analytics to find patterns in people’s social activities online, and then use those to interfere with things. It’s documented that that’s been used to influence public votes in your country and my country. That’s an example where AI isn’t broken; it’s actually doing something quite effective. It’s just that we’re not that happy with what it’s done, necessarily.

Raymond Hawkins: Yeah, we’re not quite certain that’s what we want done with it. Right. I think that’s one of those cases where technology outstrips what we’re ready to do legislatively, right? We’re not sure quite yet how to counter it. There’s the notion that, in my country, “Hey, all speech is protected.” Okay, what qualifies as speech, what qualifies as protected, how does it get managed, and is all electronic interaction speech? Those are questions that we’re not yet ready to solve legislatively, I think. Yeah, interesting. Those are all great examples for me of how just applying technology didn’t necessarily make things better.

Peter Judge: Yeah, yeah. I mean, in most cases you can see that things are… yes, you can pick up a smartphone and realize you would rather have it than not have it. But there’s a lot more nuance to the situation than that.

Raymond Hawkins: Yeah, I was having a conversation with a friend of mine this past week, because I look at the younger generation, and such a high percentage of their interaction with their friends is digital, through this tiny little screen. And they didn’t learn to read your body language, and the social cues for when I’m saying something that’s making you uncomfortable, or when you’re ready to interject and I should pause so you can speak. All of that subtle stuff you learn by just physically being around other human beings. I’m interested to see, as this generation has so much of their social interaction be digital, how they integrate into the professional world. How do they adjust when they’ve got to go to an office and interact with other people that aren’t their close friends, and read those social cues and communicate effectively? Because, to your point, there’s so much greatness in this phone, but there are also, I think, things that are changing how we interact.

Peter Judge: Yeah. And that moves to another conversation I’ve had this week, really. We’re all doing so much more through conversations like this, online, through a screen, during COVID, but it’s really just an acceleration of what the younger generation, and society as a whole, is going through. There are some instances where I think this is being oversold and over-promised. People expect to be able to turn the whole justice system into a remote video interaction, and there’s a lot of utopian thinking going on there that makes people think, “Yes, it’s going to be great.” But actually, check the statistics.

Peter Judge: I think if your life’s a bit more chaotic and you haven’t got a space to do this, you’re not going to come across well in a video conference. And it’s been shown, for instance, that when people are in difficulties, say at an employment tribunal, they will win their case half the time if they’re there in person. They will win their case 14% of the time if all the judge sees of them is this two-inch face. It’s a lot easier to dismiss people, it’s a lot easier to not really respond to the person you’re seeing, and it’s a lot easier to convince yourself that you don’t need to reach out to them and understand them, if they’re tucked away in a screen like that.

Raymond Hawkins: A two-dimensional tiny view is not the same as you and I sitting across the table from each other. It’s just not the same. Yeah.

Peter Judge: Yeah. And in a more extreme version, when people interact with people over Twitter or social media, they can be completely inhuman to them because they stop seeing them as human.

Raymond Hawkins: Yeah, yeah. I think of that phrase, “the paper tiger,” right? Someone that’s willing to write you a nasty letter because you’re not there. And that’s an old term. What we’ve gotten with Twitter is uber paper tigers, right? Not only am I not mailing a letter to somebody I’ve met, I’m sending a 140-character ugly message to someone I’ll never see. And in many ways it brings out the worst in us. Peter, thank you for joining us, and thank you for spending a little time recording here on Not Your Father’s Data Center for our folks to listen to you again. You got rave reviews the first time, so we’re happy to have you back.

Peter Judge: Thanks, Raymond. Pleasure to be here and see you next time.