The Uses and Ethics of AI


Summary

In this episode we talk all things AI with Assistant Professor of Operations Research and Machine Learning at Carnegie Mellon, Zachary Lipton.


Announcer: Welcome to Not Your Father’s Data Center Podcast brought to you by Compass Datacenters. We build for what’s next. Now here’s your host, Raymond Hawkins.

Raymond Hawkins: Welcome again. Let’s see, today it is February 23rd. We are recording. We are approaching a year and a month into a global pandemic. Welcome again to another edition of Not Your Father’s Data Center. I’m Raymond Hawkins, Chief Revenue Officer at Compass Datacenters and I am joined by Zachary Lipton, who is the BP Junior Chair, Assistant Professor of Operations Research and Machine Learning at Carnegie Mellon University. So a translation for our listeners, that means he’s a lot smarter than I am. Zachary, thanks for joining us.

Zachary Lipton: Thanks for having me.

Raymond Hawkins: So Zachary, we’ll dive in, if you’re willing, just to do a little bit of background on you. Certainly, it’s a pretty technology-centric and data center-centric audience that we speak to, but we’ve found that understanding and learning a little bit about who’s going to be talking to us today is interesting and engaging and makes it far more fascinating for folks. So do you mind giving us a little bit of your history? Where you’re from; certainly we have your bio, that you got a degree in Economics and Mathematics from Columbia, and then a Master’s and Ph.D from UC San Diego. So we can have a great Keynesian and Austrian economics conversation on another podcast, but for this one we’ll stick to machine learning and AI. But tell us a little bit about you.

Zachary Lipton: Yeah, sure. I mean in a nutshell, I grew up in New York, kind of a sleepy suburb of New York City. I ended up going, I guess my first kind of passion was music. So I was playing jazz music and it was a cool time, growing up in New York, and not during a global pandemic. And while a lot of the great musicians were alive, like Branford Marsalis lived in my town and actually his son and I went to high school together and I used to go over to his house and get saxophone lessons.

Raymond Hawkins: Oh, wow. Yeah.

Zachary Lipton: My main love, like what I thought I wanted to do and I ended up going to college at Columbia. I kind of had … I don’t know. I really liked music, but I also, I guess maybe part of it was the school, like learning jazz, that I came from was like sort of you learn it from the community, not necessarily from a school. So I felt like I wanted to go to university to get more of a secular education.

Zachary Lipton: So I ended up going to Columbia for undergrad, and studied Math and Economics, and then was also playing music and that was kind of like an interesting balance for me. It was cool, because I was in New York City and I could go out and play, but I also was kind of learning technical things that had this weird different daytime life and nighttime life, and that kind of eclectic existence. I felt maybe balanced [inaudible 00:02:58] right.

Zachary Lipton: And then after I graduated from undergrad, I was just playing music mostly for a while, and also had some health things that derailed me for a little bit. And then when I was getting back on my feet, it was sort of like, do you just go back to doing what you were doing before, or do you just pick a new adventure? At the time, I had a really good friend who was doing a Ph.D, actually in music. He was a great pianist, and he was studying composition at UC Santa Cruz. New York’s not a great place to have no money and be sick and stuff like that. I was kind of like living in a rent stabilized apartment where I think the landlord’s strategy was to get everyone to move out by making it as unlivable as possible so they could eventually jack the rents up. So it was kind of this like smelly urine and [crosstalk 00:03:49].

Raymond Hawkins: Oh, tough stuff.

Zachary Lipton: [crosstalk 00:03:51] vomit on [crosstalk 00:03:51] New York. And then I just went out to Santa Cruz. I had already been inclining towards academia as maybe the last place that things felt right. I went out from being in New York, where I was coming out of a winter and this like smelly apartment, and then visited my friend at UC Santa Cruz, which is just like paradise. All the 80-year-olds look like they’re 30 years old, all the produce is ripped fresh out of the ground, you have a beautiful view of the ocean. And I knew I didn’t want to do a Ph.D in music, but something about the experience of doing a Ph.D, [inaudible 00:04:31] of being at the university, and specifically of being in California on the coast, everything just kind of felt right. And so I just came back to New York, broke my lease, moved to California. I was trying to figure out what I wanted to do a Ph.D in.

Zachary Lipton: It was sort of like the decision to go to grad school preceded the decision of what the field would be. And I was kind of mulling it over, and I felt like maybe I’d want to do something in life sciences, and so I was exploring that. At some point or other, I had self-taught some amount of computer science out of undergrad. And that had been something that clicked with me earlier, and it seemed like kind of a ridiculous idea that someone would let me into a Ph.D program because I really hadn’t programmed in like seven years and didn’t really know much. But I don’t know, this idea of doing a Ph.D just kind of snowballed and became sort of a weird fantasy that I just kind of forced into existence. I basically came back, took the GREs, broke my lease in New York, drove across the country and moved to California, got really lucky that someone found my application weird enough or interesting enough to give me a chance at UCSD, and I entered the field of Machine Learning as my chosen area.

Raymond Hawkins: Well, I’ve got a thousand questions I want to ask you. So this is a great intro, thank you for giving us a little background and where you’re from. Your passions, love hearing the jazz musician passion, would love to get your thoughts on the movie "La La Land" at some point, I don’t know whether to love the movie or hate it. But would love to ask you some specific questions around things that you do today. Before we do that, I’m going to do trivia question number one. Again, email us at the show with your answers; everybody who gets correct answers goes into a drawing for a $500 Amazon gift card. Question number one: Zachary teaches at Carnegie Mellon University, can you tell us who the university is named after and what he did to gain notoriety? So that’s trivia question number one. All right, Zachary. So you run the Approximately Correct Machine Intelligence Lab. I’d love to understand the name Approximately Correct, I know it’s also your blog as well. So give us a little history behind it, what do you mean by Approximately Correct, and help us understand that a little?

Zachary Lipton: The name is a little bit of a play on a few things. So one thing is, in learning theory, one of the canonical framings of learning problems is "probably approximately correct" learning. It’s like, what can you say about some predictive model given that you trained it on some samples? And you could say, "Well, with high probability, it is close to the optimal predictor." Or something like that. So it’s like, you can’t exactly solve for the exact thing you’re after, you’re constrained by the fact that you’re learning from data, but you can produce something that is approximately correct, and with higher probability you could be closer to the right answer as you get more and more data.
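For reference, the textbook form of the guarantee he is describing looks roughly like the following; this is an editorial sketch added for readers, not a quote from the episode: with enough i.i.d. samples, the learner returns a hypothesis whose error is, with high probability, within some tolerance of the best achievable in the class.

```latex
% Rough statement of the agnostic PAC guarantee alluded to above (illustrative sketch, not from the episode).
% For any accuracy target \epsilon > 0 and confidence target \delta > 0, given
% m \ge m_{\mathcal{H}}(\epsilon, \delta) i.i.d. training samples, the learner outputs \hat{h} with
\[
  \Pr\!\left[ \mathrm{err}(\hat{h}) \;\le\; \min_{h \in \mathcal{H}} \mathrm{err}(h) + \epsilon \right] \;\ge\; 1 - \delta .
\]
% "Approximately correct" is the \epsilon; "probably" is the 1 - \delta; and collecting more data
% lets you push both \epsilon and \delta down, matching the "closer as you get more data" point.
```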

Zachary Lipton: So that’s one play; that usage is something that’s familiar to any machine learning academic. And we do have, I’m not at core a theorist, but I do have some students that are much further on the theoretical side and we do some ML theory. So there is that aspect of the play. The other side is the way machine learning is sort of always used in the real world: as something that’s not quite right, but sort of gets the job done, and maybe the benefits of scale outweigh the ways you’ve misframed the problem. So it’s used in this other way, not in the learning-theoretic sense, but in this fuzzier way. Machine learning is very often this sort of approximately correct solution for the kinds of problems that we’re directing it at.

Zachary Lipton: And then maybe the third usage for me, that was in mind when I started that blog. I started that blog around the time that I was working at Microsoft Research. And part of it was a sort of annoyance; I had very strong opinions about public writing about science and how to communicate what’s going on in research to the broader public. And one thing that always bothered me a lot was the way that people didn’t seem to communicate thoughtfully or honestly about uncertainty. Like, what are the things we know? What are the things we don’t know? What precisely is uncertain, or what are the holes in our understanding? And I feel like this was an important part that was missing from the picture of a lot of writing about science generally, and something I was trying to accomplish with my way of relating to the broader public. So I think those three things together were the inspiration for the name.

Raymond Hawkins: Will you help me understand the difference between artificial intelligence and machine learning? I know in my industry, in the data center, we’re excited about both because they drive more capacity in our space, but I don’t think the folks that are in my business and [inaudible 00:09:46] in our audience understand the difference between the two. Can you give us two minutes on that Zachary? That would be awesome.

Zachary Lipton: My sense is that, as far as the folks in your industry are concerned, there’s a practical answer to what these terms mean historically and semantically and the ways in which they’re different. And then there’s how they’re being used in the industry by folks today and what people are referring to. And the short answer is, I think as far as people thinking about cloud compute and deep learning, like using GPU instances to do AI, to do machine learning, the terms are actually used interchangeably. What’s happening right now is that basically there’s this way that people just brand things. It’s like, some group of people come up with some algorithms, some effective ways of using data to do whatever, ranking or extracting marketing insights or whatever, whatever, whatever.

Zachary Lipton: And they call themselves big data companies, then everybody calls themselves a big data company. They say, "Oh no, we’re not a big data company, we’re an analytics company." And then everyone calls themselves [inaudible 00:10:51], and we’re not an analytics company, we’re a machine learning company. Okay, now some of those things are distinctions, but you can’t divorce yourself from the fact that a huge part of what’s going on is that people are using the nomenclature as a way of differentiating themselves. So I’ll just give this example to show how it’s kind of clownish: imagine physicists had some breakthrough and then a lot of people get interested in physics, and then people say, "What are you working on?" And you say, "Oh no, we’re not doing physics. We’re doing [schmhysics 00:11:23] or something."

Raymond Hawkins: Yeah, no. 100%, agree. Absolutely agree.

Zachary Lipton: But I think there’s such an incredible amount of ignorance here that nobody knows what any of these companies are doing, they just know it involves data. And because companies aren’t necessarily dealing with a customer on the other side who really knows what they’re doing, the technology doesn’t just stand on its own; they feel this need for a perpetual rebranding. So a lot of what’s being called AI now, well, AI was sort of a taboo word because it had a bad reputation. It was associated with an academic field that lost a little cachet, mostly owing to a feeling that it had sort of over-promised and under-delivered, had overclaimed a lot, had not been so rigorous. The subfield of AI that got more focused on a very statistical way of doing things, and specifically fitting models from data, branded itself as machine learning and kind of shook off the AI term.

Zachary Lipton: But as soon as it became super successful and super popular, mostly in the last say 10 years, owing to the rise of deep learning, which is just a specific class of machine learning algorithms based on neural networks, suddenly, as it started getting popular, people started adopting the AI term again. Whereas the earlier adoption of the word ML versus AI coincided with an actual change, those people were casting off a whole family of approaches and types of algorithms that they weren’t interested in, to focus specifically on this sort of statistical machine learning, so that coincided with a real change in direction, at least among that sub-community. The shift back to adopting the term AI, as you’re seeing it now, describing broadly just companies that are doing machine learning, that to me is just marketing fluff.

Zachary Lipton: Now that said, I would step back and just point out that the two terms are a bit different historically, however deep you want to go down that rabbit hole, maybe in two ways. One is that the term AI was originally a term embraced by the people who were adopting a symbolic, logic-based approach to building intelligent systems. Whereas machine learning really grew out of a rival approach to building intelligent systems that was associated with a school of academics who were called the cyberneticists. And that might sound kind of antiquated or goofy, but actually the stuff that is successful now in machine learning, whether it’s neural networks or reinforcement learning or even control theory, these things actually are the intellectual legacy of the cyberneticists, not of the people who created the term AI.

Zachary Lipton: So that’s the historical difference. I’d say that more recently, though, AI has become kind of an umbrella term. And so what I think is the common usage among academics, at least of how they break these things apart, is that AI has become this umbrella term for the wide discipline of trying to build technical systems that perform tasks that until recently we thought required human intelligence. So that may or may not involve learning from data, right? At one point in time it was all about efficient search algorithms and tree search, like the way they built Deep Blue to play Garry Kasparov at chess. There was no data involved in that; that was all about efficient algorithms for tree search.

Zachary Lipton: Then within that, is a specific family of algorithm we call machine learning. And machine learning is about algorithms that learn and by learn, we mean that they improve at some task as a function of experience as they acquire more data. So the AI is about broadly any approach that does things that we think required human intelligence, machine learning is specifically about learning from data. So in that view, which I think is like the most sort of productive and sort of maybe closest to universally held among like actual academics now of the most useful deployment of those terms, like AI you could think of as like a wider tent and ML as like a narrow subset within it that’s specifically focused on things about learning from data or learning from experience.

Zachary Lipton: And I’d say, it just so happens that most of the action in the last 10 years, most of the real change, has been concerning the machine learning subset. To the extent that we’re suddenly now renaming it AI, I think this is speaking more to a need from marketers to keep the brand fresh, to say, "Well, you were doing machine learning five years ago, so what are you doing now?" And it’s not satisfying to come back and say, "Well, we’re doing more effective machine learning." [crosstalk 00:16:21].

Raymond Hawkins: We’re doing better machine learning. No, we’re doing AI now. Yeah, yeah, I hear ya, yeah.

Zachary Lipton: So I think that’s more or less what I have to say about that.

Raymond Hawkins: No, that’s awesome. That’s a really good academic understanding of it as well as how it applies to the commercial world, which is where most of our listeners come from. So I’m going to sneak in two more Zachary Lipton-related trivia questions and then I’ve got one more question for you. So Columbia University, where you got your Economics BA: give me the most famous investor graduate from Columbia and the most famous political graduate from Columbia. Those are your questions. Again, you can email your trivia question answers to rhawkins@compassdatacenters.com. rhawkins@compassdatacenters.com, Columbia’s most famous investor graduate and Columbia’s most famous political graduate. All right. Zachary, let’s go to unethical AI. So you’ve given us AI as, you called it, more of an umbrella term. I did a little reading getting ready for us. You hear this term unethical AI, and who should stop it? And you, I think, touched a little bit on what’s the right use of facial recognition software. Can you give us two or three minutes on how to think about unethical AI, what it means, and what are the questions being asked at the academic level about it?

Zachary Lipton: So maybe it’s worth stepping back and thinking broadly. Ethics is not a property of just, say, an algorithm in the abstract. If you spend your entire world where the only thing you think about is, I have data points that come from some 1,000-dimensional space and they are separable, and what is the convergence rate for being able to separate them or identify a hypothesis in some class or something, then you’re not necessarily addressing a problem that directly maps onto any kind of ethical concern. However, that’s not what almost anybody is doing. I would say it’s a vast minority, right? It’s only [inaudible 00:18:25] side of the community that’s really concerned with more abstract mathematical questions.

Zachary Lipton: The majority of what people are doing is they’re training models on real data and hoping that by virtue of training these models, they’ll be able to create some kind of product out of it and actually deploy it in the real world. And deployment almost always consists of either making or influencing some kind of decision automatically, right? So we go back to justice: what is justice? If you go to, I don’t know, the Stanford Encyclopedia of Philosophy and you look it up, they have this nice long entry on justice, and you see this sort of central definition that justice concerns rendering unto each his due or her due. So it concerns determinations of the allocation of benefits and harms in society, and some kind of questions about how these relate to what people’s rights are.

Zachary Lipton: And so how does this get back to machine learning? When does machine learning become a concern of justice? And I think the answer is when it’s operationalized to somehow drive some kind of actual decision that actually affects people, that actually affects the allocation of benefits and harms in society. And so where is that happening? And the answer is, well, all over the place, right? If you look at all social media, there’s so much crap out there that it’s completely unnavigable absent some amount of curation right now. And so what is curation? Well, someone uses machine learning to decide what people should see. And so the result is now someone’s ability to be heard is mediated by the choices made by algorithms that are, for the most part, trained in some kind of clumsy and ad hoc way, right?

Zachary Lipton: And that’s not to say that’s an easy problem or that people are being negligent, but rather that basically, we’re trying to solve a very hard problem that we’re not quite equipped for. So what we do is we come up with proxies. So we say, "Okay, I’m going to go train a model to just predict, is this user likely to click on this item?" And then I’m going to make this kind of ad hoc decision that the way I’m going to show you items is I’m just going to show you the stuff you’re most likely to click. In so doing, though, you’re prioritizing some content over other content. You’re amplifying some voices, you’re silencing others. Even among people that you might actually follow, you might see nothing that they share.
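To make that proxy concrete, here is a hedged sketch of the pattern he describes: fit a click-probability model on logged data, then order a feed purely by predicted click probability. The feature names and synthetic data are hypothetical stand-ins; real feed-ranking systems are far more elaborate.

```python
# Sketch of curation-by-proxy: rank items by predicted click probability.
# All features and data below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical logs: each row is an item's features; the label is whether the user clicked.
X_logged = rng.normal(size=(5000, 4))   # e.g. recency, author familiarity, topic match, length
clicks = (X_logged @ np.array([1.5, 1.0, 0.8, -0.5]) + rng.normal(size=5000) > 0).astype(int)

click_model = LogisticRegression().fit(X_logged, clicks)

# Candidate items for one user's feed: score them and sort by predicted click probability.
candidates = rng.normal(size=(10, 4))
p_click = click_model.predict_proba(candidates)[:, 1]
feed_order = np.argsort(-p_click)       # the "curation" decision: who gets seen, and in what order
print(feed_order, np.round(p_click[feed_order], 2))
```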

Zachary Lipton: This is just one instance, and I mean, this is actually maybe not the typical pedagogical example that someone would go to. You’d more likely expect someone to talk about the way predictive algorithms are used to provide risk scores. This is an area that I’ve worked in a bit with my colleague, Alexandra Chouldechova, and her student Ricardo Fogliato. We’ve done a lot of work, and she and Ricardo have done much more, looking at the use of machine learning algorithms, even simple ones like basic statistical prediction algorithms such as logistic regression, to train risk prediction models in the context of criminal justice. Basically you have some defendant and you collect some number of attributes. How many siblings do they have? What job do they have? What zip code? How many prior offenses do they have? What fraction of them are violent, et cetera, et cetera. You get some number of features and they try to predict something like, how likely is this person to commit a crime if released?
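The risk-scoring recipe he sketches can be written down in a few lines. Everything below, the feature names, the synthetic data, and the choice of outcome label, is a hypothetical illustration of the general pattern, not the actual tools he and Chouldechova study.

```python
# Hedged sketch of a recidivism-style risk score: logistic regression over defendant attributes.
# Features, data, and the outcome label are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical features: age, number of prior offenses, fraction of priors that were violent, employed (0/1)
X = np.column_stack([
    rng.integers(18, 70, n),
    rng.poisson(1.5, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 2, n),
])
# Hypothetical outcome label, e.g. "rearrested within two years" (choosing this label is itself the proxy issue)
y = (0.4 * X[:, 1] + 1.0 * X[:, 2] - 0.5 * X[:, 3] + rng.normal(0, 1, n) > 1).astype(int)

risk_model = LogisticRegression(max_iter=1000).fit(X, y)

defendant = np.array([[25, 3, 0.33, 0]])
score = risk_model.predict_proba(defendant)[0, 1]   # the number handed to a judge as a "risk score"
print(f"estimated risk score: {score:.2f}")
```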

Zachary Lipton: And then they assign this score to every defendant and they provide this score to a judge as, ostensibly, some kind of objective score to say who’s a high- versus low-risk defendant, so that this can inform their judicial decisions. There are some things which are decisions that AI is making autonomously, like which content you should see in your newsfeed, where it’s just a scale that’s so large there’s no opportunity to have a human actually interacting in the loop and making manual curation decisions. There are other decisions, like in criminal justice, where these machine learning tools are being provided as a supplementary piece of information that maybe doesn’t directly make a decision, but influences the decisions that get made.

Zachary Lipton: There are other contexts, like a credit scoring system, where automatic loan approval decisions might be getting made automatically based on such a statistical score, at least for low loan amounts, like consumer loans, your loan to help you buy your phone or your laptop or something, and it might be assisting a credit committee who ultimately makes the final determination for a much larger loan that requires some kind of human oversight. But the high level here is, if you look across society at all these ways that machine learning is being used, whether it’s in criminal justice, in lending, in the propagation of information through what is increasingly the only bottleneck to accessing it these days, which is social media, then you suddenly have these technologies being deployed.

Zachary Lipton: Even if you know that they’re framed, from a technical standpoint, just as prediction problems, like predict the click from the content and the user, something like that, that’s not exactly what they’re doing at the end of the day. They’re not just making a prediction, they’re actually making a decision about who to show what. Or they’re making a decision about who to lend money to versus not, or making a decision about do you incarcerate someone versus do you not? Or they’re at least assisting in the making of that decision. And then it becomes a question about, "Well, who benefits and who is harmed?" And so there are all kinds of ways that you could see how this could start going wrong, right? Like if it turned out that you’re training your recidivism model for criminal justice on some proxy; for example, let’s say that you were training the model to predict arrest, but we already know a priori that certain demographics of people tend to be more over-policed.

Zachary Lipton: And so even if they committed crimes at equal rates, they would be more likely to be arrested for them, right? Then you’d be in a situation where those people would potentially be disproportionately recommended to be incarcerated, even if they were at equal risk. And so one way that bias creeps in here is that maybe what you mean to predict is who’s going to commit an offense, but what you’re actually predicting is the data that was available, which is who’s likely to be arrested. So there are all sorts of contexts here; resume screening is another one. I think some of us are lucky enough, like I think one of the nice things about a tenure-track faculty position, for all the stressfulness of it, is this hope that one day you’ll just never interview for another job again.

Zachary Lipton: You’ll just sit with your books and read and never think about it. But for the most part, most of the people in the world have to interview for jobs. In my parents’ generation, people had jobs; I think many people worked at the same company for life. Now it’s quite a dynamic world and people interview for jobs often, every few years, and over the last few years there’s been a huge shift towards relying on these sort of automatic prediction-based tools for weeding out resumes. For deciding which resumes to pass on to the interview stage versus just weed out altogether. So on one hand, this is obviously appealing to would-be hirers, because the volume of applications might just be so large that it’s very hard to manually look at all of them, but the downside, and the question, is, "Well, on what basis are people’s resumes being elevated or deprioritized?"
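One crude way to ask "on what basis" a screening model elevates or deprioritizes resumes is to train it on historical screening decisions and inspect what it learned. The sketch below is hypothetical throughout, with invented features, synthetic data, and a deliberately skewed historical label, just to show that the model inherits whatever basis the past decisions used.

```python
# Rough sketch of the resume-screening concern: a model trained on past screening decisions
# learns whatever signals those decisions relied on. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 3000
features = ["years_experience", "has_degree", "skills_keyword_hits", "attended_elite_school"]

X = np.column_stack([
    rng.uniform(0, 15, n),
    rng.integers(0, 2, n),
    rng.poisson(3, n),
    rng.integers(0, 2, n),
])
# Suppose past screeners leaned heavily on the elite-school signal; the model will too.
passed_screen = (0.1 * X[:, 0] + 0.3 * X[:, 2] + 2.0 * X[:, 3] + rng.normal(0, 1, n) > 2).astype(int)

screener = LogisticRegression(max_iter=1000).fit(X, passed_screen)

# Inspect the learned coefficients: one (crude) answer to "on what basis are resumes elevated?"
for name, coef in zip(features, screener.coef_[0]):
    print(f"{name:>22s}: {coef:+.2f}")
```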

Zachary Lipton: And so you could think of it this way: the ethics is not something that is part and parcel of the algorithm itself. Rather, the status of being ethically fraught is a property of the scenario, the real-world decision that you’re making. Some decisions aren’t; like if I had to train a machine learning model to predict whether I’m going to eat Froot Loops or Lucky Charms in the morning, nobody cares. It doesn’t matter what algorithm I use, there’s no ethical import because it’s just not [crosstalk 00:26:38].

Raymond Hawkins: Slightly different in importance than recidivism rates. Absolutely. Yeah, yeah. Hear, hear.

Zachary Lipton: It’s not a concern of ethics. However, on the other hand, who goes to jail? Decisions about the carceral system are fundamentally serious questions about ethics, whether or not we use machine learning. But because they are, when we deploy machine learning in those environments, we have to understand how the decisions made by the machine learning line up against those ethical considerations. And I think that’s the situation that we’re in as people get more and more ambitious about using machine learning for surveillance. You see it being used for face recognition, and there are difficult questions to face. On one hand, I think a lot of people quite reasonably are apprehensive about ever allowing face recognition to be used by law enforcement, and concerned about entering a sort of surveillance state, and there are questions about, well then, who is heavily surveilled, right?

Zachary Lipton: And it’s like, well, if you own a bunch of property and whatever, maybe you’re not likely to be spotted at any moment. But on the other hand, if you live in densely populated areas, if you live in public housing, maybe your life will be impacted significantly. So there is a power dynamic to be considered in how this stuff would be applied. On the other hand, after the Capitol riot, facial recognition is being used to identify domestic terrorists, and I think most people think that’s a good thing. And so there are these very difficult questions, and I’m not coming out saying, "I’ve solved all of them or know exactly the right way that every technology should be regulated." But I’m trying to paint some kind of balanced picture of how these are hard decisions. There’s nothing about AI or ML that makes it magically immune to these ethical decisions.

Zachary Lipton: Because these ethical dimensions are aspects of the problem, not of the algorithm in the abstract. And so when we start making decisions using an algorithm, we have to think about how the algorithm matches up against various ethical desiderata, just like you would for any human decision-maker in those settings. And the question then becomes, well, we think we have some kind of framework for understanding how humans behave and what motivates humans, what sources of information humans have; you think you have some kind of theory of mind that could maybe help you understand how judges behave or something like that. Whereas for these data-driven systems, it’s a little bit harder to wrap your head around what the failure modes might look like.

Raymond Hawkins: Yeah. I love your summary too, though, Zachary: it’s not the machine or the algorithm that’s inherently ethical or [inaudible 00:29:26] in itself. It fits in the context of the larger question, which I think is key to what we’re saying here: "Hey, what’s the problem we’re trying to solve? What’s the question we’re asking? What’s the ethics of that?" Whether we enhance it with machine learning or AI is not really the key part of the equation, if I capture what I think you were summarizing.

Zachary Lipton: I’d say yes and no on that. The AI can complicate the scenario, because maybe we have some framework for how to think about a problem based on decision makers that we can relate to, and we don’t really know how to parse the ways that AI might go wrong in those ethically loaded scenarios. But yeah, right, I agree with the main part: even if the AI complicates things, it is still in the context of a particular situation that is already loaded with some kind of ethical import.

Raymond Hawkins: With some ethical factor carried into the problem without the computer’s assistance, whether ML or AI. Well, this has been awesome, Zachary. I think we’ll have to have you back, because you’ve been so interesting to talk to and have so much good info for us. I’m going to get one more trivia question in that is related to your history: I want to see if folks know what the mascot for UC San Diego is. That is trivia question number four. Again, answer all four questions, email us with your correct answers, and we’ll get you in a drawing for a $500 Amazon gift card. Zachary Lipton, it’s been great having you. We are super grateful for all your insight and would love to have you again on another edition of "Not Your Father’s Data Center". You’ve been great, thank you so much.

Zachary Lipton: Great. Great to meet you.