The State of Liquid Cooling



Summary

Daniel Pope discusses immersion cooling technology in data centers.

Announcer: Welcome to Not Your Father’s Data Center podcast brought to you by Compass Data Centers. We build for what’s next. Now here’s your host, Raymond Hawkins.

Raymond Hawkins: Welcome to another edition of Not Your Father’s Data Center. I’m your host, Raymond Hawkins, and we are recording today on Thursday, June the 17th, to give you some perspective. The world continues to climb out of the pandemic, and things are looking better. I got to ride in my Uber today without a mask. My driver said, “Hey, I’m fully vaccinated, are you? Let’s ride without masks.” So, things are looking up. Today, we are joined by the founder and CEO of Submer, Daniel Pope, out of Barcelona, Spain. Daniel, welcome to the podcast; we’re so grateful to have you joining us.

Daniel Pope: Thank you, Raymond. It’s a pleasure.

Raymond Hawkins: So Daniel, I got to tell you, we talk about some unique and interesting things here on the podcast, but dunking my servers in liquid is not one I ever thought we’d cover. Talk about outside-the-box thinking. That is different, and I’m anxious to hear how this works. For my entire career, we’ve been worried about getting liquids on our servers and how that could ruin our data center, anxious about where the plumbing runs and what we use for fire suppression. Liquids and servers have always been bad. So, I’m fascinated to hear this story. But before we get into how we dunk servers in liquid and keep them running, I’d love to hear a little bit about you. Where are you from? Where did you grow up? Maybe a little bit of work history, and how did you end up with a British accent in Barcelona?

Daniel Pope: Absolutely. Thanks, Raymond. Yeah, so maybe a little bit about myself then. I was a professional rower; I started rowing at a really young age, and I guess that’s the kind of discipline you need in a data center, right? The next thing I actually did, apart from rowing, was start a data center business at a really young age. At 16, I switched on a server in my bedroom and started developing a little website and providing web hosting and email services on that server. Now, that was back in 1999, and with the dot-com boom it very quickly grew into a pretty large business. I ended up with more than 20,000 servers. Not in my bedroom, of course. My parents had chucked me out by then when…

Raymond Hawkins: I was going to say, that’s a big bedroom, Daniel.

Daniel Pope: Yeah, no, we went through the process of building, I think, four different facilities on that journey. We just couldn’t believe the scale that things were getting to. That’s the journey where I became an expert, I guess, in data center design and operations, after four of them. One of the biggest challenges that we always had in the data center was cooling, and how to support the next wave of equipment that was being deployed. Back then we were looking at really low densities in the racks, probably in the range of five to seven kilowatts, most probably. That didn’t change. I sold my business in 2009, and we were still in the range of seven kilowatts per rack. It was only in 2015 that we realized a new challenge was going to reach the data center floor, which was GPUs making their way into the rack.

Daniel Pope: And that’s really where we started to see rack density skyrocket, and some specific applications and workloads really pushed into levels that were not even possible to cool with air anymore. So, we set off to develop immersion cooling technology, and that’s what Submer does today. Our flagship technology is single-phase immersion cooling. Essentially, that means changing the medium around the server. Instead of cooling the electronics and the server components with air, we leverage a dielectric fluid, a non-conductive fluid, which captures and transports the heat in a much more efficient way than air. And I’m sure we’re going to talk quite a lot about that.

Raymond Hawkins: Well, so, we got the rowing history, we got the “I started a data center in my bedroom” history, and you ended up doing three more data centers after that. Give me a little bit of where you live, your personal background. I am fascinated that you’re in Barcelona, but you sound a bit like a Brit. So, tell us a little bit about the personal front, if you don’t mind.

Daniel Pope: So yeah, I’m based in Barcelona, and I was actually born in Switzerland. Interestingly enough, my mother was working for the United Nations for some time, and that’s where I was born. But almost immediately after I was born, my parents decided to go to the United Kingdom. My father is from East London, and that’s where my accent comes from, of course. I lived in the UK until the age of around nine. So, I grew up in the UK, but then at one stage my mother said to my father, “Steve, the food and the weather here are terrible. Can we please go back to Spain?” And he didn’t really have an option.

Raymond Hawkins: How in the world could you prefer the food and weather in Barcelona over London? I mean, it’s lovely in London for the full first week of July.

Daniel Pope: That’s about it. Yeah. So, it was a no-brainer, and I’ve lived in Barcelona ever since. I feel very Spanish, and specifically Catalan, from this region of the world where I sit. Barcelona is an awesome city, very forward thinking, and it’s very easy to find great resources and candidates for all the stuff that we’re doing here at Submer. Essentially, it was in the summer of 2015, when it was as hot as today, around 35 C. I was sitting with my co-founder, Paul, and we were looking at the swimming pool, thinking: if we feel so much better in the swimming pool, what would happen if we stuck servers into a dielectric liquid and cooled them with that? We started doing some initial experimentation that summer, built some of the first prototypes, started to test some of the fluids that were available then, and immediately saw the benefits and the potential. [crosstalk 00:06:51]

Raymond Hawkins: Literally, you’re sitting by the pool and you decide, “I like the pool better in the hot weather. Maybe my server will too.” That’s how this came to you?

Daniel Pope: I swear. Literally, that’s it. Yeah. That’s what happened.

Raymond Hawkins: Holy cow. Yeah, because you were a guy who designed data centers early, and you saw not much change in rack density, five, seven kilowatts, before you saw that first real jump and thought, wait a minute, cooling this with air might be tough. But it didn’t strike you in that environment. It struck you by the pool. That’s a fascinating story. Like you said earlier, Daniel: “Hey, let’s change the medium around the server. Let’s not make it air, let’s make it something else.” Did you think about it from a single-server perspective, or were you thinking, “Hey, I’m going to have a really dense rack, let’s do it for a rack”? Or did you really think, “Hey, this might be a way to manage an entire data center”? Tell me about those early thoughts after the pool.

Daniel Pope: So, we were looking at it from the rack level to start with. Obviously, all the first prototypes were at the server level, and you can see tons of those photos on our website. Actually, if you go to the Wikipedia page for immersion cooling, one of our first prototypes is on that page. So, we started testing the thermals at the server level and at the chip level, but we tackled the design from the rack level: how could we deploy these new liquid-cooled racks without disrupting the rest of the data hall and the rest of the data center design? That was the key thinking. Now, further into this journey, we’re really looking at it from the whole data center point of view. What does a hybrid environment with low-density equipment and higher-density equipment look like? And what benefits can one of these facilities leverage by rolling out immersion cooling?

Raymond Hawkins: So, as I think of dipping my servers in liquid, it makes me incredibly anxious, because of three decades of worrying about the stability of my servers. As you thought about that for the first time, there were challenges there, right? First of all, just putting liquid around the servers scares me, but I also think about the disks, the plugs, the IO devices, whether that’s drives or thumb drives. How do you start to think through those problems, and what do you have to do to the server to make dipping it in liquid an okay thing?

Daniel Pope: One of the things that surprised us first, when we were running these tests on the single-server configurations, was how simple it was for a server that was designed for air to actually work in an immersion environment. We didn’t need to change a lot of things, and I’ll go now into the things that do need to be adjusted. Essentially, we removed the fans, made sure that the BIOS alarms weren’t going off when the fans were removed, and we didn’t need to do much more. Back then, in 2015, SSD drives weren’t as common as they are today in the server space. Spinning disks are the only thing that we can’t leverage in immersion. The only spinning disks that can be used are helium-sealed drives. A traditional spinning disk is not pressurized. It’s at ambient pressure, and it has a hole in it to make sure that that’s always the case.

Daniel Pope: So, obviously, through that hole the fluid gets in, and the speed at which the disk spins is then hugely reduced. It becomes useless. But solid state drives, NVMe flash drives, helium-sealed drives: they all perform perfectly in immersion. When it comes to the design of the nodes: this is a tank, an immersion tank, so look at it as a big chest-freezer kind of system. And one of the biggest challenges back then was that you can’t reach the back of the rack, right? That’s one of the biggest challenges for immersion, I guess. Servers that are designed to be manipulated from the front and the back are a substantial obstacle.

Daniel Pope: So, we’ve been leveraging some standards that are out there, like the OCP, open compute, systems that the hyperscalers leverage. We’re a platinum member of OCP. Those systems work well in immersion because they’re designed to be manipulated from only one side of the rack. And they have an additional benefit, which is that power is distributed not through cables but through power bus bars at the rear of the rack, which in our case is at the bottom of the tank. That makes it super interesting, because we lose hundreds of cables in a facility that we don’t need. And it simplifies…

Raymond Hawkins: The rest just goes away, right?

Daniel Pope: Yeah. But then in the 19-inch type of form factor there are lots of servers that can be leveraged perfectly in immersion now. The whole idea of having everything on one side of the rack is becoming more and more common. You see it not only in OCP, but also in Open19 and some other standards. So, that journey is much simpler now, I guess.

Raymond Hawkins: So Daniel, I didn’t even think about that. I mean, you’re living it every day and have for years, but the challenge is we’re not doing liquid immersion standing up, right? You almost have to think of it as: I’ve turned the rack on its side and I’m sliding the servers in from the top, so that the fluid stays all around them. Right? Because if we stood the rack up vertically, it’d be hard to keep the fluid at the top. I got you.

Raymond Hawkins: So, I’ve laid my rack down. I’ve got access from the top, and I’m dipping the servers into the liquid from the top. Like you said, a tank. It just took me a minute to think through why it works that way.

Daniel Pope: That’s correct.

Raymond Hawkins: Servicing all from the top instead of the backside. Got it.

Daniel Pope: That’s right. They’re horizontal tanks. The servers are installed vertically instead of horizontally, I guess. And then the first question that comes to mind is, “Oh, then it uses up double the floor space of a vertical rack, right? And we don’t have the height.” So, one of the first questions that pops up is: okay, then this must be lower density than a traditional data center deployment, because you don’t have the height. That’s one of the most common questions.

Raymond Hawkins: Right. More physical footprint. Right.

Daniel Pope: So, from a server-U perspective, maybe you do have fewer server Us per square foot. But from a density perspective, the density is tenfold. We are deploying today immersion tanks that are in the range of a hundred kilowatts and that operate with extremely warm water, which means that the overall facility PUE is reduced to around 1.04, 1.05.

Daniel Pope: And that doesn’t account for something which is really hard to assimilate, I guess, if you’re not used to immersion: you remove the fans from the systems. The moment you remove fans from servers, you’re typically reducing the IT power load by anything between seven and up to 15 or 20%, depending on the system design. So, all that power, which is…

Raymond Hawkins: And how hard those fans are working. Yeah.

Daniel Pope: Yeah. That’s considered compute in a data center because it sits inside the server. Although it’s nice and [crosstalk 00:14:54]

Raymond Hawkins: Right. Because it’s inside the server. Right.

Daniel Pope: Yeah.

Raymond Hawkins: Right. Understood. Yeah. Considered IT load, not considered heat rejection. Right. I want to make sure I’m following some of this. So, I’m taking this tank, for lack of a better word; I’m laying a rack on its side. I know it’s not a rack, but in my mind I’ve physically doubled the footprint. But instead of having a rack that’s 15 kilowatts, or even a really aggressive 20 or 30 kilowatts, I can now do a hundred. I might be using twice as much physical floor space, but I can cool up to a hundred kilowatts in what would conceptually be a 42U tank? Is that approximately right?

Daniel Pope: That’s correct. And there are other substantial parts of the infrastructure that are not needed anymore, like the CRAC units or CRAH units or air handling systems, et cetera, which tend to use a lot of floor space in the data center. All that goes away as well. So, we shouldn’t think only of the actual rack itself, but of all the supporting infrastructure to cool those racks. That also goes away with immersion.
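
To put rough numbers on the floor-space trade-off discussed above, here is a quick back-of-the-envelope sketch in Python. Every figure in it (rack density, footprints, aisle and CRAC overhead) is an illustrative assumption for the sake of the arithmetic, not a Submer specification:

```python
# Back-of-the-envelope density comparison: air-cooled racks vs. immersion tanks.
# All figures below are illustrative assumptions, not vendor specifications.

# Air-cooled rack: ~25 kW, plus a share of hot/cold-aisle pitch and CRAC floor space.
air_rack_kw = 25.0
air_rack_footprint_sqft = 8.0       # the rack itself
air_overhead_sqft = 24.0            # assumed share of aisles and air-handling space

# Immersion tank: ~100 kW, roughly double the rack's own footprint,
# but tanks sit back to back and side to side with no aisle pitch.
tank_kw = 100.0
tank_footprint_sqft = 16.0

air_density = air_rack_kw / (air_rack_footprint_sqft + air_overhead_sqft)
tank_density = tank_kw / tank_footprint_sqft

print(f"Air-cooled: {air_density:.2f} kW per sq ft")
print(f"Immersion:  {tank_density:.2f} kW per sq ft")
print(f"Density ratio: {tank_density / air_density:.1f}x")  # roughly the 'tenfold' claim
```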

Raymond Hawkins: So yes, if I were a completely immersion data center, I could do away with all air handling. As I think through that tank now laying down and holding a hundred kilowatts of IT load: in a normal, air-cooled data center, I have hot aisles and cold aisles and a certain pitch; I need a certain distance between the front and the back, and I’ve got to manage all that air. Do you need any distance between the tanks? Can they line up next to each other? I can’t think of thermals impacting the air, so I guess you could stack them right next to each other, end to end. Is that practical, other than for physical access?

Daniel Pope: That’s correct, Raymond. So, what we typically do is deploy them back to back and side to side, which means that you end up with, let’s say, islands of tanks. The tanks dissipate less than 1% of the IT load into the actual data hall, so air renewal is very, very basic. That means we’re capturing essentially 99-plus percent of the heat that the IT equipment is releasing, and transporting it in that warm fluid, and subsequently in the water loop.

Raymond Hawkins: So Daniel, what does the tank fluid renewal look like? Is the tank set up and it’s good, or are you having to pump fluid in from somewhere, or exchange the fluid, or is it all self-contained in the tank? Do you mind talking a little bit about that? When I think of liquid cooling, I know I’m running a chilled water loop, and that’s a totally different solution; there, the water or the fluid is moving. What’s happening inside the tank with that fluid as it warms up or cools down?

Daniel Pope: So, in the design that we have here at Submer, the tanks don’t have a single hole in them, which really guarantees that they’re leak-free, and makes them very easy to manufacture as well. The immersion fluid that sits in those tanks is just gently pushed through the IT equipment. The speed at which the fluid is pushed through the equipment is controlled by a cooling distribution unit, a CDU, that sits inside the immersion fluid. It has a server form factor, and it sits inside the tank like another server, essentially.

Daniel Pope: What that device does is make sure that the fluid is constantly moving. It also does the heat transfer between the immersion fluid and the water loop. So, the CDU has two quick-disconnect hoses that come from the water loop to deliver the heat from the dielectric fluid to the warm water loop.

Daniel Pope: The dielectric fluid does not evaporate. It’s surprising. It’s a fluid that doesn’t evaporate, bacteria can’t grow in it, it’s non-toxic, and it’s biodegradable. You can drink the stuff, although it doesn’t taste good. We have not worked on the flavor of it, but it is super safe. If it gets in your eyes or your mouth, it’s absolutely okay. There’s zero risk when it comes to that. And it’s not a fluid that needs to be topped up. It’s designed to be truly part of the infrastructure, part of the cooling infrastructure.

Raymond Hawkins: Wow. Okay. So, the fluid doesn’t evaporate, it’s not dangerous, and it is, I guess, absorbing the heat from the servers, then going back to your CDU and swapping that heat out with a water loop? Is that what I heard? Did I understand that right, Daniel?

Daniel Pope: That’s right. So, we capture the hot fluid at the top of the tank through some channels that we have. That fluid goes into the CDU, the heat gets exchanged to the water loop, and then we re-inject the cooler fluid into the bottom of the tank, in an area that we call the fluid distributor, which evenly distributes the fluid across the lower end of the tank so that the process can commence again and again. Maybe something I didn’t mention: the fluid has an expected lifespan of 15 years.
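
As a rough illustration of the heat transfer Daniel describes, the sketch below estimates the water flow a tank’s water loop would need to carry away 100 kW at steady state. The temperature rise across the CDU is an assumed value for illustration, not a figure from the conversation:

```python
# Rough energy balance for the warm-water loop on a 100 kW immersion tank.
# Steady state: q = m_dot * c_p * dT, with illustrative assumptions.

heat_load_kw = 100.0   # IT heat captured by the dielectric fluid (~99%+ per above)
cp_water = 4.186       # specific heat of water, kJ/(kg*K)
delta_t = 10.0         # assumed water temperature rise across the CDU, K

mass_flow = heat_load_kw / (cp_water * delta_t)   # kg/s
litres_per_min = mass_flow * 60                   # ~1 kg per litre for water

print(f"Required water flow: {mass_flow:.2f} kg/s (~{litres_per_min:.0f} L/min)")
```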

Raymond Hawkins: Oh wow.

Daniel Pope: So, it truly is a piece of infrastructure.

Raymond Hawkins: Yeah, it’s going to outlive your servers. So, it’s got plenty of shelf life.

Daniel Pope: We really refer to it as a future-proof system. Maybe today a hundred kilowatts is a bit too much for some of the IT systems that you’re rolling out, but you’re investing in a piece of infrastructure that in 15 years’ time will still be able to dissipate a hundred kilowatts from a rack. So, if it’s not today, it will be tomorrow.

Raymond Hawkins: All right, Daniel, so this is a really practical question. I’ve got a server, it’s sitting in a tank, it’s running. Things go wrong with servers; it happens all the time. The fact that we have no spinning components helps, but still, things break, right? We don’t do spinning disks and we don’t do fans, two spinning components that break a lot in servers. So, I think you might actually improve my mean time to failure by taking the spinning fans out. But I’m still going to have something break on the server. What happens when a technician comes in and his server is covered in this fluid? How do you service a machine? What does a technician do? How do data center technicians need to be trained? Because this is a totally different paradigm, thinking about the server being inside a fluid now.

Daniel Pope: Yeah. So, typically we train data center personnel in a half-day training session to get them up to speed, to be able to do the same tasks in immersion that they do in traditional racks. It’s not a two-week course or anything like that. And the process is quite simple. You just need the right tools. You’ll be wearing gloves and protective goggles, just to make sure that if the fluid does go into your eyes, you don’t get scared and drop the server or something like that.

Daniel Pope: Essentially, we have maintenance rails lying on top of the tank that you can move along, depending on where you want to pull a server out. Then, depending on the weight of the server, you’ll either pull it out manually or use a server lift, and you lay it on top of these maintenance rails, where you can replace whatever component you need to replace. Then you put the server back in. So, you’re not taking it away from the rack, or the tank in this case. The maintenance task is done immediately on top of the tank, so that any dripping liquid just falls back into the tank, and you can run that process in a very clean and tidy manner.

Raymond Hawkins: Daniel, I’ve got to ask a silly question: do I have to pull it out of the tank and let it sit for an hour to dry? Or can I work on it? I mean, if it’s running with liquid on it, I can work on it with liquid on it. It doesn’t have to be perfectly dry, right? I know that’s a silly question, but as I think through it, do I have a wait time?

Daniel Pope: So, as I mentioned, the fluid is quite surprising, because we’re all used to seeing fluids dry, evaporate, and essentially disappear. But if you were to leave a server that you’ve extracted outside of the tank for a whole year, the fluid would still be in it, or on it. It truly does not evaporate. So, you pull it out and you immediately run the maintenance on that node, even with the components all soaked in the dielectric liquid. Although, I guess, we’re not used to seeing electrical components looking like they’re in water.

Raymond Hawkins: Wet.

Daniel Pope: Wet, essentially, but it’s non-conductive; it’s eight times less conductive than air. And that’s kind of the most surprising initial experience that operators have when they run through this exercise.

Raymond Hawkins: Yeah. It’s got to be a little bit of a mind-meld to go, “Wait a minute, there’s liquid on my server and it’s okay.” I’m assuming you get over it fairly quickly, but it just seems to the mind like it’s not the way it’s supposed to be. But yeah, if it runs in liquid, it ought to be maintainable in liquid. That makes complete sense. And so you don’t need to turn it on its side and let all the fluid run out, none of that. You can just work on it and slide it right back in.

Daniel Pope: It’s a fluid which is super innocuous to the IT components. It’s protecting them from dust particles and from electrically charged particles. So, going back to the mean time between failures that you were referring to before: first, there are no moving parts in the system, so that already is a humongous improvement. But then, because you have better thermals, the components in general are cooled much better. And there’s no variance between the front and the back of a rack, or the bottom and the top of a rack. It’s all really identical across the tank. Add to that the fact that there are no dust particles and no electrically charged particles being blown aggressively through the server plane. What you see in immersion is a two-thirds drop, so a 60% drop, let’s say, in hardware failure rate compared to traditional deployments. We have customers that don’t touch their immersion tanks in a whole year. It’s quite common.
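
To put the quoted two-thirds reduction in perspective, here is a small sketch with an assumed baseline failure rate. The 4% annualized rate is illustrative, not a figure from the conversation:

```python
# Effect of a ~two-thirds drop in hardware failure rate on a fleet.
# The baseline annualized failure rate (AFR) is an illustrative assumption.

servers = 1000
baseline_afr = 0.04                       # assumed: 4% of servers fail per year in air
immersion_afr = baseline_afr * (1 - 2/3)  # the two-thirds reduction quoted above

print(f"Air-cooled: ~{servers * baseline_afr:.0f} failures/year")
print(f"Immersion:  ~{servers * immersion_afr:.0f} failures/year")
```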

Raymond Hawkins: So Daniel, what does the typical install look like? Is it: I’ve got some very intense workloads and I need a few tanks in my data center? Is it an edge deployment where I need cooling and I don’t want to set up all the infrastructure for cooling? Or is it a full-blown data center where, instead of air, I’m just doing a whole room full of tanks? What’s typical for you guys, or does it span all of those?

Daniel Pope: It does span all of those, and it has moved a lot into edge. If you go on our website, you’ll see that we have a specific product for edge called MicroPod, which is an immersion tank with an integrated dry cooler on its side, designed as a ruggedized device that can be placed directly outdoors. We have a lot of customers in the telco space that leverage it for base stations and edge infrastructure, but also customers in the Industry 4.0 space that deploy these compute racks on the factory floor to manage their robotic platforms and things like that. So, it’s edge infrastructure where either you need to protect the IT equipment from a harsh environment, or you don’t want to build the whole infrastructure.

Daniel Pope: On the other side of the spectrum, our most common deployment is the SmartPod platform, the bigger 45U immersion tank, tens of them in a data hall. We don’t believe that data halls, or data centers, let’s say, will be a hundred percent immersion. But a lot of our customers today are building for a scenario of 80, 90% in immersion and 10% in air. Obviously, there’s always going to be lower-density equipment where there’s no justification to put it into immersion.

Daniel Pope: So, they’ll split the data hall. They’ll have a small area which is cooled by air, where they have their routing equipment and their old legacy systems, AS/400s, you name it. And then they’ll try to build a hyper-converged type of infrastructure, where they can just replicate this tank design, which has a lot of hyper-converged compute and some high-speed networking equipment, and replicate that building block N number of times.

Raymond Hawkins: So, I’m going to ask a weird technical question. In that hybrid environment where I’ve got some legacy equipment, could I take servers and put them in a tank, and run a disk subsystem of spinning drives next to it and connect to those servers? Is that doable? Is there a backplane or a way to connect the tank to traditional spinning disks that aren’t submerged?

Daniel Pope: Yeah, so the tank is designed as a rack, to the extent that we even have an area called the dry zone. If we’re using standard 19-inch equipment, we’ll deploy the power distribution units there, the typical zero-U rack PDUs, deployed horizontally on the side of the tank. We have customers that deploy the top-of-rack switch in the immersion tank as well, and customers that choose to deploy it in the dry zone. So, there’s a dry zone on each side of the tank that can be leveraged for this. And it’s also leveraged for cable management, getting cables in and out of the tank towards the standard rack infrastructure that the immersion tanks need to connect to.

Daniel Pope: So, for a lot of the customers, the uplinks go to the air-cooled portion of the data center, where they have their core distribution switches and Cisco routers and so on. And the immersion tanks are designed in a way that, when you put them one next to another and back to back, they have openings that allow you to run cabling between them and interconnect the tanks, and so on.

Raymond Hawkins: Well, Daniel, you’re sitting by the pool on a 35 C day, and you say, “I like it here. My servers might like it here.” So, I get how the inspiration came about: let me cool my servers in an efficient way in the data center. But as I think about where our industry is headed, and the talk about the data center industry being responsible about its power consumption as the world continues to digitize: what percentage of the planet’s energy do we use to power all these servers, and how much of that is cooling? I can see a massive advantage from a power consumption perspective in submerging your servers. Could you take a little bit of time and tell us how you see this from a global environmental perspective? How can submerging servers change what’s outside the data center, not just what’s inside the rack?

Daniel Pope: Absolutely, yes. The first thing is actually floor space. We’re talking about a reduction typically to the range of one-tenth of the floor space that’s required. That’s the level of density that we tend to see in the new designs. So that’s the first, I guess, humongous benefit when it comes to how many millions of square feet we need for these data centers. And that’s also because there are a lot of these components that I mentioned, inside the data hall or around the data hall, that we don’t need anymore.

Daniel Pope: When it comes to the infrastructure that sits outside: immersion cooling will typically reduce the PUE, as I mentioned, to something in the range of 1.03 to 1.05. Approximately, that’s where it tends to be. That means you’re slashing the typical data center PUEs that are out there by 60, 70, 80%. So, that’s the immediate benefit of deploying immersion cooling. Plus, you have to consider this humongous reduction in the power consumption on the IT side of things.

Daniel Pope: So, as you transfer infrastructure into immersion, with all the cooling capacity that you’re freeing up and the fan capacity that has been made available by removing the fans, you end up with a much bigger critical, available IT load versus the cooling infrastructure. What we think is really exciting, apart from the PUE, of course, is that all this energy is now captured in a warm water system. That warm water system today is operating at probably something in the range of a hundred Fahrenheit, and we’re working towards making sure that it operates in the range of 120 or 130 Fahrenheit. That’s where we are today, and we’re on the journey of getting that up to 160, 170 Fahrenheit.
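
The combined effect of the PUE drop and the fan removal is easy to miss, because the fan power hides inside the “IT load.” Here is a hedged sketch, assuming a 1 MW air-cooled facility, a legacy PUE of 1.5, and a 10% fan share (within the 7 to 20% range Daniel quotes); all three figures are illustrative assumptions:

```python
# Facility power comparison: legacy air cooling vs. immersion.
# Assumptions (illustrative): 1 MW of air-cooled "IT" load, legacy PUE 1.5,
# immersion PUE 1.05, fans consuming 10% of the air-cooled IT load.

it_load_air_kw = 1000.0
legacy_pue = 1.5
immersion_pue = 1.05
fan_fraction = 0.10   # within the quoted 7-20% range

facility_air = it_load_air_kw * legacy_pue

# Removing fans shrinks the IT load itself before the PUE multiplier applies.
it_load_immersion = it_load_air_kw * (1 - fan_fraction)
facility_immersion = it_load_immersion * immersion_pue

print(f"Air-cooled facility: {facility_air:.0f} kW")
print(f"Immersion facility:  {facility_immersion:.0f} kW")
print(f"Total saving:        {1 - facility_immersion / facility_air:.0%}")
```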

Daniel Pope: And when you have water in that 160-to-170-degree range, you can do some very exciting things, like deliver it to a district heating system, which is quite common here in Europe. We’re seeing more and more of that happening. You can also enter into symbiotic relationships with your facility’s neighbors, supplying them with energy through this warm water, transferring it to business parks or industrial parks. We believe that in future data center site selection, the primary criterion will be the energy monetization rate, or factor. People will start selecting sites based on a new capability of their data center, which is just going to destroy all the TCO models that are out there, that everyone is designing against today.

Raymond Hawkins: It’ll stop being “How much do I pay per kilowatt?” and become “How much can I sell my thermal output for per kilowatt?” Turning that whole equation on its head.

Daniel Pope: Today, it’s megawatts, hundreds and thousands of megawatts, that are just getting released into the ambient air. That’s something that has the potential to be monetized today. Here in Europe, there are some really aggressive policies to push data centers in that direction and really start thinking about these types of implementations. The technology to do that is now available, in a temperature range that is directly ready to be plugged into the new district heating systems that are being built. So, we believe it’s a super exciting time for the data center industry, and it’s an opportunity to transition from being a burden on society and your neighbors to being an actual benefit, really allowing the data center industry to be seen in a completely different way, as a power and energy supplier to the community.
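
As a final illustration of the monetization idea, here is a sketch of the heat a modest immersion facility could offer a district heating network. The heat tariff below is purely hypothetical, for illustration only; real prices vary widely:

```python
# Illustrative heat-reuse arithmetic for district heating.
# The heat price is hypothetical; actual tariffs vary widely by market.

it_load_mw = 1.0
capture_fraction = 0.99        # share of IT heat captured in the water loop (per above)
hours_per_year = 8760
heat_price_eur_per_mwh = 30.0  # hypothetical district-heating tariff

heat_mwh_per_year = it_load_mw * capture_fraction * hours_per_year
revenue_eur = heat_mwh_per_year * heat_price_eur_per_mwh

print(f"Recoverable heat:   {heat_mwh_per_year:,.0f} MWh/year")
print(f"Illustrative value: EUR {revenue_eur:,.0f}/year")
```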

Raymond Hawkins: All right. So, the future is bright and it’s submerged. How about that, Daniel?

Daniel Pope: Absolutely.

Raymond Hawkins: So, let’s close this up: born in Zurich, grew up in London, living in Barcelona. Which football club do you support? I mean, this has got to be a challenge for you.

Daniel Pope: That’s a no-brainer, it’s obviously FC Barcelona. Yes, absolutely.

Raymond Hawkins: Okay. All right. Very good. Well, Daniel, this has been great. I’m super fascinated. It’s still hard for me to wrap my head around components that are wet, but I’m glad that you guys have figured out a way to do it. I’d love to see where the future goes with immersion and how much it changes the data center industry, as we think about how we’re burning up megawatts all over the planet and how we do it in a more environmentally friendly way. It’s been super great to have you. We’re really grateful that you spent the time with us, and we look forward to seeing where things go for you and Submer.

Daniel Pope: Thank you, Raymond. Always a pleasure. Stay safe.

Raymond Hawkins: Thank you again. Take care, bud.