Data Analytics Across Industries



Speedata’s Jonathan Friedmann discusses his Analytics Processing Unit for optimizing big data and analytics workloads.

Raymond Hawkins: Welcome to another edition of Not Your Father’s Data Center. I am your host, Raymond Hawkins, and today we are joined by the CEO and founder of Speedata, Jonathan Friedmann. Jonathan, how are you today, my friend?

Jonathan Friedmann: I’m doing very well. How are you doing today?

Raymond Hawkins: I’m good. So to set everybody’s expectations, I, like always, am in Dallas, Texas, here at our headquarters. Jonathan, where are you joining us from?

Jonathan Friedmann: I’m located in Israel in Netanya not far away from Tel Aviv.

Raymond Hawkins: All right. Netanya just north of Tel Aviv, is that right?

Jonathan Friedmann: Yeah, that’s correct.

Raymond Hawkins: All right. Well, for those of you who don’t know, our friends at Speedata are in the processor business. We can do a little bit of homework and a little bit of setup around computers and how they work. But before we get into that, Jonathan, why don’t you tell us a little bit about yourself? Where’d you grow up? Where were you born? Where were you into school? Give us a little bit of the history of Jonathan Friedmann and how you got into deciding to found your own processor business.

Jonathan Friedmann: Sure. First of all, let me thank you for having me on your podcast. It’s great to be here.

Raymond Hawkins: Glad to have you.

Jonathan Friedmann: Looking back, I grew up in Israel, like a standard Israeli kid. I will say that my father is a professor of law, and as such, he went on sabbatical every three to four years. So I had the privilege of seeing the world as a child. I’ve been in England, a couple of times in the US, and also in France, actually visiting multiple places. So I had a great journey as a kid. After that, I studied electrical engineering in Israel, through my bachelor’s, master’s, and PhD. I’m a mathematical, logical person, so I dug deeply into that. And then I came out to the Israeli high tech world, where right away I got hooked by the semiconductor business, and I’ve been doing semiconductors all my professional life.

Raymond Hawkins: So we can call you Dr. Friedmann if we want for the rest of the show. Is that right? [inaudible 00:02:36] We could. Okay. We’ll stick with Jonathan. All right. You said you got multiple trips around the world and got to visit lots of places. You said you went to the US a couple times. Give us a couple US highlights or lowlights, places in the US you either loved or are glad you don’t have to live in.

Jonathan Friedmann: So my father was a visiting professor at Harvard, so I was in Boston for a year. I was quite small, seven years old. My deepest memory from that time is just freezing, coming back from school, hardly being able to lace my shoes. So that was a pretty bad memory from Boston.

Raymond Hawkins: Yeah, Boston winters are not like Israeli winters, that’s for sure.

Jonathan Friedmann: And the second time, I was already a sophomore in high school. I’ll tell you two things about that. First of all, I was already a Philly fan then, and that was the last year Dr. J played, so I got to see him. That was really a great year for me. And actually, my mother was working at a hospital where Charles Barkley came in with some sort of injury, and I got to meet him. I don’t know, his arm was the size of my whole body. That was really one of the greatest things I remember from being a sophomore in Philadelphia.

Raymond Hawkins: All right, Jonathan, we are going to go totally off script here, because people who listen to our podcast regularly know that I enjoy sports a good bit. And you’ve brought up two things. First of all, you don’t often get a Julius Erving reference while talking about the microprocessor business. Number one. Number two, Charles Barkley and I went to school together, so I am going to grab my camera and see if we can get this, and I know they’re going to yell at me when they produce this. Can you see that poster up there on the wall? That’s a signed Charles Barkley poster in my office.

Jonathan Friedmann: Wow.

Raymond Hawkins: He and I went to school together, so I’m going to get yelled at for moving that camera, but hey, it’s my show.

Jonathan Friedmann: [inaudible 00:04:57] bad when he left Philly, the 76ers. I could not take it at that time.

Raymond Hawkins: Yeah. For those of us that remember Chuck, Chuck was a 76er. That’s who he was in the NBA, the whole Arizona thing aside. I just identify him as a forward for the 76ers. And you’re right, Chuck is not only a bigger than life personality, he’s just physically a big guy. So great to have references to my friend Charles Barkley on a show about processors. Good stuff. Well, great to hear about your time here in the States and your visits while your dad was on sabbatical. Pretty interesting stuff. All right, so you get an electrical engineering degree and you dive in. For those of us in the US, we probably don’t have the appropriate appreciation for how big the tech business is in Israel. We think of Silicon Valley out here in the Bay Area, but there’s a very similar gravitational pull there in Israel around technology. Is that correct?

Jonathan Friedmann: Yeah, definitely. I would say that when you look into Israel, which is a much smaller country than the US, the number of engineers in high tech per capita is much more concentrated and intensive, and it’s also something that Israel is so proud of. It’s constantly in the news, everybody talks about it, and there’s the Jewish mother: it used to be they wanted their kids to be doctors and lawyers, and now it’s high tech.

Raymond Hawkins: Doctors, lawyers, and now electrical engineers. Got it. All right, very good. All right, so you get the degree in electrical engineering and you get into the tech business. Do you start right off in the processor business, or did you do other things?

Jonathan Friedmann: So actually my PhD’s in signal processing, from back when it used to be a very important part of electrical engineering. It still is, but most of the innovation has shifted away from that area. If you look at Israeli high tech, the first decade of the 2000s was all about communications, and that’s what I was doing in that first decade. There was a lot of innovation around communication. Together with one of our co-founders, our chairman Dan Charash, I was part of a company called [inaudible 00:07:32], which developed not only a communication SoC, a communication chip, but also an accelerator, a high speed modem for cellular infrastructure. The company became a global leader in its market, with seven to nine of the 10 biggest OEMs. Even today, I think one third of the world’s cellular users are going through [inaudible 00:08:03] chips.

We were finally acquired by [inaudible 00:08:07] in 2011, in one of Israel’s largest acquisitions at the time. So that part of my life was communication, accelerating communications. After that, I started looking into other workloads to accelerate and what additional things could be done, and along with the whole Israeli high tech, I made the switch to processors. I’ve been doing processors for the last eight or nine years of my professional life.

Raymond Hawkins: So I was guessing that after the sale, you and Dan spent a little bit of time in the south of France counting your money and then decided, hey, let’s get back in and make a living. Very good, good stuff. All right, awesome. So now we’re in the processor business. Before we get too deep into it, I do want to set the stage a little bit for our listeners, especially as we talk about what you guys build specifically. So when we think about a computer, there are a few devices. There’s input and output, your keyboard, your monitor, your printer, the IO devices. There’s memory, where your computer works on stuff immediately in its purview, the active space where an application runs. There’s storage, where you write your information after you’re done working on it. And then there’s a processor that talks to all of those devices and coordinates between them.

There’s some other stuff, but at a high level, that’s what we’re doing. We’ve got IO, we’ve got storage, and we’ve got memory, and playing quarterback for all of that is a central processing unit, which I think most of us are comfortable with. As computers have grown and advanced, there are all kinds of extra layers of microprocessors that do things to help the CPU, and specifically to do parts of the tasks that the CPU might take more time to do. The CPU is more of a general processing function, and there are processors that are built to do very specific tasks. That’s what we’re here to talk about today: not the central processing unit, but processors that do very specific workloads, often referred to as accelerators. Is that a fairly useful high level description of what you guys do and how you fit in the picture, Jonathan?

Jonathan Friedmann: Yes. Definitely. Let me just add a few more things around that. So your description actually was a very good description of computers, but mainly around home computers and PCs.

Raymond Hawkins: Yeah, personal. Yes, absolutely.

Jonathan Friedmann: I’ll add to that. Everybody’s talking today about the growth of data and the cliche that data is the new oil, and really data is growing at a huge rate. When you look at what commercial applications are trying to do, instead of having a personal computer at home, they would have what’s typically called a server, which is essentially very similar to a home PC, maybe a little bit stronger in its capabilities. And when you look at what happened in maybe the last decade, there are two extremely important trends, which everybody’s familiar with, at least in the tech business. On the one hand, the growth of data, actually the explosion of data: data today is approximately doubling every two years, really exponential growth. And on the other hand, you have general purpose processors progressing, but not at the rate they used to.

So there used to be something called Moore’s Law, under which processors, let’s say in common language, would be faster, give better performance, 2x the performance every year and a half. You can argue about what exactly the state of Moore’s Law is, but that definitely does not happen today. I would approximate that general purpose processors are improving by approximately 5% every year now. So these two trends cause a huge gap, which pushed the whole industry to one solution after another. First the processor companies built what’s called a multi-core solution. Instead of having a single processor, they would put multiple cores, essentially multiple CPUs, on a single chip, and that boosts performance. And the next step was what’s today called data centers. Suddenly you have a big problem, typically not a problem you would have at home, but a problem that a company wants to solve. They want to extract information from data, and maybe we can give a few examples for that later.
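The gap Jonathan describes can be sketched in a few lines of Python, using the figures he quotes (data doubling every two years, general purpose CPUs improving roughly 5% a year); the ten-year horizon is just an illustration:

```python
# Illustration of the widening gap: data volume doubling every
# two years (~41%/year compounded) vs. general-purpose CPU
# performance improving roughly 5%/year, per the figures above.
data = cpu = 1.0
for year in range(10):
    data *= 2 ** 0.5   # doubling every two years
    cpu *= 1.05        # ~5% improvement per year

print(round(data / cpu, 1))  # after a decade the gap is ~19.6x
```

After a decade of compounding, demand has outrun single-processor improvement by roughly a factor of twenty, which is why the industry reached for multi-core and then clusters.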

And then suddenly a single server, a single chip, takes too long to process and solve that problem. So instead the next step is, okay, why take a single chip? Let’s take two of them, let’s take four of them, and connect them together so they can communicate with one another. If the problem can be parallelized in some way, you can hopefully get, with four processors, close to four times the performance. And these are essentially data centers. Today I would say there are probably hundreds or thousands of companies which are using clusters of hundreds or thousands of nodes, where a node is a server, to process the problems they want to solve. So just adding to your description: today we are talking not only about what you described as a single server, but on top of that a full system, which is called a data center, with multiple servers talking to one another and communicating with one another in order to solve a problem.
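The scaling Jonathan describes, near-linear speedup when a problem parallelizes well, is the classic Amdahl’s Law picture; a minimal sketch:

```python
# Amdahl's law: speedup from running the parallelizable fraction p
# of a workload on n nodes, while the rest stays serial.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A perfectly parallel job on 4 nodes approaches 4x...
print(round(speedup(1.0, 4), 2))   # 4.0
# ...but even 10% serial work caps 4 nodes well below that.
print(round(speedup(0.9, 4), 2))   # 3.08
```

This is why he hedges with "if the problem can be parallelized": the serial fraction, not the node count, sets the ceiling.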

Raymond Hawkins: Yeah. Jonathan, I appreciate you expanding, because yes, without a doubt, trying to get folks to get their arms around a single computer and how that extrapolates out into servers and nodes, and then whole arrays of compute that are solving large complex problems and taking up buildings’ worth of compute. Can I get us to take one quick detour? I love that you referenced Moore’s Law. As a guy who got to grow up in technology, I started getting paid to work in the tech business in the ’80s, as painful as it is for me to admit that now; hard to imagine that was four decades ago. And I lived through watching Moore’s Law deliver additional compute every 12 to 18 months. As an electrical engineer, you’re going to understand this so much better than I do. Why do you think that we… Because I agree with you, Moore’s Law doesn’t really apply anymore.

We’re not seeing that. We used to talk about that drive, that tech refresh: because processors had taken such a leap in a two year period, everybody needed to do a refresh. What’s caused that reality in the processor business, that Moore’s Law no longer applies? Why don’t we make those 100% capability improvements every 12, 18, 24 months? What’s changed?

Jonathan Friedmann: So there are two main drivers for Moore’s Law. One relates to the process, mainly how we manufacture the chips, and one relates to architecture improvements. The first one, which is the dominant one: as time went by, the transistors, which are the building blocks of the chips, became smaller and smaller, and essentially we were able to work faster, at a higher frequency, from one generation to another. That by itself, without making the processor any better, gave a lot of performance boost. We are now approaching the limits. First of all, moving from one process to another has become technically very, very complex. Transistors are already made of very few atoms, and making them smaller becomes a very complex thing to do. And furthermore, now that we have gone up to such high frequencies, there are multiple other things that do not allow the frequency to go higher.

So essentially there is hardly any growth in frequency anymore. And on the architecture side, general purpose processors have been developed for three or four decades now, and I’d say all the low hanging fruit of improving processors has already been picked. It’s not clear how much progress can be made. And in some sense it’s not that important for the moment, because it is clear that nothing coming through that channel can compete with the growth of data. And it’s not just the growth of data: today humanity is capable of extracting information from data in really fantastic ways, very different from what we used to be able to do. So there is a huge need for processing that data. Let me give you just an example: one of our customers is a global ad tech company.

And they actually tell us that they can draw a line to their revenue, which is essentially built on how well they can connect users and the ads they want to serve, how well they can make that correlation between the right user and the right ad, and the more processing power they have, the better. Actually, they tell us that every time they double the processing power, they can correlate it directly to revenue, in the ballpark of 20 to 25% additional revenue. So of course that makes sense only if you can double your compute without paying more than 20…

Raymond Hawkins: 5% of them, right. [inaudible 00:19:39]

Jonathan Friedmann: But this is just an example from a revenue point of view. Of course, looking into what’s happening in the whole health industry, the ability to analyze DNA in ways which were not possible just a few years ago can really do tremendous things. And we are still so far away from being able to do all the processing that we want. So there’s a huge hunger today for processing power.
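The ad tech economics Jonathan sketches can be written out explicitly; the numbers below are illustrative placeholders, not the customer’s actual figures:

```python
# Back-of-the-envelope version of the ad tech example above:
# doubling compute pays off only while the revenue uplift exceeds
# the added compute cost. All numbers are made up for illustration.
def doubling_pays_off(revenue: float, uplift: float,
                      added_compute_cost: float) -> bool:
    """uplift is the fractional revenue gain from doubling compute,
    ~0.20-0.25 in the example quoted in the conversation."""
    return revenue * uplift > added_compute_cost

print(doubling_pays_off(100.0, 0.20, 15.0))  # True: +20 revenue vs +15 cost
print(doubling_pays_off(100.0, 0.20, 30.0))  # False: +20 revenue vs +30 cost
```

In other words, the cheaper each doubling of compute gets, the longer the "double compute, gain 20 to 25% revenue" loop stays profitable.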

Raymond Hawkins: So Jonathan, let me see if I can say, in layperson, non-electrical-engineer-PhD speak, what I think is our setup. We did this simple description of a personal computer. We extrapolate that out to a server, which is the same basic construct but with multiple processors and more capabilities. But we’ve run the processor as fast as we can. Everybody thinks about, to date myself, megahertz, now gigahertz: clock speed on the processor. We’ve run that as fast as we can. We’ve made the transistors as small as we can make them; there’s minimal gain now to be had from making them smaller. We’ve crammed as many cores as we can onto a single chip, so we’ve got multi-core processors now. So we’ve done all the things we can do in the CPU portion of the compute world. And now what we’re trying to do is, hey, how do we get more compute power?

Hey, let’s do processors that are doing specific tasks to help the CPU, and that’s how we get to this accelerator world, the world Speedata is in. As in the customer example you just gave: if I can get greater compute power, not necessarily more out of my central processing unit, to match up my ads with the appropriate customer, that’s a better fit. The better the fit, the more my customer will pay; the greater the compute power, the better the fit I can make, and the more revenue I earn. So there’s a real business reason for being able to compute more efficiently and faster on specific kinds of problems. And that leads us to this world of a whole other family of processors, which is the business you guys are in today. Correct?

Jonathan Friedmann: Exactly. When you look at general purpose processors, a lot of the [inaudible 00:22:03] and the power is spent on being general purpose. They have to bring in an instruction, they have to decode it, understand what it has to do. Then they have to configure the execution unit to do what the instruction has told it to do, and only then do they execute it. So when you look at it from a silicon or power point of view, I would say anywhere between 5% and possibly 10% goes to the execution itself. The rest of it is just for being generic. And if you are willing to work on a specific workload, you can gain a lot by saying: okay, I won’t be general purpose, I won’t let just any instruction come into me. I have specific things I know how to do, and these things I do extremely well. Then I can build an architecture which is very adapted to that specific workload, and I can do things which are essentially orders of magnitude better. When you look into [inaudible 00:23:21] Go ahead.

Raymond Hawkins: Sorry. Yeah, so you don’t need the overhead that the CPU carries to understand the instruction, to deconstruct it, to translate it, to put it in an execution mode. You take all of that overhead out and leave it to the general purpose processor. You just go, hey, as I like to think about it, an expert is somebody who knows more and more about less and less. So your processors are experts at specific requests.

Jonathan Friedmann: Exactly. I think a nice analogy is the difference between a chef and a cookie cutter. The chef can make anything you like, and he’s going to do it very well. But if you want to make a lot of cookies, and that’s the only thing you want to do, you do not want to put the chef on it, because he’s going to spend a lot of time per cookie. You’re going to take a cookie cutter, and that’s going to do the job substantially more efficiently than a chef.

Raymond Hawkins: All right. I’m stealing that, Jonathan. Chef and cookie cutter, that’s a good word picture. I like that. Yeah. Okay. All right. So we’re into cookie cutter processors at Speedata. That was a lot of setup. So tell us what you guys do, what your processors are specifically. When people think about specialty processors, I think they think about video accelerators, which are popular in the gaming space. They think about mathematical, what do we call those, math coprocessors? I think those are the ones people are familiar with. Tell us what Speedata is doing and how you fit in that stack of specialty processors.

Jonathan Friedmann: Let me first expand for a second on the example you just gave. Look at AI accelerators: essentially AI is multiplying floating point matrices, and doing that extremely efficiently. Relating to our cookie cutter, that’s what they do, and they do it extremely efficiently. Speedata is looking at another workload, which is arguably the biggest workload in the data center today, and that is databases and analytics. Essentially you have a database, multiple industries hold their information in databases, and then they want to extract information from it. You look at the public clouds, they offer multiple managed services to handle that. And if you look at the biggest and most important managed services in the world, you would find that they are all databases and analytics. I will mention a few of them: Redshift by AWS, which is probably the biggest managed service in the world today; BigQuery by Google, that’s an analytical tool.

SQL Server, which is not only a managed service but also an on-prem tool, is probably the biggest tool that Microsoft has. Oracle, their main business is databases [inaudible 00:26:34] Snowflake, Databricks, and all the big tech companies in the world today: that’s their biggest managed services, that’s what they do. So we looked into that workload and designed a chip from the ground up to target this specific workload, which today is completely dominated by CPUs. 99% of that workload is done by CPUs, very different from the revolution which happened in AI.
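The workload being described is the scan-filter-aggregate shape of analytical SQL; here is a toy example, with a made-up table, of the kind of query that engines like Redshift, BigQuery, or Spark SQL run at massive scale:

```python
import sqlite3

# A toy analytical query of the scan-filter-aggregate shape that
# the managed services named above run over huge datasets.
# The table name and values are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 250.0), ("west", 75.0)])
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales "
    "WHERE amount > 50 GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 350.0), ('west', 75.0)]
```

At data-center scale the same scan, filter, and group-by operators are what an analytics accelerator would offload from the CPU.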

Raymond Hawkins: So hold on a second, Jonathan. 99% of big data processing is being done by the general purpose CPU?

Jonathan Friedmann: So big data is a big word. I would say, look at what happened in the last five years, the AI revolution: five years ago, when AI was just beginning, it was done on CPUs. You look at AI in the data center today, it’s completely controlled by GPUs.

Raymond Hawkins: Gotcha.

Jonathan Friedmann: So they had the revolution there, the first wave of acceleration. And actually there is a big war there for the second wave of acceleration, between multiple companies who are trying to do AI acceleration on top of what the GPU offers. You look at what happens in analytics and databases: the first revolution has not happened yet.

Raymond Hawkins: Gotcha.

Jonathan Friedmann: Analytics is still completely controlled by CPUs, for multiple reasons. The main reason, I would say, is that the hardware which was on the table during the AI revolution, namely the GPU, is simply not a good fit for analytics. So the world is waiting for a grand solution for that.

Raymond Hawkins: So Jonathan, just to relate it back to names that people know. That GPU business, I think of Nvidia. Who else is in that space with names people would recognize?

Jonathan Friedmann: So [inaudible 00:28:45] also has a GPU, but their main focus in GPUs is graphics. So in AI GPUs, it’s basically Nvidia. There are two other candidates, AMD, and now Intel is also coming out with their own GPU, but Nvidia is the king of AI today in [inaudible 00:29:03]

Raymond Hawkins: Right. Right. And so as you talk about that first revolution, we think about that happening with Nvidia GPUs in the AI space, and your business is trying to capture that wave in the analytics space. Is that a good way to parallel it?

Jonathan Friedmann: Exactly. We would actually claim that this workload, so our opportunity, is as big as Nvidia’s opportunity. It’s actually-

Raymond Hawkins: Maybe bigger.

Jonathan Friedmann: Exactly. From a workload point of view, it’s bigger. Nvidia is doing multiple things, but I believe more than 50% of its revenues come from AI in the data center. So really we have a huge opportunity here to build a huge Israeli semiconductor company, and that’s a big dream.

Raymond Hawkins: That is a great setup, and I appreciate you taking the time to walk us through it, just understanding why Speedata matters, where it fits in the problem, and how the problem gets set up. This has been really helpful. We get asked a lot in the data center business, we build the buildings where all of this takes place, and we get asked a lot: oh, you’re not going to need any more buildings, computers are getting smaller and faster, why do you need to keep building buildings, aren’t you worried about the future of your industry? And I always give the exact same answer you did. I say, folks, I don’t think you realize the speed at which our data is growing. There are lots of studies and lots of numbers, but I think it’s safe to say all the data in the world doubles about every 24 months.

So that means that by the end of 2024, we’ll have twice as much data as we have today. There are just so many things causing people to write data. And the thing that we love to say here is, people don’t delete their ones and zeros. They want to keep that data, they want to look at that data, they want to replicate that data, they want copies of it so they can slice it and dice it and look at it. And it’s how you dig into that data: having the data’s not that interesting; it’s what the data tells you and what you can do by going and looking at it. And when I think about Speedata, that’s what you’re saying: hey, let me go dig into your data, let me go dig into that SQL and that database, and let’s find out what’s in there and what you can learn from it.

Jonathan Friedmann: Exactly. Data is really growing at a staggering rate. I actually met a company just a few weeks ago that generates synthetic data, again in order to enable better analysis. They’re synthetically generating data which used to take months to generate, and they now generate it in hours. So really, it’s not just people walking around and taking photographs or stuff like that; data is already generated by computers. I do not see that stopping, definitely not in the near future. And what Speedata is essentially doing is building the infrastructure. We like to say that we are the plumbers: we are just building the tools and making the pipes wider, allowing other very smart companies to extract information from the data they have, giving them the tools and the ability to do that.

Raymond Hawkins: Gotcha. Yeah, so Speedata’s job really is to be the infrastructure layer that applications sit on top of. You go, hey, I’m going to get this data to you and present it faster than you could get it if you had to go through that central processing unit. Now you have it, it’s available, and you can do things with it, because it’s here faster and consumable for your application in a way that’s useful.

Jonathan Friedmann: Exactly.

Raymond Hawkins: Fascinating. So does this business, Jonathan, end up… Because I watched the Nvidia business and the GPU business, and it started on a card. You mentioned, I don’t remember if it was earlier in this call or in another call we had, that a lot of what you deliver is actually on a card. That’s how the GPU business started: you just added that accelerator into your compute environment. That was the start, and now you can buy entire racks of GPUs, not just a card, but a whole system that is an array of GPUs. Is that where this is headed?

Jonathan Friedmann: Yes, definitely. So we are actually building cards, and these are standard cards with a standard interface. It’s called a PCIe interface, and they would fit in the vast majority of existing servers. So you can simply add them to either existing or new servers, and then to multiple racks of servers. And in that sense, going back to what you were talking about, we are actually fighting both power consumption and the growth in size of the data center. You would add our cards inside these racks and get between one and two orders of magnitude improvement in performance without paying in space, basically improving your performance per watt, performance per cost, or performance per space by an order of magnitude.

Raymond Hawkins: So that is one consideration I’d love for you to talk about a little bit. When we have racks of GPUs, they eat a bunch of energy, and in the data center business, how much electricity is in a rack, how many kilowatts we run through a certain rack, how much heat that rack produces, and how much heat we then have to reject to operate the data center all matter. Tell me from your perspective, with Speedata’s analytic processors, how are they on power consumption? I love the space savings, but how are they on power consumption?

Jonathan Friedmann: So our PCIe card would give anywhere between a multiple of 20 and a multiple of 100 in terms of performance per watt, in some sense similar to what Nvidia has done in AI compared to the CPU. When you look at CPUs, or any kind of processor, typically, not always, but typically, it’s not a big deal to get more performance if you double the power: okay, I’ll take two CPUs and get approximately double the performance, but also double the power. So there’s not much benefit in getting a lot of performance without improving the power efficiency. Again, as I mentioned earlier, we save probably 85% or 90% of what’s in a standard general purpose processor. Since we’re not doing all the activities the general purpose processor is doing, we’re saving huge amounts of power and are able to do the same thing with much less power and at much higher speeds.
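The performance-per-watt point can be made concrete with toy numbers (illustrative units, not Speedata’s measurements): doubling both throughput and power is a wash, while more throughput at the same power is a real gain:

```python
# Performance per watt: doubling both throughput and power is a
# wash; the gain that matters is throughput at the same power.
def perf_per_watt(throughput: float, watts: float) -> float:
    return throughput / watts

cpu = perf_per_watt(100.0, 100.0)       # baseline (illustrative units)
two_cpus = perf_per_watt(200.0, 200.0)  # 2x performance, 2x power
accel = perf_per_watt(2000.0, 100.0)    # 20x throughput at similar power

print(two_cpus / cpu)  # 1.0 -> no efficiency gain from a second CPU
print(accel / cpu)     # 20.0 -> the kind of multiple quoted above
```

The 20x case corresponds to the low end of the 20 to 100 range quoted in the conversation.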

Raymond Hawkins: And you gave the number, depending on the application, 20 to 100x the speed of the analysis running through an APU versus just through a general purpose CPU?

Jonathan Friedmann: Yes. We’re actually working with multiple customers, and it depends on the workload, what exactly you are doing, and actually on the data itself. So depending on the exact case, we’re anywhere between a multiple of 20 and 100.

Raymond Hawkins: Awesome. Well, Jonathan, this has been a super helpful understanding of what accelerators and special processors do. Give us, if you would, a few minutes on Speedata’s roadmap. You guys are a couple years old, if I remember right, and you’ve raised a good bit of funding. Where are you in the roadmap? Where are you headed? What does the future look like?

Jonathan Friedmann: Okay, so our company is three years old, and we are currently working with multiple big high tech companies in the world to make sure that we can accommodate all their requirements in our chip. We expect to have our chip within several quarters, and we’ll put it on a PCIe card and deliver it to the customers we’re currently working with.

Raymond Hawkins: So PCIe first. Do you see getting to the point where there are Speedata arrays, for lack of a better term, a whole solution built from a series of cards? Is that on the roadmap, as I think of an Nvidia box that does all kinds of acceleration?

Jonathan Friedmann: So I think we have multiple options. We have not decided where the path will lead us. At a first guess, we are already today working with multiple OEMs, and I do not see us, as a first step, making something like the DGX. Nvidia not only has the PCIe card, they have their own server. I don’t think we’ll do that in our first steps. But I can definitely see us doing software around our solution, looking into how we can help our customers and make their lives easier, not just giving them the processors themselves, but possibly software layers to make their lives easier in processing and extracting information from their data.

Raymond Hawkins: Gotcha. Well, Jonathan, this has been super enlightening for me. I really appreciate when PhDs in electrical engineering can make guys like me, who barely got out of college, understand it. So I appreciate you going slow for me and helping me follow along. Really fascinating hearing what Speedata is doing, what the technology industry in Israel is doing, and a little bit about your story. We wish you guys all the success in the world. For us at Compass, we just want more of this: more data, more people succeeding, more people solving problems, because that means we need more data centers, and that at least makes it so I can buy groceries next week. So we appreciate it.

Jonathan Friedmann: Okay, thank you very much. Thank you for having me. We’d love to have you here in Israel.

Raymond Hawkins: Jonathan, on my next trip to Israel, I’m coming to Netanya and coming to see you. So thank you for joining us. [inaudible 00:39:59]. We appreciate it.