Humanity has never been more reliant on the Internet and by association, data centers.
As the global pandemic pushed more users online, the world’s reliance on digital tools for remote working, learning and entertainment increased and the demand for cloud services grew in accordance. Data center construction projects worldwide have since been accelerated by cloud service providers and data center owner operators alike as they seek to build capacity at pace to meet customer needs.
In episode one of CyrusOne Connects, we are privileged to welcome Eric Schwartz, CEO and Brian Doricko, Vice President of Strategic Sales at CyrusOne, to the podcast.
Guided by our Host, Matthew Pullen, EVP & MD Europe at CyrusOne, Eric shares his perspective on the major opportunities and challenges facing the data center industry today. He delves into how continued demand has affected capacity, speed, and performance, as well as the impact of rising costs and inflation on the industry’s ability to meet demand.
Brian follows and leaves no stone unturned in a truly fascinating discussion about the evolution of data centers and networks to deliver against increased demand.
We hope you enjoy this thought-provoking discussion with two impressive minds, and thank you for listening. We’d love to hear your thoughts, so please don’t forget to like, share, comment and subscribe.
Visit the CyrusOne website.
Eric Schwartz, CEO, CyrusOne
Brian Doricko, SVP, Corporate Development, CyrusOne
Eric Schwartz (00:06):
The requirements to source power as efficiently and as cleanly as possible are clearly growing. There are new solutions coming to us all the time, whether that's increased investments in efficiency or renewables or the like. My expectation is that we'll continue to grow, certainly for the next five years and beyond, because power is fundamentally one of the core inputs into a successful and growing global economy.
Matt Pullen (00:40):
Hello, and welcome to the first episode of CyrusOne Connects, where we'll be discussing the industrialization of the internet and the increasing demand for cloud services across the globe. I'm your host, Matt Pullen, EVP and Managing Director Europe at CyrusOne. And joining me here today is Eric Schwartz, CEO, and Brian Doricko, Vice President of Strategic Sales, both at CyrusOne. In this episode, we'll discuss how continued demand has affected the data center industry thus far, as it relates to capacity, speed, and performance, as well as rising costs, inflation, and the general state of play across the industry right now. We'll then look at the key considerations resulting from these long-term changes, specifically how data centers and networks will evolve and what they need to do to deliver against demand. So I'm delighted to welcome Eric Schwartz, CEO of CyrusOne, to join us as our first guest on CyrusOne Connects. Welcome, Eric.
Eric Schwartz (01:42):
Thanks, Matt. I'm really happy to have the opportunity to participate in the series of podcasts that we're going to produce here at CyrusOne, and quite honored that you would select me to be your first interviewee.
Matt Pullen (01:57):
So firstly, you're relatively new to CyrusOne, but hardly new to the industry. Can you tell us a bit about your background and your rich history within the global data center sector?
Eric Schwartz (02:09):
My background, I've actually been in and out of the data center industry for quite some time, and it dates back to when I was three years old and my father was a graduate student working on punch cards in a very cold room and gave me the chance to tour the university data center where he was studying. Fast forward from there, I joined Equinix more than 16 years ago, where I worked in a variety of capacities, starting out in the headquarters on corporate development. I went to Europe, where I spent 11 years leading the development and growth of Equinix EMEA, which started out in just four countries and grew to more than 15 through development and acquisition, and then got very involved with the xScale program at Equinix, targeting sales to the hyperscalers. And now I've joined CyrusOne just a few months ago to be CEO, and am starting a new chapter. Very excited about the potential and opportunities here at CyrusOne.
CyrusOne has a great history and legacy of developing capacity for customers, operating that capacity at very high levels of performance and a set of customer relationships that I think is second to none, and that positions us very well to continue to expand the business.
Matt Pullen (03:37):
Fantastic. Thanks, Eric. So given your deep experience, can you talk us through your thoughts on what's driving the evolution and direction of the industry today, I guess a state of the union, if you will?
Eric Schwartz (03:51):
Sure. For many of our listeners, there are certainly many different sources of commentary about the data center industry. Driving its maturation and evolution, I think there are several key trends. First and foremost is the globalization of the digital economy. The leading players in our industry are now global. That's certainly the case for CyrusOne, as we operate in the United States and a number of countries across Europe, and have ambitions to go even more broadly than that. Next is that digital transformation as a world trend continues to press forward; it would be premature to say that we've completed what has been a massive transformation, akin to the industrial age, in moving from the economy of the 20th century to what is now the digital economy of the 21st century.
And then as this transformation has taken place and has changed how we shop, how we eat, how we're entertained, how businesses operate, more or less, most dimensions of our daily lives, the data center industry has become more central to the global economy, and with that comes both the responsibility and the obligation to ensure that the data center industry is operating productively within that global economy, given the substantial impact that our industry has.
Matt Pullen (05:18):
Thanks, Eric. That was really insightful. So it would be great for our listeners if we could just discuss each of these elements in a little more detail. So firstly, the rise of global players. Can you provide your thoughts on the impact that this is having on the industry?
Eric Schwartz (05:35):
As I'm sure our listeners are aware, as the global economy has become more digital, more IT focused, there's clearly been a rise of a class of companies that we call the hyperscalers, who operate digital infrastructure at a scale that is well beyond anything the world has ever seen. And so as those companies, some based in the US, some based in Europe, some based in Asia, continue to grow their capabilities, grow their offerings, and grow their businesses on a global basis, they are inherently building a global supply chain, of which data centers are one component. And so as these global customers develop and request global requirements, companies in the data center space have to be prepared to respond to that. The leading companies in the data center industry are going to continue to build their global capabilities to better serve not only the global hyperscale segment, but also companies who may have operations in a subset of global locations, or just need deep capability in one particular location, but want the confidence, the capability, and the capacity of a global player.
And so whereas global was a growth opportunity and an expansion of strategy for data center companies years ago, it's increasingly becoming an integral part of core strategy for the leading players, in my view.
Matt Pullen (07:16):
I mean that's so interesting, because of course, you then linked into the obvious comment that we are now talking about industrial transformation, but I think you basically highlighted that, in your view, there's a lot more to come. So, perhaps you can expand on this point.
Eric Schwartz (07:34):
To see that digital transformation still has quite a way to go, we can look at a number of different trends. First, as our listeners, and in fact the world, are aware, innovators and entrepreneurs now have access to capacity and resources on a scale that is beyond anything we've ever seen. So, a team of developers who can be anywhere in the world, they don't have to be part of a large technology company, can leverage the cloud, leverage the internet, leverage all of that infrastructure to build new capability, whether it's a new application for end-user consumers, whether it's solving a business problem for a small business or solving a business problem for a very large business. All of that capability and potential continues to be unleashed by the digital transformation that we're seeing. At the same time, technology is not holding still.
I expect that many of our listeners are now carrying 5G phones and are now seeing wireless data performance that is well ahead of anything that we've enjoyed historically. The growth of artificial intelligence has opened up new horizons of capability problems to be solved. It's actually become a geopolitical issue as well. But even thinking about the past several years where the world wrestled with the challenges of a global pandemic, one of the results of that is that we now see capabilities in telehealth and eHealth that go well beyond what was in place three, four years ago before the COVID pandemic. While we've seen quite a bit of progress and development, I believe quite strongly that the capabilities of those technologies are still growing, still expanding, still innovating, and behind that comes the need for the infrastructure that we and many other technology companies around the world are poised to deliver.
Matt Pullen (09:38):
I mean, that's fascinating, and what you're highlighting there effectively leads to the third point, which is that data centers and the digital economy are now key to modern life, which means the industry is very, very visible today in a way it hasn't been until this point, I would argue. As a result of that, how do you think the sector's having to react to being so visible?
Eric Schwartz (10:06):
Well, Matt, I think the sector is reacting on several dimensions. First and foremost, as a growing industry that is deploying substantial amounts of capital and infrastructure, the industry has to understand the political, social and economic surroundings that we operate in. And like other companies and industries, the demand for corporate responsibility, for sustainability, for proper governance applies to our industry as it does to many others. Now, our industry is unique in some ways, or at least distinctive in that we are very substantial consumers of power and we have the control and the position to advance a sustainability agenda and advance a broader economic responsibility agenda about the resources that we deploy to contribute to what is a broader movement across the economy to improve sustainability and to deliver a far broader environmental, social, and governance ESG agenda than has been expected in the past.
And with that focus and investment, I think the data center industry is making substantial progress, not only as an industry ourselves, but also in recognition of the fact that our customers are looking to deliver that proposition as part of their services and capabilities. And so they're requiring it of us. And by implication, we're also requiring it of our suppliers. And that value chain is continuing to deliver a lot of progress, but I think we have further to go.
Matt Pullen (11:54):
Thank you, Eric. Thanks for that. Thanks for giving us such a broad state of the nation relative to the industry. But we must talk about the big issues facing our sector today, especially supply chain and the impact of inflation. Are these short, medium, or long-term issues? And how do you see the industry reacting?
Eric Schwartz (12:14):
Our industry continues to grow, and growth results in increased capacity being delivered to our customers. To deliver that capacity requires a supply chain. And as we've seen over the past several years, when global supply chains were disrupted, the disruption in the global supply of equipment, of materials, and in some cases, specialized labor had a real impact on the ability of data center companies, including CyrusOne, to meet the requirements and commitments that they've made to customers. I think we're on the positive side of the disruption that we've seen over the past several years: supply chains have reconfigured, planning has gotten better, and we are planning further ahead into the future and forecasting requirements better as the pandemic recedes, at least in terms of the disruption it caused. The global supply chains that we and others depend on are functioning more smoothly and are getting back to a level of performance that we enjoyed prior to the pandemic.
However, we think of supply chains in terms of physical goods, whether that's generators or steel or raised floor, what have you. But capital is a critical input to the global supply chain that delivers data center capacity, and that's where the impact of inflation and the associated increase in interest rates, which is effectively the price of that capital, is certainly having an impact. As those costs rise, whether it's on the hard goods or on the capital side, the industry as a whole is going to have to incorporate those costs into the pricing structure. Customers, in general, understand those dynamics. They're experiencing something similar in terms of inflation on other elements of their supply chain, and they're experiencing the impacts of increased interest rates and inflation more broadly.
What I think it comes down to is the need for suppliers like ourselves who are sourcing through our supply chain, both our physical supply chain as well as our capital supply chain, to ensure that we do that in the most efficient way possible to meet the requirements and expectations of our customers. And that will result in pricing, and we've seen increases in pricing in various markets and for various specific offerings. And I think that will continue so long as we see the cost increases that we see. But as the supply chain hopefully stabilizes, that will take us to a more stable pricing environment over time.
Matt Pullen (14:54):
Thanks, Eric. I mean, it's tough to predict where things are going to go medium and long-term on these points. I guess we'll just see how it goes. I do know that customers are also facing challenges around scarcity of components. And so for them particularly, there's the whole question of trying to increase product life, so that we're not just reinventing product every few years in a commodity-scarce environment. But that was fascinating relative to the industry. So, that sort of begs the question really. And it's an interesting question to ask you, bearing in mind your experience and how many cycles you've seen this industry evolve through. But I'm going to ask you the question: everything looked very different five years ago to where it is today, so where do you think we'll be in five years' time?
Eric Schwartz (15:53):
Thanks, Matt for leaving the hardest question for last. I would say that looking into the future is clearly a challenge. At the same time, I do believe there are some clear trends emerging and playing out today that will continue that we can comfortably expect to see five years from now, and certainly further. First, I would say that as we've discussed the focus by the data center industry, by the suppliers to the data center industry, and by the customers of the data center industry on sustainability and efficiency, that will continue to grow. The requirements to source power as efficiently and as cleanly as possible are clearly growing. There are new solutions coming to us all the time, whether that's increased investments in efficiency or renewables or the like. And my expectation is that we'll continue to grow certainly for the next five years and beyond because power is fundamentally one of the core inputs into a successful and growing global economy.
Secondly, I would say that supply chains are going to become more sophisticated. That's going to require the expertise to manage the global supply chains that are the inputs to the data center industry and the capabilities that the industry delivers to customers. But I also think that we are going to more fundamentally question whether sourcing components on a broad scale, with widely diverse and dispersed supply chains, is the most effective way to deliver data center capacity, as the requirements of the industry continue to grow in scale. And I would expect that the supply chain five years from now will look materially different, materially more efficient, and better managed than it does today. And then finally, I would say we are already seeing, and I expect this to continue, that there will be tighter integration and cooperation between the data center industry and our customers.
We're seeing this with operations where we're sharing ever more detailed levels of information about operating performance, metrics within the facilities, metrics across the facilities and the like. I think the same thing is starting to happen on the construction side, and there will be more of that. And then I think there's the potential to even broaden that further. I mean, we are operating in the digital infrastructure space. We are all digital infrastructure technology companies, and we should be leveraging those capabilities to the maximum extent possible. And I think there's quite a bit of opportunity for that in the future. And five years from now, we'll be pleased with the progress, but also wonder why we hadn't done more of this sooner.
Matt Pullen (18:52):
I couldn't agree more. And I think this is all in the face of a scenario where demand is truly sustainable, which we haven't seen in previous evolutions. And on the basis that it is and continues to be sustainable, I think you're absolutely right. We'll see better performance from the utilities in terms of their ability to deliver. We'll see supply chain evolution, as you've said, and we'll also see enhanced cooperation between customers and those at the head of the supply chain. I couldn't agree more. So thank you, Eric. Thank you so much for giving us your time today. I'm sure our listeners will really appreciate your insights, but most of all, appreciate your incredibly rich industry experience and knowledge. So Eric, CEO of CyrusOne, thank you so much.
Eric Schwartz (19:45):
Thank you, Matt. It's a pleasure to be here and I certainly wish all of our listeners great success and look forward to the future installments of our podcast series.
Brian Doricko (20:13):
The speed of light is 186,000 miles a second; in fiber, it degrades about 30%, to something like 125,000 miles per second. So in one millisecond, a packet can go down a wire 125 miles. That opens up more options, and it's why the biggest companies in the world have put their data centers in the middle of nowhere: they know that math, and they know what Einstein said.
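Brian's back-of-the-envelope math can be sketched in a few lines. This is illustrative only, using the round numbers from the conversation (186,000 miles/s in vacuum, roughly 30% slower in fiber) rather than engineering constants:

```python
# A sketch of the fiber-latency arithmetic from the conversation.
# Figures are the speaker's round numbers, not precise physical constants.

SPEED_OF_LIGHT_MI_S = 186_000   # miles per second, in vacuum
FIBER_SLOWDOWN = 0.30           # roughly 30% degradation in fiber
FIBER_SPEED_MI_S = SPEED_OF_LIGHT_MI_S * (1 - FIBER_SLOWDOWN)

def one_way_latency_ms(distance_miles: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return distance_miles / FIBER_SPEED_MI_S * 1000

# A packet covers on the order of 125-130 miles per millisecond, which is
# why a data center hundreds of miles away can still feel "local".
print(round(one_way_latency_ms(125), 2))   # roughly 1 ms
```

The practical point is that a few milliseconds of budget buys hundreds of miles of geographic freedom, which is what lets hyperscalers site huge facilities in remote, low-cost locations.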
Matt Pullen (20:39):
So, hello everybody. It's great to be back, and I'm joined by Brian Doricko, who leads corporate development at CyrusOne. And Brian is a legend in the industry with just a huge amount of experience over a good couple of decades. Brian, it's great to be with you.
Brian Doricko (20:56):
Really cool to be here. I appreciate it. Thanks.
Matt Pullen (21:00):
So, we'll just get right into it. So, first question I've got for you is, what's happening in the market right now, and why? Seems crazy.
Brian Doricko (21:08):
Yeah, we're all grateful and fortunate to be in this space. Demand is really strong across enterprise, hyperscale, cloud, SaaS. All the different industries seem to be relying increasingly on service delivery to their customers, whoever they may be, and that delivery is enabled by data centers, so we're the beneficiaries of that.
Matt Pullen (21:30):
And I guess you know what's sitting behind it? I guess it's all about what sort of applications are being run. And it'd be good to understand those applications, who's accessing them and from where.
Brian Doricko (21:43):
Applications used to be function based, so they'd live in one place and maybe have a backup site eventually. A bank would run an application that would crunch numbers and put interest into your bank account, and they would mail you a statement. So, that application did a function, and served multiple functions. Today in banking, you still have that function-based application, but now there's service delivery enablement, so that could be a feed or a native application. So, now we have an application that has to both deliver to a client live, oftentimes in real time, and also allow them to do things. You can trade online, you can move money online, and so forth. That's across such a wide variety of industries: we join our health club that way, we get healthcare services that way, telemedicine, all these different things. So, you still have function-based delivery, but now service delivery must be integrated with it.
So, the application is the key to all this and to where we're going. And when you think of it that way, then you can define what the user requirements are, and what the delivery requirement is, which translates to all these things we'll probably get into, like latency, why things have to be fast or what doesn't have to be fast, et cetera. So, what's happening is that the way we use technology has now become so ubiquitous, it's all over the place rather than just function based. That's at a very high level. So without getting into stateless versus stateful apps, or mainframe versus distributed compute, all that, what is delivered is what's changed, and that's why we're changing in this space.
Matt Pullen (23:20):
I mean, that's really, really interesting. And one thing you highlighted on there, but I just want to delve in further, it's not necessarily just who's accessing them, because I guess it's everybody, but it's from where. I guess particularly the pandemic has highlighted the fact that people work from anywhere, but people have had cell phones, smartphones in particular for a long time, and they're accessing these apps from those phones. So arguably, it's everybody and it's from anywhere, but what's your perspective on that?
Brian Doricko (23:55):
Your walk into that, I think, is brilliant. The best analogy I know is that it took 80-ish years to get the phone network built. In every city, we all had these hardwired things that went back to a building that had a big bell on it, and there was this hub and spoke architecture. And somewhere in the late eighties, we started to have these things called cell phones, and wifi started coming around a bit after that, and suddenly... It took 25 years, it took 30 years, but now no one would build a phone network that goes back to a central switch in a Ma Bell building. Now, it is any-to-any connectivity. So, the phone network moved that way. The way IP behaves will be the same thing. A hub and spoke architecture is counter to what we just said about application requirements for delivery.
So over time, and it's going to take a while, the biggest hyperscalers, the biggest SaaS companies, they're putting in fiber all over the place and trying to get access everywhere, anytime, anywhere, not forcing people to take routes like a railroad does. It's more like delivery by helicopter: it flies directly. I go to a central thing, it gives me a security access encryption key, and now we can talk directly. That's the way IP was designed. The hub and spoke architecture was required for the last 50 years because there was not fiber and copper in the ground and connectivity everywhere, so you took that one path. Today, the paths are getting bigger and bigger and they're pointing everywhere. So, what's going to happen? I think we're going to see far more distributed applications, far more things pointing everywhere. If I'm going to a SaaS place, if I'm going to a cloud place, I'm still going back to my corporate place, where I have compliance and medical data, things that I maybe don't want in public places. So, any-to-any instead of hub and spoke.
Everyone understands hub and spoke. Everyone understands the way the phone network evolved. It's going to take a little while, but any to any will be the rule of tomorrow, even as people are still spending some money right now on hub and spoke.
Matt Pullen (26:08):
The word latency has got to come in at some point, hasn't it? Because we've often challenged the notion that data centers could be anywhere because of latency and the need for data centers to be close to the aggregation of the users. So just help the audience understand that, because we've gone from saying that everything used to have to operate around highways and around the ramps from the highways to a point where now there's, as you said, highways pointing everywhere, but surely latency comes to bear in terms of creating aggregation.
Brian Doricko (26:48):
We are not doing our jobs appropriately if the starting point in infrastructure isn't, what is the minimum acceptable delivery time of a service? This teleconference, if we're doing one, a voice call, accessing your CRM online, any of the functions you have: infrastructure should start with, what is the minimum delivery time required? It must be less than that. Latency, absolutely. But just as important, or more important, is capacity. Take a congested highway with a speed limit of 55: if you raise the limit to a hundred, that won't make the cars go any faster. You're still congested. If you add five more lanes, the congestion goes away immediately. So, latency is the speed, and it must be based on what the requirement is for the user. Does your CRM need delivery in 12 milliseconds or 50 milliseconds? If we know the blink of an eye is between a hundred milliseconds and 300 milliseconds, I think the best dollars are spent making sure you deliver your CRM in a way that the user cannot discern anything faster than how you deliver it.
So latency is everything, and bandwidth: how many lanes are in the highway to deliver? I'd further suggest, we're all talking about all these apps coming, and they are SaaS, and our HR functions are delivered now from somewhere in the cloud, and everyone's using cloud for compute and so forth. The two applications we all know where the speed has to be the fastest are trading and ad-serving. I haven't heard of other ones yet. I've heard of a lot of low-latency things, but for the fastest delivery possible, it's trading and ad-serving. Why? Those are perishable. If you don't get the trade and your competitor does, that profit's gone forever. If you don't serve the ad and your competitor does, that profit is gone forever. So, that's a foot race. Fastest wins. I don't know of other apps like that. Voice roundtrip, 120 milliseconds. Can you deliver it in that?
If you can reliably deliver it in that time, would you pay extra money to get it delivered in 60 milliseconds? I'm not sure that's smart, because that's infrastructure dictating design, rather than applications dictating infrastructure design. And we were talking about this the other day, you, me, a group of us. The example: if you ordered a package right now, at noon, from an online retailer, and for $20 extra you could get it delivered at three in the morning, or you could get it delivered by 7:30 tomorrow morning and not pay the $20 extra, did the package that got there at three in the morning get there faster? Well, in an absolute sense, sure it did. I might argue, based on you, the user, and your experience, that the 3:00 AM package did not get there faster, because when you opened your door at eight in the morning, the 7:30 package would be there, and so would the three o'clock in the morning package. That means, user experience based, they were both delivered the same way, but one cost you more money.
Secondly, you might even argue, the way integrated apps behave now, that the 3:00 AM delivery is worse, because someone could have come by and stolen your package off your front porch in the middle of the night. So again, user-based delivery should predict the design of the infrastructure and where we go. And the biggest data centers in the world are in places like the middle of nowhere: North Carolina, Iowa, Mississippi, Prineville, Oregon, Quincy, Washington, all these remote places, Minnesota, Alabama. Those are the biggest ones, from the biggest hyperscalers. And folks like us do lots of these other markets that interconnect the world and so forth. So, does latency matter? 100%, but the goal is meeting the delivery requirement of the application and the user, not beating it. That's why it matters.
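The principle Brian keeps returning to, that the application's delivery requirement should pick the site rather than raw speed, can be sketched as a simple budget check. The apps, millisecond budgets, and costs below are made-up illustrations, not real SLAs or prices:

```python
# Illustrative sketch of "applications dictate infrastructure design":
# choose the cheapest site that still meets the app's delivery budget.
# All budgets (ms) and costs below are hypothetical examples.

REQUIREMENTS_MS = {
    "ad_serving": 10,   # perishable: fastest wins
    "crm": 50,          # a blink is ~100-300 ms; users can't discern faster
    "voice": 120,       # acceptable round-trip for a call
}

def cheapest_compliant_site(app: str, sites: dict) -> str:
    """sites maps name -> (delivery_ms, monthly_cost). Return the cheapest
    site whose delivery time meets the app's requirement."""
    budget = REQUIREMENTS_MS[app]
    compliant = {name: cost for name, (ms, cost) in sites.items() if ms <= budget}
    return min(compliant, key=compliant.get)

# A remote site that is "slower" in absolute terms still wins when it meets
# the budget, just as the 7:30 AM package beat the 3 AM one on cost.
sites = {"urban_metro": (8, 100.0), "remote_iowa": (40, 35.0)}
print(cheapest_compliant_site("crm", sites))          # remote_iowa
print(cheapest_compliant_site("ad_serving", sites))   # urban_metro
```

Only the truly perishable workload forces the expensive urban site; everything else can ride the cheaper remote capacity without the user noticing.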
Matt Pullen (31:06):
Great. That's really clear. So, we're effectively saying, because we're often asked this question, there will always be a need for data centers in urban and suburban areas, particularly as it relates to latency-dependent applications. There's also always going to be data centers that aren't hosting latency [inaudible 00:31:27] such latency-dependent applications, and where they're located will still be a function of latency, just with different latency requirements. Got it. That's fantastic.
Brian Doricko (31:38):
And if I might add to this, the latency question, it's often thought about on the wire time, what's it take to deliver a packet. But imagine a lot of these big cloud functions, these big compute engines from these biggest providers in the world of cloud services. If I was a big bank and my billionaire client wanted to buy, I don't know, some small European country, you would need a lot of computers to run really hard, really hot, really fast for three days to come up with a number, what's the price to buy that? The time on the wire when the request goes in, how much would it cost to buy this thing, that takes 10 milliseconds, 50 milliseconds, whatever. The computer's crunching and doing the analysis. If that takes three days, or solving a COVID problem, human genome, Carl Sagan, the universe, the compute time is way, way, way orders of magnitude higher than on the wire time, especially those applications.
Like I just said, you're just trying to get the output of one little number that says, "Here, that would be 5 billion to buy that." So again, that's user dependent. And why does hub and spoke matter in that? First off, because compute matters, and because the money you're spending on hub and spoke architectures is often like spending on the old phone network: if you can get more direct, any-to-any encrypted tunnel technology instead, that leaves more dollars for compute and other things that add to the user experience. So, there's a lot going on, but it's not just on the wire.
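Brian's compute-versus-wire point can be put in rough numbers. The figures here just mirror his hypothetical (a 50 ms round trip on the wire triggering a three-day compute job) and are not measurements of any real system:

```python
# Illustrative only: when a request kicks off days of computation, the
# on-the-wire time is noise in the user-perceived response time.

wire_ms = 50                            # request/response time on the wire
compute_ms = 3 * 24 * 60 * 60 * 1000    # three days of number crunching

total_ms = wire_ms + compute_ms
wire_share = wire_ms / total_ms

# The wire accounts for a vanishingly small fraction of the total, so
# shaving milliseconds off it buys almost nothing for this workload.
print(f"wire share of total response time: {wire_share:.8%}")
```

For a workload like this, dollars spent on compute capacity improve the user experience orders of magnitude more than dollars spent shaving wire latency.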
Matt Pullen (33:21):
So stay staying on the topic, but I know it's something the audience are really interested to understand because some of the acronyms, particularly around the cloud are not necessarily easy to understand. So, could you just explain to the audience what an availability zone is in the context of the cloud?
Brian Doricko (33:42):
The folks that live in that world of rolling out availability zones, their definitions, I believe, start with the user and the service the user will need, not how big we have to build something. So, they start building out something that has a lot of computers, and that's their early-stage availability zone. The reason the availability zones get bigger and bigger and bigger, and then a new one starts well later, has to do with the physics, something called Amdahl's law, how compute works. When you aggregate a lot of computers, you can get exponential output. Think supercomputer. Why do you push all those things together? So, the reason the AZs are getting bigger and bigger, so there's not 10 million of them, there's five, 10, 20 of them, is because aggregation of compute gives exponential output performance in so many applications. It's not the wire that matters, it is the big compute farm.
But by the way, if we're really talking about user-defined, application-performance-based decisions, when someone locates new compute somewhere else, pick a small site somewhere, that has to do with whether the application is moving 10 petabytes a day. For some reason, it's aggregating all sorts of information. That's where you don't want to send it far, far away to an availability zone, because there's too much payload. So, that's when you locate in certain places. Why do we go somewhere? Because someone says, "I have this community and these applications that need to serve this particular function..." And they're so data heavy, why push them down a wire that's slow and expensive? Instead, locate them more locally. So whether you go to those big availability zones, or you're doing more of the local, smaller edge, whatever definitions people use, those are just user and application based. It's not technology based, but economics and technology guide where you show up, sure.
Matt Pullen (35:50):
Yeah, so just digging into that a little bit more, just so I and the audience can understand: take London or take Northern Virginia, you'll see each cloud company probably deploying three availability zones. I think what you're saying is there'll be an aggregation of data centers within each availability zone, and the multiple availability zones themselves are actually just providing resilience and redundancy. Is that correct?
Brian Doricko (36:23):
Very well said. Availability zones is the name of the thing you're asking about. Yeah, I want two sites, redundancy. So if I have something that is so critical to me that I can't afford, for a physical or technical reason, for this AZ to go away, I need to have it operate somewhere else. In the old days, the mainframes would connect to a SunGard recovery center. SunGard changed their name to SunGard Availability Services. The industry has changed to high availability, which means okay, we've got this big cluster of compute. That started our AZ. Now, more and more mission-critical work is coming in, and you need to have a second of that thing mirrored, essentially, so that you are never offline. We've all seen on TV, when an airline system goes down or something, a bunch of people cannot get home across the world because of the technology. So yeah, it's for redundancy, absolutely.
Matt Pullen (37:20):
And then to be true to Amdahl's law within an availability zone, I mean, surely it varies between the cloud companies, but how far apart can the data centers be within an availability zone to achieve this aggregated compute benefit?
Brian Doricko (37:36):
If you were to look at the biggest compute cluster farms, cloud deployments in the world... It's an economic question you're asking as much as a technical question. So before I answer your question, I'd first have you contemplate the reason to have a centralized computer, to jam as many of these CPUs, 50,000 of them, into a box that is no bigger than 10 feet by 10 feet. Why does that have to happen? Well, this compute function where you get exponential returns is dependent on nanosecond latencies, not microseconds, not milliseconds, to get the exponential performance. The availability zones have that same thing, so you need to be close. Not like networks, where we're talking about 50 milliseconds, 20 milliseconds; nano is the increment they work in to get this performance boost.
It's somewhere... I'm not an expert in this, but if you look at the biggest cloud folks and where they live, five miles, four miles, three miles, something like that. Beyond that, you lose the compute advantage, and that's where you'll start another availability zone. So if you get too big, now your compute is too far apart and you don't get the exponential performance that drives that dramatic reduction in cost. So, there's not an exact answer, Matt, but two miles, five miles, something like that.
Matt Pullen (39:06):
That's great. It means the audience can understand why we see these clusters within availability zones, and multiple availability zones. But Brian, that was great. The last area I'm going to focus on is really how you predict what's going to go on in the next five to 10 years, because you've said some really interesting things thus far. I think the biggest thing you've said is really this any-to-any situation, where we are going to see network topology change and arguably challenge the need to move data through the traditional interconnect points. But beyond that, what is your prediction? What's going to happen in this market over the next five and 10 years?
Brian Doricko (39:53):
The more we get into defining where things go based on user requirements, based on application requirements, the more we're going to see scale matter in these massive campuses like we build. The speed of light is 186,000 miles per second. In fiber, it's something like 125,000 miles per second; it degrades about 30%. That means that in one millisecond, a packet can go down a wire 125 miles. If we start thinking about what the application requires... What I just said is like, wow, I can go 125 miles in one millisecond, and my tolerance was 20 milliseconds, 10, 40. That opens up more of where the biggest companies in the world have put their data centers, in the middle of nowhere, because they know that math and they know what Einstein said. And yeah, at the very dark edge of E equals MC squared, it breaks, but it's pretty good.
It's sort of accepted, through tests and all that, that you can go anywhere. So as our strategy goes, in digital gateway markets and at scale, scale drives economic advantage, and lower cost means higher adoption for the community. The kid that needs telemedicine in rural Kansas, where there's not a hospital for 300 miles, can get that service delivered much more cheaply and efficiently. So, I guess the whole world benefits from that. Lower cost, higher adoption, better performance, performance as defined by the user requirement, not based on just fastest, that example of packet delivery, et cetera. And again, there are some apps that require fast [inaudible 00:41:47].
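For listeners who want to check Brian's back-of-the-envelope numbers, the propagation math he describes can be sketched as a quick calculation. This is illustrative only: the roughly 125,000 miles-per-second figure is the fiber approximation he cites, and real links add routing, queuing, and equipment delays on top of pure propagation.

```python
# Propagation-delay-only latency math from the discussion above.
# Light in vacuum: ~186,000 mi/s; in fiber it degrades ~30% to ~125,000 mi/s.
SPEED_IN_FIBER_MPS = 125_000  # miles per second (approximate)

def one_way_latency_ms(distance_miles: float) -> float:
    """One-way propagation delay in milliseconds; ignores hops and queuing."""
    return distance_miles / SPEED_IN_FIBER_MPS * 1000

# One millisecond of latency budget buys roughly 125 miles of fiber:
print(one_way_latency_ms(125))    # 1.0 ms
# A 20 ms tolerance allows a site ~2,500 miles away, one way:
print(one_way_latency_ms(2500))   # 20.0 ms
```

This is the arithmetic behind "in the middle of nowhere": if an application tolerates tens of milliseconds, the data center can sit thousands of miles from the user.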
Matt Pullen (41:49):
Great. So effectively you're saying that the market growth is going to continue, and what's going to happen is continued aggregation at scale in diverse locations, which is amazing.
Brian Doricko (42:05):
Matt Pullen (42:06):
So, I know it's, again, back to acronyms and something the audience would love to understand. In all of that, how does the edge play in? We hear about the edge. What does the edge look like in five to 10 years in the context of our industry?
Brian Doricko (42:23):
We're several years into hearing that this is the year of the edge. A couple of the edge box providers have rolled up their carpets. The deployments haven't been as wide as people thought. What seems like it's going to happen, and this is playing out, is that the edge may not look like what we've thought: a bunch of 500 kW data centers in a bunch of places. The edge might look a lot more like compute, a server or two, at every single 5G tower across the fruited plain. The edge might look like a handful of compute at aggregation points in cities where 5G behaves, for instance. We haven't yet seen a lot of "I need my actual compute function to be wherever." Now, there are going to be some things that require compute locally for regulatory reasons, or someone decides they have something so secret and secure, they want their compute inside their market: "I'm in one of the tier two, tier three markets."
And I don't necessarily call that edge, because that is a compute function, again, an application requirement to be located there. So, the edge inside most of these smaller data centers in tier two, tier three, tier four markets is a whole bunch of fiber. And you know where it's pointed? Back to the biggest data center places in the world. So again, enabling services... My Google Maps, where do I get that when I'm driving around? Because there was talk that that function was going to require all these edge data centers. Well, no, I've gotten that for years out of a central mapping function from Google. And lots of times, self-driving cars get thrown out as the killer app for edge. I might argue it's the opposite, and perhaps the worst example. Why? Today, if your car is less than four years old, when your car backs up in a parking lot, if you get close to someone's bumper, it beeps already.
So the function of protecting the car from contact is already on the car. Now let's let that car go out on the open road and be controlled by compute. Someone's a hundred feet ahead of your car, you're going 60 miles an hour, and that car is swerving around. The idea that your car will send a packet to some edge data center to make a decision about whether you should slow down, when we already know that technology lives on the bumper of the car today, I think is perhaps not a real good "why" for the edge. And certainly, we're not going to send that packet back to some central location. But when you plug your car in at night, I'm pretty sure we're going to want to upload all the data about the behavior of that car. But that's not at the edge.
We know the more data you have, you put it in a big place, in one of these huge compute farms. Locally, going back and forth saying, "Should I slow down? This guy's swerving in front of me," that's right on the bumper of the car. We'll upload it when you park at night at home. So, I don't know about the edge as some people have contemplated it. But as more and more fiber access comes... And this is also a pretty good point. We all know we all went home and things worked pretty well. We locked down, pandemic, all that, schools and so forth, and all these applications and companies are serving stuff out. And it's all doing pretty well, but we've all been on a few of these calls. Video's on, and then some people turn the video off because it's jittery, and some people are staticky and so forth.
And you find out that's someone that lives two and a half hours out of a major metro... We had someone that lives quite a bit away from our headquarters here in Dallas that was on dial-up till six months ago. So, her getting on these video calls was a disaster. So, the edge, I think, is more back to the 1980s, 1990s, 2000s: last mile matters. Like what? Yes, because we know we've got bigger and bigger fiber highways going everywhere. We know the biggest cloud people are becoming the biggest telcos in the world. And so now, last-mile bandwidth: can I get my data up and down in less than a certain amount of time? The reason that remote person's data didn't get through wasn't latency, it was throughput. There weren't enough lanes in the highway to serve this new type of application we're dealing with. So, I think there's going to be a whole bunch of things that look like edge. They're going to be a bit more like, I need to be in my corporate campus, I'm developing something for the government, I need the data right there.
And a whole bunch of these aggregation points, these network access places, to deal with the need to deploy more and more fiber everywhere, anytime, anywhere, so that we don't have that problem I just talked about. So, almost like the last mile argument again, from 30 years ago, full circle.
Matt Pullen (47:29):
Brian, that's fantastic. It's been a topic that I've wrestled with. And just to finish off, every time I talk to you and listen to what you've got to say, I just learn a ton. And I know the audience will appreciate the way that you put really quite complex questions and answers into really simple, practical analogies that will help the audience understand where this incredible market is and where it's going. So Brian, I'd just like to thank you for your time today and for helping us all to understand the industry better.
Brian Doricko (48:05):
Matt, thank you very much. I enjoyed it a ton. And I'd just remind everyone in closing, every situation's unique, so start with the application and user requirement, and you're going to find yourself in the right place. Thanks a bunch.
Matt Pullen (48:18):
Yeah, thanks, Brian. Thanks for joining us.
A huge thank you to both Eric and Brian for joining me today and sharing such interesting insight and perspective. I thoroughly enjoyed that conversation and hope you did too. Please tell us what you think about today's episode by getting in touch on social and using the hashtag CyrusOneConnects. I hope you can join us next time where we'll be discussing all things sustainability and how our industry is reacting. Until next time.