AT&T's adoption of open, interoperable technologies is a foundation of the communication service provider's end goal: using automation to accelerate the deployment and management of network and business services.
To advance toward this goal, in March the CSP conducted live field trials of a multi-vendor open source white box switch carrying customer traffic; in other words, AT&T used a uniform network operating system across a variety of merchant silicon chips that delivered high-performance telemetry into AT&T ECOMP. The switch monitored customer traffic as it traveled from Washington, D.C., to San Francisco. AT&T worked with vendor partners including Barefoot Networks, Broadcom, Delta Electronics, Edgecore Networks, Intel and SnapRoute.
But network switches are only a first step, said Chris Rice, senior vice president, network architecture and design at AT&T. As the CSP continuously seeks ways to reduce costs, spur innovation and increase network capacity, it's experimenting with new white-box initiatives, expanding its virtualization and software-defined networking efforts, and developing methods to integrate data, analytics and automation.
Rice spoke on Friday with Alison Diana, UBB2020 editor, about AT&T's long-term goals, open source and interoperability and how the CSP leverages webscale companies' lessons (and differences). Read on for an edited transcript of the conversation:
UBB2020: Could you tell us a little more about the white box network switch tests and why they are so important to AT&T?
CR: Unfortunately I can't get into a lot of details, but we have three vectors that we've specifically kicked off around the different white-box initiatives, and we'll be talking about them soon here. One is around the ONAP router and one is more of an edge case. We'll get into more depth about them later as we have more information. But the details we got out of the [switch] trial were enough to make it look beneficial to us to take it to the next step, which is: How do we scale white box?
UBB2020: Is AT&T's motivation different from what drove webscale companies' open-source efforts?
CR: If you look at the webscale guys and why they did some of this initially, I really don't believe it had to do so much with them building a better mousetrap or building a better box; I think it had more to do with the fact they needed, wanted, had to have open interfaces on those boxes. And the reason that's so important is that open interfaces lead to data that you need to collect; data that you collect and want drives insights, and particular areas of insights then drive you to using those insights for automation, and automation reduces your overall cost of delivering an infrastructure and allows you to do things like machine learning and other capabilities on top of it. You can't do any of those things, if you follow it like a chain, until you have that first link which is open interfaces. There's really no downside. I can essentially buy it the way I'm buying it today but I just get those open interfaces.
UBB2020: How do your initiatives compare with those developed by webscale companies like Amazon?
CR: There are certain things we can learn from. Sitting in my seat, I see that they have certain advantages in the sense that most of their traffic is primarily from one of their data centers to another one of their data centers. Most of their work is done on applications they have built rather than bought from vendors, so they can actually write the application in a way to be more cognizant of the network, which I can't just go force the 1,000 Fortune 1000 companies that we have on our network to go do. That's different. The fact that merchant silicon now builds to that market is something I can leverage and use for my needs. That's a big plus -- so thank you for that.
But how we go off and build the switches, the way we build the switches and the protocols we use, the way we have to deliver services among a wide variety of customers that don't necessarily have application-aware services riding on top of our network -- that's different; it's unique. We got some benefit from the silicon area, but there are unique needs we have that force us to do things differently than they did, similar to the way we built the network cloud, which is different from their public cloud in that it is capable of running network workloads. There were some key learnings we were able to leverage, there was some key ecosystem development that they did, but there's more we still have to do for it to be beneficial to us.
UBB2020: Can you please talk a bit more about the importance of automation?
CR: The bigger picture here, kind of the meta point -- the forest for the trees -- is this is really about opening up interfaces that lead to automation. Now, there are other benefits and there are other side things associated with this, but as we move down an automation path, as we move down a machine-learning path to drive more automation, this is really a necessary first step -- these open interfaces that cannot be skipped over or overlooked. I don't know that people understand the significance of that. Maybe they do, and I'm making a bigger point of it than I should, but that's a really important point.
UBB2020: And this extends beyond network automation, right?
CR: Right. It is technology automation like you said, but it could also be service automation because I'm collecting data that services are run over. How can I personalize it better for our customers? How do I make sure the network operates more seamlessly? All of those issues. Getting access to the data to be able to do that in a way that is open and common, already there, as opposed to a big effort to make that happen -- or a big, costly, time-consuming effort to make that happen -- that's a really important point. If you look at the webscale guys, their business model is built on that data. They build that in on Day One.
UBB2020: So that requires eliminating data silos, adopting analytics solutions and so forth?
CR: We've been doing that for a while. We created a big data organization specifically about four years ago. They found quite a bit of business benefit associated with that. This is more about taking it to that next level and making sure that our services and our infrastructure are built in a way that is data-driven, to be able to get the automation that we need to get to. That's really the core of what Indigo is: It takes what we're doing in SDN, what we're doing in other areas, and then drives it to make it more data-driven.
UBB2020: Changing gears a bit, how big a deal is AT&T's ability to use merchant silicon for switches?
CR: That's part of a general trend that bears more thought. I used to have a boss in AT&T Bell Labs who used to say, 'Yesterday's systems are tomorrow's chipsets.' It's like the old NASA thing. You take a look at the NASA computer and how big it was and how much it could do, and the iPhone can do about five times more than it could, or something like that. It's something that happens in time. Part of it is the thank you I gave to the webscale folks: they drove the merchant silicon folks to look at this, because when those folks see scale and see benefit, they put more engineering resources on it. The fact that silicon prices are dropping as well helps us get more transistors on a chip. They do more software in addition to silicon, and that makes it easier to use and onboard by a wider variety of ODMs [original design manufacturers]. That ecosystem development is really important for this to be viable and reliable, long-term.
I think the big advantage is that the more functionality and the more software you put on a chip, rather than on systems or glue logic that attaches to the chip, the more you're using the base capability in the chip rather than auxiliary components that just absorb more power. And so you get a twofer effect: As integration increases and the system on a chip gets bigger, you're doing that work closer to and more naturally on the chip, so there are fewer external devices. That's lower cost, and the chip size itself is going down, so you get more transistors on there as well. That's the twofer. That's the benefit you get: lower power and smaller size, from both of those factors.
UBB2020: In a blog, you wrote that telecom companies should "get comfortable" with the technologies powering their networks. What does that mean?
CR: John Donovan said that for many years you might characterize the work we did as being professional buyers. I think what he meant by that was that we had this really great ecosystem and we could shop where we needed to and get what we wanted, but even when we did that, we would still have to -- especially with early equipment -- go through and make sure that it worked perfectly. At AT&T, probably more than many of our counterparts, we have technical people who can understand that and help make it work. My point is that if you want to get some of the benefits of white box, you'll have to be more comfortable doing more of the technical work yourself. From my perspective, and it's obviously a self-serving thing to say, that's an advantage for AT&T, because AT&T has a very technical organization that can do this sort of work. The fact that the ecosystem is evolving into one that requires that kind of organization to get the benefit from it is great news for us because, hey, we already have that. If you really want to take advantage of white box, you're not just going to be able to push a button and have it come in. If you look at the webscale guys, they have a large set of people who understand well the things that they build. So if you don't have that, you're going to have to get comfortable building a team who can do that.
UBB2020: Why can or should this all happen now?
CR: That's important. Really, there's a kind of confluence of different things that are coming together. If OCP [Open Compute Project] didn't exist, I wouldn't necessarily have reference designs, or a place to put reference designs that service providers could leverage, and I wouldn't necessarily have a set of ODMs that have found a market in building those reference designs. If the webscale folks hadn't gotten the scale they had, and hadn't had some of the needs they had, maybe some of the chipsets that have been created wouldn't be in the market, so I wouldn't have that.
Some of it existed and some of it grew up around it. Some people noticed there's an opportunity there, and other people were attracted to it -- that's why I say it's emerging, or burgeoning. You can start to see it forming. My question is, "Hey, if this is going to start to form, how do we make sure it forms in a way that is optimal for our business, and how do we become a part of it?"
UBB2020: When will these open, interoperable technologies go live on AT&T's networks?
CR: Well, I mean, certainly within a year there'll be one or more that'll be fully operational.
UBB2020: How do you address how interwoven technology is with AT&T's business?
CR: We're a service organization that builds networks to benefit our position. I have very good relationships with a large number of my peers in the different business units. We make sure they understand the implications of the technology, they understand pluses and minuses associated with it, the benefits, the cost of doing it versus the cost of not doing it. Our goal is always to improve both our customers' experiences as well as the business that our business units do through technology. It goes very much hand-in-glove with that.
— Alison Diana, Editor, UBB2020. Follow us on Twitter @UBB2020 or @alisoncdiana.