Recognizing that more and more companies’ data center networks no longer necessarily end at the perimeters of their own data centers, Cisco is extending ACI, the software-defined networking technology originally designed for its own switches, to the two biggest public clouds: Amazon Web Services and Microsoft Azure.

In conjunction with the Cisco Live conference in Barcelona Tuesday, the company announced “full integration” of ACI with the Infrastructure-as-a-Service environments of AWS and Azure. Cisco had previously added ACI support for bare-metal clouds.

Enterprises today have many more infrastructure options for running their software than they used to. Besides their on-premises data centers, they increasingly run applications in the cloud and in colocation data centers, and many of those applications are expected to soon run across highly distributed edge computing networks as well. In many cases, companies end up with infrastructures that mix all these options, and vendors have been pushing hard to sell them ways to use these disparate computing environments in a uniform way.

Plus, the more software is deployed in the cloud, the fewer opportunities there are for vendors like Cisco to sell technology for internal enterprise data centers. The company’s answer has been to place more emphasis on software and services, and extending ACI, or Application Centric Infrastructure, to the cloud is a step in pursuit of that strategy.

Companies should be free to make infrastructure decisions based on their economic requirements, Roland Acra, senior VP and general manager of Cisco’s Data Center Business Group, told DCK in an interview for the Data Center Podcast. “That (decision) should not have a technical consequence,” he said.

Another trend playing out here is the ongoing disaggregation of various components of the network stack. Pushed forward in a big way by hyperscale operators like Google, Facebook, and Microsoft, it’s a move away from tightly integrated single-vendor solutions, where hardware and software are inseparable from each other.

Over the years, Cisco has gradually expanded the range of other vendors’ hardware and software (and open source technology) its own software supports. ACI started as an on-premises SDN that worked hand in hand with select Cisco Nexus hardware. The company later added ACI support for remote disaster-recovery locations, and then for distributed computing at the edge, at which point ACI also started supporting non-ACI switches, Acra said. Eventually, it was virtualized so that it could run as a set of VMs on any hypervisor, regardless of whether there was any Cisco hardware in the mix at all.

Now, ACI is going from being hardware-agnostic to also being cloud-agnostic. “We wanted to have customers not have to learn multiple environments,” he said. “The key thing we focused on is to serve more than one cloud; customers want to avoid the vendor lock-in of any one cloud.”

Importantly, the strategy is to also support more than one workload deployment method. That means that in addition to working on bare metal, on any hypervisor, or on any cloud (well, on two clouds for now), ACI supports various container frameworks, such as Kubernetes and Red Hat’s OpenShift. The goal, really, is to write software that’s as platform-agnostic as possible.

“Things almost never go away, right? There’s still a mainframe somewhere; there’s still a few Sun servers doing, you know, Oracle database somewhere on Unix; there are a lot of hypervisor-based things … and the new cloud native applications – whether they go on-prem or not – are built on Kubernetes or other thing,” Acra said. “And those are all passengers on the same hardware, which is, you know, racks of servers or blades.” And you should expect your SDN “to be hypervisor-agnostic, to be container framework-agnostic, and to be cloud API-agnostic.”