NOTE: This post originally appeared here
Once upon a time, back in the dark ages of IT, people would sit around the table and talk about “the black box” and everyone knew what was meant—the network. The network, whether it was ARCNET, DECnet, Ethernet or even LANtastic, seemed inscrutable to them. They just connected their stuff to the wires and hoped for the best.
Back in those early days, conversations with us early network engineers often went something like this:
Them: “I think the network is slow.”
Us (whom they considered pointy-hatted wizards): “No, it’s not.”
Them: “Look, I checked the systems and they’re fine. I think it’s the network.”
Us: “Come on, it’s rarely ever the network.”
Them: “Well, I still think…”
Us: “Then you’ll need to prove it.”
It was so difficult for them to pierce this veil that—if urban legends on the subject are to be believed—the reason the image of a cloud is used to signify a network is that it was originally labeled by those outside the network with the acronym TAMO. This stood for “then a miracle occurs,” and the cloud graphic reinforced the divine and unknowable nature of bits flowing through the wire.
But we in the network knew it wasn’t a miracle, though it was still somewhat of a black box even to us—a closed system that took a certain type of input, ran internal processes that were only somewhat monitorable and then produced a certain type of output.
With time, though, the network became much less of a black box to everyone. Devices, software and our general knowledge grew in sophistication, so that we have now come to expect bandwidth metrics, packet error data, NetFlow conversations, deep packet inspection results, IP SLA statistics and more to be available on demand and in near real-time.
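As a rough illustration of just how routine that on-demand visibility has become, here’s a minimal sketch of polling a single interface counter over SNMP. It assumes the classic synchronous pysnmp API (the 4.x series; newer releases moved to asyncio), plus a hypothetical device at 10.0.0.1 answering SNMPv2c with the “public” community string:

```python
# Minimal sketch: read an interface's inbound octet counter via SNMP.
# Assumptions: classic pysnmp 4.x, a device at 10.0.0.1 speaking
# SNMPv2c with community "public", interface index 1. All hypothetical.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),              # SNMPv2c
        UdpTransportTarget(('10.0.0.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', 1)),
    )
)

if error_indication:
    print(error_indication)
else:
    for oid, value in var_binds:
        print(f'{oid} = {value}')   # e.g. IF-MIB::ifInOctets.1 = 123456789
```

Poll that counter twice, divide by the interval, and you have the bandwidth chart that used to require a wizard’s hat.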
But recently, two new black boxes have arrived on the scene. And this time, we net admins are on the outside with almost everyone else.
The first of these, virtualization (along with its commoditized cousin, cloud computing), has grown to the point where the count of physical servers in medium-to-large companies is sometimes just a tenth of the overall server count.
Ask an application owner if he knows how many other VMs are running on the same host and you’ll be met with a blank stare. Probe further by asking if he thinks a “noisy neighbor”—a VM on the same host that is consuming more resources than it should—is impacting his system and he’ll look at you conspiratorially and say, “Well, I sure think there’s one of those, but heck if I could prove it.”
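To be fair, the virtualization team could settle the question with hypervisor data. Purely as a toy sketch (the VM names, the numbers and the 10 percent CPU-ready threshold are all invented; real stats would come from something like vCenter’s performance counters), a noisy-neighbor check for a single host might look like this:

```python
# Toy sketch: flag a possible "noisy neighbor" on one virtualization host.
# All names, numbers and the threshold below are invented for illustration;
# real figures would come from the hypervisor's performance API.

# CPU-ready %: time a VM spent waiting for a physical CPU. A high value
# on a lightly loaded VM usually means a neighbor is hogging the host.
vms_on_host = {
    'app-server-01':  {'cpu_ready_pct': 14.2, 'cpu_used_pct': 22.0},
    'db-server-02':   {'cpu_ready_pct': 11.8, 'cpu_used_pct': 18.5},
    'batch-runner-9': {'cpu_ready_pct': 0.4,  'cpu_used_pct': 91.0},
}

CPU_READY_THRESHOLD = 10.0  # percent; a rule of thumb, tune per environment

starved = [name for name, stats in vms_on_host.items()
           if stats['cpu_ready_pct'] > CPU_READY_THRESHOLD]

# The VM burning the most CPU while the others wait is the prime suspect.
suspect = max(vms_on_host, key=lambda name: vms_on_host[name]['cpu_used_pct'])

print(f'VMs starved for CPU: {starved}')    # ['app-server-01', 'db-server-02']
print(f'Likely noisy neighbor: {suspect}')  # batch-runner-9
```

With numbers like that in hand, “heck if I could prove it” turns into a ticket with evidence attached.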
Still, we love virtual environments. We love the cost savings, the convenience and the flexibility they afford our companies. But don’t fool yourself—unless we’re actually on the virtualization team, we don’t really understand them one bit.
Storage is the other “new” black box. It presents the same challenge as virtualization, only worse. Disks are the building blocks of arrays, which are collected via a software layer into LUNs, which connect through a separate network “fabric” to be presented as datastores to the virtual layer or as contiguous disk resources to physical servers.
Ask that already paranoid application owner which actual physical disks his application is installed on and he’ll say you may as well ask him to point out a specific grain of sand on a beach.
Making the storage environment even more challenging is its blended nature. Virtualization, for all its complexity, is a binary choice: your server is either virtualized or it’s not. Storage isn’t that clear-cut. A physical server may have a single traditional platter-based disk for its system drive, connect to a SAN for a separate drive where software is installed and then use a local array of SSDs to support high-performance database I/O.
OK, so what does all this have to do with the network? Well, what’s most interesting about these new black boxes—especially to us network folk—is how they are turning networking back into a black box as well.
Think about it—software-based “virtual” switches distribute traffic from VMs across multiple physical network connections.
Also, consider that SAN “fabric” is often more software than hardware.
And then there is the rise of SDN, a promising new technology to be sure, but one that still needs some of its rough edges smoothed away.
The good news is that, just as it did with our original, inscrutable network of the good old days, the ongoing drive toward maturity and sophistication will crack the lid on these two new black boxes and reverse the network’s slide back into one as well.
Even now it’s possible to use the convergence of networking, virtualization and storage to connect more dots than ever before. Because of the seamless flow from the disk through the array, LUN, datastore, hypervisor and on up to the actual application, we’re able to show—with a tip of the old fedora to detective Dirk Gently—the interconnectedness of all things. With the right tools in hand, we can now show how an array that is experiencing latency affects just about everything that depends on it.
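In tooling terms, that interconnectedness is just a dependency chain that software can walk. Here’s a bare-bones sketch of the idea; every entity name in it is invented for illustration, and a real tool would pull these relationships from the storage and virtualization APIs rather than a hard-coded table:

```python
# Sketch: walk the storage-to-application dependency chain so latency on
# an array can be traced to the applications it ultimately touches.
# Every name below is invented; real links would come from monitoring data.

# child -> parent links, one hop per layer of the stack
depends_on = {
    'payroll-app':  'vm-payroll',
    'vm-payroll':   'datastore-07',
    'crm-app':      'vm-crm',
    'vm-crm':       'datastore-07',
    'datastore-07': 'lun-0042',
    'lun-0042':     'array-3',
}

def impacted_by(component):
    """Return everything stacked on top of the given component."""
    hits = []
    for child, parent in depends_on.items():
        if parent == component:
            hits.append(child)
            hits.extend(impacted_by(child))  # keep following the chain up
    return hits

# Latency on the array ripples all the way up to the applications.
print(impacted_by('array-3'))
# -> ['lun-0042', 'datastore-07', 'vm-payroll', 'payroll-app', 'vm-crm', 'crm-app']
```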
That paranoid application owner might even stop using his “they’re out to get me” coffee mug.