It wasn't long ago that gigabit-class Ethernet sold for $500 per port. Now the feature is built into most motherboards as a value-add. That's great news for businesses that really do need the network throughput and don't have a ton of budget to spend on infrastructure. But in the face of more demanding workloads, SMBs are increasingly running into the limits of gigabit-class hardware.
We're not trying to sensationalize here. In most environments, gigabit connectivity is still ample for systems serving small businesses. But networked storage and virtualization—two technologies formerly popular in the enterprise space, but increasingly important to smaller companies—easily push gigabit Ethernet beyond its ceiling. Fortunately, there are ways resellers can help their customers circumvent debilitating bottlenecks.
The easiest and least expensive approach is teaming: bonding multiple physical Ethernet links into a single logical connection. This lets you scale up network throughput linearly using the infrastructure your customer already has in place, likely based on Cat5e cabling. Adding a quad-port gigabit controller like Intel's Ethernet Server Adapter I340 to a high-demand server more than doubles its peak bandwidth, simultaneously enabling better performance and the flexibility to dedicate individual links to management software.
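On Linux servers, this kind of teaming is handled by the kernel's bonding driver. A minimal sketch using the iproute2 `ip` tool might look like the following; the interface names (`eth0` through `eth3`) and address are placeholders, and 802.3ad mode assumes the switch ports are configured for LACP link aggregation:

```shell
# Create a bond device that aggregates four gigabit links.
# Mode 802.3ad (LACP) requires matching configuration on the switch.
ip link add bond0 type bond mode 802.3ad

# Interfaces must be down before they can be enslaved to the bond.
for nic in eth0 eth1 eth2 eth3; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done

# Bring the bond up and assign it an address.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that a single TCP stream still traverses one physical link; the aggregate gain comes from many concurrent flows being hashed across the members, which suits file servers and virtualization hosts well.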
While link aggregation is an attractive option for its interoperability with the network hardware SMBs are already using, you also have to consider the increase in cable complexity introduced when you start augmenting performance via teaming. At a certain point, it makes more sense to simply take that order of magnitude jump and transition from gigabit Ethernet to a 10 gigabit solution. The benefit is easy to see. One Cat6a cable consolidates what would have previously required two onboard controllers, two quad-port NICs, and 10 Cat5e cables coming from the back of your servers.
An add-in card like Intel's Ethernet Server Adapter X520-T2 gives you two 10 gigabit ports, which is more than enough for dual- and quad-socket servers hosting multiple virtual machines and performance-oriented storage systems loaded up with centralized data. The card drops into an eight-lane PCI Express 2.0 slot, which supplies roughly 32 Gb/s in each direction, comfortably more than both 10 gigabit ports can generate.
Ten gigabit Ethernet is a tremendous opportunity for the channel right now. Your in-depth expertise adds value to this margin-rich transition, giving customers solid long-term return on their investment. How do we figure? Well, most modern SMB networks center on gigabit-class hardware and Cat5e cabling. Cat5e isn't formally part of the 10GBASE-T specification, but short runs of roughly 40 meters often work. Stepping up to Cat6 extends the supported distance to 55 meters. Building a network on Cat6a yields the spec's full 100 meter rating. And of course, you can roll out in stages, upgrading cabling first and maintaining the gigabit-class hardware. Once your customer is ready to go 10 gigabit, deploying the other components is easy. The Intel adapter is one variable, and 10GBASE-T switches are the other. Solutions from Cisco, Brocade, Arista Networks, BNT, HP, Juniper, Extreme Networks, Force10, and Dell give you plenty of selection for right-sizing.
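Because link speed depends on the quality and length of the installed cable run, it's worth verifying that each port actually negotiated at 10 gigabit after deployment. A quick check on Linux, sketched with the standard `ethtool` utility (`eth0` is a placeholder interface name):

```shell
# Report negotiated speed and duplex for an interface.
# A marginal Cat5e or overlong Cat6 run may silently fall
# back to 1000Mb/s rather than failing outright.
ethtool eth0 | grep -E 'Speed|Duplex'
```

A link that reports 1000Mb/s on 10GBASE-T hardware is a strong hint the cabling, not the adapter or switch, is the weak link.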
Businesses that resisted the shift from 10/100 to gigabit Ethernet discovered today's software has little mercy on old networking equipment. The move to 10 gigabit promises to be just as necessary. If your customers are exploring virtualization and networked storage, make sure they're also armed with the 10 gigabit infrastructure to complement both of those game-changing technologies.
It's easy to get excited about today’s storage market. New technologies are springing up on a seemingly daily basis. More than ever, it's possible to build incredibly granular solutions that blend just the right amount of performance with a perfect mix of capacity at an ideal price point for nearly any customer. While we covered the basics of SAS, SATA, RAID, and servers in the past two issues, now we're turning our focus to some of the newer developments in storage—gems you don't want to overlook.
Five years ago, could you have guessed the storage market would be as diverse as it is today? Sure, SATA was making its slow transition from first-generation 1.5 Gb/s transfer rates to 3 Gb/s on the desktop. But Serial Attached SCSI was still spinning its wheels, trying to gain traction. After all, enterprise-class technologies never shoot out of the gate. It takes time for them to pick up steam, as risk-averse businesses weigh potential performance benefits, validate compatibility with existing systems, and calculate the costs of adoption.
Now look at us. We’re on to the second generation of SAS, sporting 6 Gb/s of throughput per port and cross-compatibility with SATA disks. We have storage devices tailor-fit for every workload you can imagine, from I/O-driven databases requiring instant access to long-term storage served by business-hardened high-capacity drives.
There’s just this incredible diversity available to your customers. It used to be, “Would you like that in IDE or SCSI?” You had to choose between affordable with questionable long-term viability or the prohibitively expensive hardware businesses really needed. Now, it’s SAS hardware in a SAS infrastructure. Or, you can add SATA, because the desktop technology is interoperable with the enterprise stuff. Add a solid state drive fast enough to saturate a 3 Gb/s port on its own. And if that’s not enough, drop a PCI Express card with NAND flash onboard straight into a server, pushing hundreds of thousands of IOPS. Performance that was unfathomable a couple of years back is now reality—and it’s affordable enough for SMBs to access.
So, what is there to add to the storage story we’ve been telling over the past six months? Plenty. As we circle back with the most prolific engineers working on improving storage, the continued innovation in this field becomes clearer. We tracked down a few more gems for your own portfolio with the potential to do great things for customers.
More than two years have passed since Intel first launched Nehalem, a revolutionary architecture that crushed a number of performance bottlenecks and served as the foundation for several desktop-, mobile-, and server-oriented products. Since then, we’ve seen Intel make a graceful shift from 45 nm to 32 nm manufacturing. We’ve seen the company add cores, cache, and clock speed. And we’ve seen Nehalem learn new tricks, like acceleration for AES encryption. Now it’s time for Nehalem to make way for an encore that Intel internally refers to as Sandy Bridge.
The products that center on Intel’s now-mature Nehalem design hardly need an introduction. Resellers who’ve enjoyed success over the past couple of years undoubtedly owe some of that prosperity to the compelling performance, impressive power consumption, and innovative value-adds like Hyper-Threading and Turbo Boost found on Intel’s Core i3, Core i5, Core i7, and Xeon processors.
Although the underlying processor design remains constant throughout those Nehalem-based CPUs, Intel selectively turns the dial up and down on a number of variables that help define performance, thermal profile, and cost. What results are consistently fast platforms, some of which offer more memory bandwidth, support for multiple CPU sockets, on-package graphics, integrated PCI Express control, and so on. Expect Intel to employ a similar strategy with its current-generation architecture, commonly known by the code name Sandy Bridge.
Once upon a time, resellers pinned their ambitions on the home theater PC—a second system in every home that’d bridge the gap between the office and the living room. The challenge was two-fold. First, there was the issue of hardware able to blend in with more traditional A/V components. And then came the software conundrum. How do you adapt the user interface of applications designed to be used two feet away from a monitor to work just as well 10 feet away from a high-definition TV?
The home theater PC concept seems to have lost some of its luster, hasn’t it? Between hardware components that use more power, digital rights management issues that stand in the way of getting content from one place to another, and the seemingly inherent relationship between PCs and desktop peripherals like mice, it’s hard to imagine an HTPC that truly revolutionizes this sputtering segment.
But get ready for all of that to change. Intel’s Sandy Bridge-based processors are breathing new life into the hardware side of this equation. Software developers optimizing for the architecture are taking strides to improve usability. Now, it’s up to the channel to take both pieces, put them together, and sprinkle a little innovation on top to make home theater PCs a viable business once again.