The Future of Data Storage: FCoE, SSD Mergers, But No Clouds


As I wrote a year ago, the data storage market was going to be tough to predict this year because of the economy, and indeed, the economy did hold back some technological advances that might otherwise have occurred. With that as a caveat, we’ll look back on how my predictions for this year fared, and then look ahead to what 2010 might hold for data storage and storage networking.

I had predicted that at least one additional vendor would support T10 OSD file systems, which allow better scalability than most current block-based file systems. I got this one somewhat right: Sun Microsystems (NASDAQ: JAVA) announced T10 OSD QFS for OpenSolaris, but then canceled the project in January and open-sourced OSD QFS. Right now, no one I am aware of is offering a T10 OSD-based file system other than Panasas, so I was right for at least one point in time.

I also predicted that Fibre Channel over Ethernet (FCoE) was coming. Well, it’s been hard to miss all the announcements from server, storage and networking vendors about FCoE products and plans. It’ll take a while for FCoE to fully materialize in the marketplace, but it’s well on its way.
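For readers new to the technology: FCoE simply wraps a complete Fibre Channel frame, start-of-frame and end-of-frame delimiters included, inside an Ethernet frame carrying the FCoE EtherType (0x8906). Here is a minimal sketch of that encapsulation; the header layout follows the common description of the T11 FC-BB-5 framing, and the default SOF/EOF code values are illustrative assumptions, not values copied from the standard's tables:

```python
import struct

FCOE_ETHERTYPE = 0x8906          # EtherType assigned to FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x42) -> bytes:
    """Wrap one FC frame (header + payload + CRC) in an FCoE Ethernet frame.

    sof/eof are the encoded start/end-of-frame delimiter bytes; the
    defaults here are illustrative assumptions, not normative values.
    """
    eth_hdr = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    # FCoE header: a 4-bit version (0 here) plus reserved bits, then the SOF byte.
    fcoe_hdr = bytes(13) + bytes([sof])
    trailer = bytes([eof]) + bytes(3)    # EOF byte plus reserved padding
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# Example: a dummy 24-byte FC header with no payload plus its 4-byte CRC.
frame = fcoe_encapsulate(b"\x0e\xfc\x00\x00\x00\x01",
                         b"\x0e\xfc\x00\x00\x00\x02",
                         bytes(24) + bytes(4))
print(len(frame), "bytes on the wire before the Ethernet FCS")
```

The point of the exercise is that nothing about the FC frame itself changes, which is why existing Fibre Channel storage, zoning and management practices carry over largely intact.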

Despite claims to the contrary, I predicted that Fibre Channel development would end with 8Gb. There have been announcements about ratification of a 16Gb FC standard, but I still believe that 16Gb products are not really going to happen. There are a number of issues, including that most server vendors still do not have PCIe 2.0 in larger servers, though that will likely change in 2010. These servers have historically been the driving force for Fibre Channel. A full-duplex, single-port 8Gb FC card requires 1600 MB/sec of PCIe bus bandwidth, while a dual-port card, which is far more cost effective, requires 3200 MB/sec. From a bandwidth perspective, that translates into eight lanes of PCIe 1.1 or four lanes of PCIe 2.0 for the single-port card, or eight lanes of PCIe 2.0 for the dual-port card. The problem is that 16Gb dual-port cards will require 16 lanes of PCIe 2.0. I am aware of some blade vendors making 16-lane PCIe 2.0 buses, but I am not aware of any large servers with 16-lane support, likely because the architectural complexity and the memory interconnect raise the cost of multiple 16-lane buses. With PCIe 3.0 being late (more on this in a moment), and because announcing plans and having a standard does not mean that products will exist, the jury is still out on this one.
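As a sanity check on those numbers, here is a minimal sketch of the lane math. It assumes the simplified counting used above: roughly 100 MB/sec of payload each way per 1 Gbit of FC line rate, and the commonly quoted per-lane figures of 250, 500 and 1000 MB/sec for PCIe 1.1, 2.0 and 3.0:

```python
# Lane math for full-duplex Fibre Channel HBAs, mirroring the
# simplified counting above. All figures are approximate.

LANE_MB_S = {"PCIe 1.1": 250, "PCIe 2.0": 500, "PCIe 3.0": 1000}

def slot_width(lanes: float) -> int:
    """Round up to the next standard slot width."""
    return next(w for w in (1, 2, 4, 8, 16, 32) if w >= lanes)

def fc_slot(fc_gbit: int, ports: int, gen: str) -> int:
    # ~100 MB/sec of payload each way per 1 Gbit of FC line rate,
    # doubled for full duplex: one 8Gb port needs ~1600 MB/sec.
    card_mb_s = fc_gbit * 100 * 2 * ports
    return slot_width(card_mb_s / LANE_MB_S[gen])

for fc, ports in [(8, 1), (8, 2), (16, 2)]:
    for gen in LANE_MB_S:
        print(f"{fc}Gb x{ports}-port on {gen}: x{fc_slot(fc, ports, gen)} slot")
```

The dual-port 16Gb case lands on a full 16-lane PCIe 2.0 slot, which is exactly the support that is missing from large servers today.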

Speaking of PCIe 3.0, I said that the new standard would likely be ratified late this year or early next. I predicted that performance wouldn’t double, as it did in the move from PCIe 1.0 to PCIe 2.0; I expected something more along the lines of a 40 to 60 percent increase. I got this one wrong: PCIe 3.0 ran into some interoperability issues and was pushed back to the second quarter of 2010, and the expected performance is 1 GB/sec per lane, which will double performance. (The spec manages that by running at 8 GT/sec while swapping the 8b/10b encoding of earlier generations for the far more efficient 128b/130b encoding.)

I was right about the direction of hard disk drives this year. As I wrote late last year, “Any increase in enterprise disk drive density will be in SAS drives, not FC drives. Densities will likely increase about 50 percent. It is possible that we might see a 2TB disk drive on the SATA side by the end of the year.” All correct.

As for RAID, I wrote that “A fair number of people in the research community, a few bloggers and some in the HPC community believe that RAID as we know it is a dead-end technology” (see RAID’s Days May Be Numbered). Despite all the interest in the issue, none of the changes predicted there has come to pass so far.

I said last year that we would see some products from vendors that support the T10 Data Integrity Field standard end-to-end. You can purchase HBAs today that support this functionality, and disk drive vendors have released SAS drives that support this field, but we’re still waiting to hear from storage controller vendors.
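For context, T10 DIF appends eight bytes of protection information to each 512-byte sector: a two-byte guard tag (a CRC-16 of the sector data, polynomial 0x8BB7), a two-byte application tag, and a four-byte reference tag that, for Type 1 protection, carries the low 32 bits of the LBA. A minimal sketch of how one such tuple is built:

```python
import struct

def crc16_t10dif(data: bytes) -> int:
    """CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x8BB7 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def dif_tuple(sector: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Build the 8 bytes of Type 1 protection information for one sector."""
    assert len(sector) == 512
    guard = crc16_t10dif(sector)       # catches corruption of the data itself
    ref_tag = lba & 0xFFFFFFFF         # catches misdirected reads and writes
    return struct.pack(">HHI", guard, app_tag, ref_tag)

# Example: protection bytes for a sector of 0xAB bytes at LBA 1234.
print(dif_tuple(b"\xab" * 512, lba=1234).hex())
```

Because every hop in the path, HBA, fabric, controller and drive, can recompute the guard tag and check the reference tag, a corrupted or misplaced block can be caught where it happens rather than months later. That is the end-to-end story the storage controller vendors still need to finish.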

For storage software, I said that ILM vendors would release products that address Sarbanes-Oxley compliance, HIPAA and e-discovery regulations, and indeed, many storage hardware and software vendors have released those types of products.

Another easy one was on file systems, where I said, “There will be nothing new on the file system front. It will be the same problems that we have today and the same problems that we have had for 20 years.” For more on this one, see File System Management Is Headed for Trouble. Another easy prediction was that there were going to be no changes with error management, and again I was right.

I predicted that POSIX changes for things like T10 DIF and ILM would start to be discussed given the huge limitations of POSIX. Well, other than myself, no one is really talking about this yet.

By my count, I got six right and four wrong, while the jury is still out on 16-gig Fibre Channel.


Storage Predictions for 2010 and Beyond

I’ll start with some easy predictions this time around.


  • FCoE will become available end-to-end, with major storage vendors supporting FCoE interfaces by the end of 2010. This should be an easy prediction, as the market is demanding FCoE given the potential cost and cabling savings. It takes longer for the design integration and testing of storage controllers, so it is no surprise that this takes longer than server or interconnect changes.
  • PCIe 3.0 will make it to market, with availability in blades first. This is almost a given at this point.
  • Multiple storage and server vendors will address the end-to-end data integrity problem. Now the predictions are getting tougher. There is definitely a market need given the documented problem of mis-corrected or undetected errors in the data path, and I believe that vendors will fill this market need in 2010. This will be the only major change in file system technology and the first major change in a long time.
  • There will be consolidation in the flash solid state drive (SSD) market in 2010. The number of vendors is simply too large for the size of the market, even with the technology’s growth, so some companies will merge, get bought out, or disappear. And STEC (NASDAQ: STEC) will finally get some competition in the enterprise SSD market from the likes of Pliant and Seagate (NASDAQ: STX).
  • Flash SSD usage will increase, with multiple RAID vendors and multiple controller card vendors providing better support, which means higher bandwidth and more IOPS in 2010.
  • 10 Gigabit Ethernet will become the standard connectivity for almost all systems. Higher-end home PCs from Dell, HP and others will support this technology. Home routers from multiple vendors will have this support, likely before the end of the year (see Enterprise Technologies Will Change the Consumer PC Market).
  • NFSv4.1 (pNFS) will enter the market with products from multiple vendors in 2010.
  • 40 and 100 GbE will continue their march to product availability, with the potential for some switch-to-switch interconnects in 2010 and certain availability in 2011. For dual-port 40 GbE to be viable on the host side, it will require PCIe 3.0 with at least 16 lanes, and even then PCIe 3.0 provides only 16 GB/sec of bandwidth, while 20 GB/sec is needed for full-rate, full-duplex operation (see the sketch after this list). PCIe 3.0 is a must for host-side connectivity.
  • Last but not least: By the end of 2011, the cloud hype you hear today will be greatly diminished. Clouds are good for some things, but they will meet a fate similar to that of the storage service providers (SSPs) and application service providers (ASPs) of the late 1990s, or grid computing in the early part of this decade. SSPs and ASPs still provide services for some applications, but clouds are not going to solve all problems for all enterprises: there is just not enough network bandwidth, the latencies for some applications are too high, and the end-to-end security problem has not been solved in a standard way (see Why Cloud Storage Use Could Be Limited in Enterprises). Besides, does anyone really see enterprises giving up control of their most critical data? I sure don’t.
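Here is the promised sketch of the 40 GbE host-side math, using the same simplified aggregate counting as the Fibre Channel example earlier in this article:

```python
# Host-side bandwidth check for a dual-port 40 GbE card, using the
# same simplified aggregate counting as the Fibre Channel sketch above.

PCIE3_LANE_GB_S = 1.0                 # ~1 GB/sec usable per PCIe 3.0 lane
NIC_GBIT, PORTS = 40, 2

# ~5 GB/sec of payload each way per 40 GbE port; full duplex doubles it.
needed = (NIC_GBIT / 8) * 2 * PORTS   # 20 GB/sec aggregate
provided = 16 * PCIE3_LANE_GB_S       # a full 16-lane PCIe 3.0 slot

print(f"needed: {needed:.0f} GB/sec, PCIe 3.0 x16 provides: {provided:.0f} GB/sec")
# -> even a Gen3 x16 slot comes up 4 GB/sec short of full-rate, full-duplex.
```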

Happy Holidays and best wishes for a prosperous 2010.

 

Henry Newman, CTO of Instrumental Inc. and a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years of experience in high-performance computing and storage.
See more articles by Henry Newman.
