HP, HDS Get New Storage Chiefs


HP and Hitachi Data Systems played musical chairs with their top storage officials over the long weekend, and HDS followed it up today with a new archiving platform.

HP has named HDS CEO David Roberson as senior vice president and general manager of its enterprise storage business, effective May 30, a move that could make the two companies even closer.

HDS responded by naming Minoru Kosuge CEO and promoting Jack Domme, executive vice president of Global Solutions Strategy and Development, to chief operating officer.

Roberson, a 26-year HDS veteran, will take over HP StorageWorks from Bob Schultz, who has continued to run HP StorageWorks since his January appointment as senior vice president and general manager of the newly formed HP Enterprise Server and Storage Software organization. Roberson will report to Scott Stallard, senior vice president and general manager of HP Enterprise Servers and Storage.

HDS wished Roberson well and thanked him for his service, and noted that because of HP and Hitachi’s long-standing partnership — HP OEMs Hitachi’s Universal Storage Platform and Network Storage Controller — the move will deepen ties between the two companies. HDS also stressed that it remains well positioned with its new leadership.

Kosuge is the chief architect of the Hitachi Universal Storage Platform, which just received a major upgrade last week. HDS also noted Domme’s leadership in a number of areas, including storage management software, high-performance file systems and content archiving.

It is in that last area that HDS announced its newest offering today, significantly upgrading the digital archiving appliance it co-created with Archivas by adding diskless storage, data de-duplication and new security features.

HCAP version 2.0 provides an archive tier of storage where aged data on primary storage can be stored for specific periods of time to meet corporate and federal record-retention regulations.

The product competes with EMC’s Centera and IBM’s DR550 machines in the race to command the market for preserving unstructured data, such as files, images and Web content.

In fact, Asim Zaheer, senior director of business development for content archiving at HDS, said HCAP V 2.0 will outperform those machines, thanks to enhancements to the box’s scalability, capacity and speed.

For example, HCAP V 2.0 holds up to 20 petabytes in an 80-node archive system, supporting as many as 32 billion objects. Moreover, Zaheer said performance can be up to 500 percent greater than the first HCAP platform and the competing products from EMC and IBM.

HCAP for the archive. (Source: HDS)

HCAP V 2.0 comes in two versions: as an integrated appliance with HDS’ WMS100 storage array, or as a diskless version (HCAP-DL) in which the storage has been disaggregated from the server, letting customers choose from among all of HDS’ major storage systems — WMS100, AMS200/500/1000, USP V or NSC55 — to match the level of performance they need.

The diskless version of HCAP V 2.0 reduces the number of server nodes required, which means less heat emission and power consumption. Zaheer said this is a major departure from rival systems, whose archive systems include a server with storage embedded in the server.

To safeguard customers’ data, HDS is also introducing new encryption that allows a customer to store security keys in HCAP V 2.0 and “secretly share” each key across multiple nodes within the archive. Rather than storing the whole key on one device, the key is distributed in pieces across all of the nodes within the archive.

This means a user would need all of the nodes, or devices, within the cluster to decrypt the content. So, Zaheer said, if a server or storage device is stolen from the cluster, the device would be unreadable by any other device. Most digital archive systems handle key management as layered applications that sit unprotected outside the system, he added.
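The idea of splitting a key so that every node is required to reconstruct it can be sketched with simple XOR-based "n-of-n" secret sharing. This is an illustrative stand-in, not HDS's actual scheme; the node count and key size below are arbitrary assumptions:

```python
import os

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; ALL n shares are needed to rebuild it.
    Each share alone is indistinguishable from random bytes."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = bytes(key)
    for s in shares:
        # XOR the key with each random share; the remainder is the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def rebuild_key(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = os.urandom(32)        # hypothetical 256-bit archive encryption key
shares = split_key(key, 8)  # one share per node in a hypothetical 8-node cluster
assert rebuild_key(shares) == key
```

Because every share is required, a stolen node holds only one meaningless fragment — which matches the behavior Zaheer describes. Production systems typically use threshold schemes (e.g., Shamir's) so the archive can survive losing a node.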

Moreover, HCAP V 2.0 now employs full object replication (file, metadata and policies), using digital signatures to ensure authenticity, along with data compression for saving bandwidth and encryption of data at rest.

Another new perk for HCAP V 2.0 is the addition of data de-duplication, which eliminates redundant data by retaining only a single unique instance of the data on disk or tape.

To do this, HCAP V 2.0 provides both a hash comparison and binary comparison to ensure objects are duplicates. This avoids so-called “hash collisions,” where different objects could have the same cryptographic hash key. Customers will be able to see how many duplicates were eliminated and the amount of total storage capacity saved.

The HCAP 2.0 comes on the heels of the Universal Storage Platform V, a heavily virtualized storage array designed to help the largest companies in the world store their digital content.

HDS believes HCAP 2.0, used in conjunction with a USP V, can form a formidable archiving and storage powerhouse for big businesses.

For example, as part of a virtual pool with the USP V, data in the archive can be offloaded from expensive disk to less expensive ATA or SATA.

HCAP V 2.0 is priced between $10 and $14 per gigabyte, contingent on the storage platform, disk size and performance optimization. But HDS said an entry-level, 5-terabyte system runs around $70,000.

Clint Boulton is managing editor of InternetNews.com, and Paul Shread is managing editor of Enterprise Storage Forum.

