Enterprise storage: Getting the accessibility-cost balance right

MANILA, Philippines - There’s no denying that data is a valuable resource in today’s globalized marketplace. Data’s value, however, can only be extracted if it is available to users when they want it. Organizations recognize this and put their operational data on online disk storage, from where it can be quickly retrieved. While the value of data tends to decrease over time, non-production data, too, need to be readily available for business intelligence, governance and compliance purposes.

Given the growing volumes of data that modern organizations create, store and manage, balancing the requirements of timely retrieval and the cost of storing data so that it is easily accessible can be a challenge.

To meet this challenge, more organizations are adopting storage tiering. In tiered storage, fresh, high-value data are stored on high-performance drives while older, less frequently accessed data are moved to high-capacity, lower-cost drives, thus keeping costs manageable without compromising accessibility.
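
To make the idea concrete, here is a minimal sketch of an age-based tiering policy in Python. The tier names, thresholds and decision rule are illustrative assumptions, not any vendor's actual algorithm; real arrays weigh many more signals than last-access time.

```python
from datetime import datetime, timedelta

# Illustrative tier names and age thresholds; real products apply their own policies.
TIERS = [
    ("tier1_ssd", timedelta(days=7)),    # hot: touched within the last week
    ("tier2_sas", timedelta(days=90)),   # warm: touched within the last quarter
    ("tier3_sata", None),                # cold: everything older
]

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier from how recently a block of data was accessed."""
    age = now - last_access
    for name, limit in TIERS:
        if limit is None or age <= limit:
            return name
    return TIERS[-1][0]

now = datetime.utcnow()
print(assign_tier(now - timedelta(days=30), now))   # -> "tier2_sas"
```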

Falling short

Most tiered storage solutions, however, do not integrate real-time intelligence. Instead, they employ an application - or “agent” - to determine whether data should be moved from one tier to another. The agent, which requires a distinct server, is run periodically or on demand. Because the agent does not collect information about data all the time, the information it collects is often outdated by the time the data are actually moved. This results in data being placed in the wrong tier and/or on the wrong disk type, affecting both storage cost and speed of access and retrieval.
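
The staleness problem can be illustrated with a small sketch, assuming a simple hit-count heuristic. All class names and thresholds below are hypothetical; the point is only that decisions made from a periodic snapshot cannot reflect access-pattern changes that happen between scans.

```python
# Agent-style tiering: usage statistics are captured only when the periodic
# scan runs, so any change in access patterns between scans is invisible
# until the next run.

class PeriodicTieringAgent:
    def __init__(self, hot_threshold: int = 100):
        self.hot_threshold = hot_threshold
        self.snapshot = {}                      # block_id -> hit count at last scan

    def scan(self, current_hits: dict) -> None:
        """Take a point-in-time snapshot of access counts (runs, say, nightly)."""
        self.snapshot = dict(current_hits)

    def plan_moves(self) -> dict:
        """Place blocks using the stale snapshot, not live access data."""
        return {blk: ("tier1" if hits >= self.hot_threshold else "tier3")
                for blk, hits in self.snapshot.items()}

agent = PeriodicTieringAgent()
agent.scan({"blk_a": 500, "blk_b": 3})
# If blk_b turns hot an hour after the scan, it still sits on tier3 until the
# next scheduled run.
print(agent.plan_moves())                       # {'blk_a': 'tier1', 'blk_b': 'tier3'}
```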

Some tiered storage solutions also require pre-determined tiering allocations, which add to the storage administration burden. More importantly, pre-set allocations reduce the flexibility organizations need to optimize their environment as their needs change.

Sub-optimal write performance, which impacts accessibility, is another shortfall of tiered storage solutions built on traditional storage architectures. In those solutions, data are written to a particular block and stay in that block. So if the block is migrated to, say, Tier 3 and a new write comes in for that volume, the write lands on Tier 3. Manual intervention is then needed to move such misplaced data to the appropriate storage tier and/or RAID level.

Smart, automated tiering

These shortfalls are non-existent in Dell Compellent Data Progression, the industry’s only proven automated tiered storage solution. The software maintains continual awareness of blocks of data and captures real-time usage characteristics for each block.

These characteristics include information on when the blocks were created, which drives hold the blocks, the associated virtual volume, how frequently the blocks are accessed or changed, and whether the blocks represent actual data or virtual pointers to data. Using this information, Dell Compellent Data Progression then automatically migrates the blocks of data to the optimum storage tier and/or RAID level.
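As a rough illustration of the kind of per-block bookkeeping described above, the sketch below models that metadata as a simple Python record. The field and function names are assumptions for explanation only, not Dell Compellent's actual data structures or policy.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BlockMetadata:
    block_id: int
    created_at: datetime      # when the block was written
    drive_id: str             # which physical drive currently holds it
    volume_id: str            # the associated virtual volume
    access_count: int         # how frequently the block is accessed or changed
    last_access: datetime
    is_pointer: bool          # True if the block is a virtual pointer, not actual data

def should_demote(meta: BlockMetadata, now: datetime, idle_days: int = 90) -> bool:
    """Flag real data blocks that have gone cold so they can move to a lower tier."""
    return not meta.is_pointer and (now - meta.last_access).days >= idle_days
```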

“Unlike other tiered storage solutions, Dell Compellent Data Progression does not require additional hardware or server-side agents to operate. It is fully automated and integrates into the storage layer. The solution manages data at a very granular level, enabling administrators to increase storage efficiency and, in turn, reduce storage costs,” said Eric Kang, storage product solutions manager of Dell South Asia.

This fine granularity further distinguishes Dell Compellent Data Progression from other solutions. The Dell Compellent software moves data in 512 KB, 2 MB or 4 MB blocks, whereas other tiering solutions use large same-size blocks - called pages - ranging from 16 MB to 1 GB in size.

Besides being more efficient, the use of smaller blocks increases storage cost-effectiveness, as data are moved and placed with greater precision within the tiered environment. It also speeds up access to and retrieval of data.
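
A back-of-the-envelope calculation shows why the unit size matters. The figures below are hypothetical: they simply compare how much premium Tier 1 capacity is consumed when 2 MB of data turns hot under different migration unit sizes.

```python
import math

hot_data_mb = 2
for unit_mb in (0.5, 2, 4, 16, 1024):            # 512 KB, 2 MB, 4 MB blocks vs 16 MB and 1 GB pages
    units = math.ceil(hot_data_mb / unit_mb)     # whole units must be moved
    print(f"{unit_mb:>6} MB unit -> {units * unit_mb:>7.1f} MB of Tier 1 consumed")

# With 2 MB blocks the promotion costs 2 MB of SSD; with 1 GB pages the same
# 2 MB of hot data drags a full 1024 MB onto the most expensive tier.
```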

Just-right provisioning

Another way to lower storage costs is to provision only what’s needed when it’s needed. The practice is called thin provisioning. In the traditional storage-allocation model - so-called “fat” or “thick” provisioning - administrators estimate the capacity required for a given application and pre-allocate extra physical disk space to accommodate growth.

Other applications cannot use the pre-allocated disk space, and it cannot be reclaimed later. In many cases, only a fraction of the pre-allocated capacity is actually used, resulting in wasted storage - and high TCO.
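
The difference between the two allocation models can be sketched in a few lines of Python. The class names and capacities are illustrative assumptions, not any product's implementation; the point is when physical space leaves the shared pool.

```python
class ThickVolume:
    """'Fat' provisioning: the full estimated capacity is reserved up front."""
    def __init__(self, provisioned_gb: int):
        self.physical_gb = provisioned_gb   # taken from the pool immediately
        self.used_gb = 0

class ThinVolume:
    """Thin provisioning: the host sees the full size, but space is drawn only on write."""
    def __init__(self, provisioned_gb: int):
        self.provisioned_gb = provisioned_gb
        self.physical_gb = 0                # nothing drawn from the pool yet

    def write(self, gb: int) -> None:
        self.physical_gb += gb              # capacity is consumed only as data lands

thick, thin = ThickVolume(1000), ThinVolume(1000)
thin.write(120)
print(thick.physical_gb, thin.physical_gb)  # 1000 vs 120: the other 880 GB stays in the shared pool
```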

Using thin provisioning, organizations can drive up storage capacity utilization with very little administrative overhead. Some thin provisioning solutions, however, require that an initial increment of physical storage be allocated when the thin-provisioned volume is created.

Although all thin provisioning implementations allow organizations to configure their own utilization thresholds, Gartner found in its report “All Thin Provisioning is Not Created Equal: What You Need to Know About Thin Provisioning Implementations” that about two-thirds of implementations specify a maximum, generally around 95 percent.

Gartner also found that less than half of the thin provisioning implementations it examined can automatically reclaim provisioned space where blocks have been used and then erased.
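
The two behaviors discussed above can be sketched as follows. The 80 percent warning level is an illustrative assumption (only the roughly 95 percent ceiling comes from the report), and the reclaim function is a simplified stand-in for what an array with automatic space reclamation does after blocks are erased.

```python
def check_pool(used_gb: float, total_gb: float,
               warn_at: float = 0.80, ceiling: float = 0.95) -> str:
    """Report thin-pool health against a warning threshold and a hard ceiling."""
    utilization = used_gb / total_gb
    if utilization >= ceiling:
        return "stop: pool at maximum utilization"
    if utilization >= warn_at:
        return "warn: expand the pool or reclaim space"
    return "ok"

def reclaim(allocated_blocks: set, erased_blocks: set) -> set:
    """Return erased blocks to the free pool; arrays without auto-reclaim keep them allocated."""
    return allocated_blocks - erased_blocks

print(check_pool(used_gb=820, total_gb=1000))             # -> "warn: expand the pool or reclaim space"
print(reclaim({"b1", "b2", "b3"}, erased_blocks={"b2"}))  # -> {'b1', 'b3'} remain allocated
```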

Some storage systems require administrators to predefine RAID sets and later format additional space to accommodate volume expansion. To make changes with those solutions, administrators have to free up additional storage capacity and migrate data.

Others offer a form of thin provisioning, but the capability applies only to storage contained within a newly created virtualized storage pool.

Highest storage utilization possible

In contrast, Dell Compellent Dynamic Capacity software for thin provisioning allows administrators to expand or shrink volumes on demand without being bound to RAID set capacity or performance limitations.

“Dell Compellent Dynamic Capacity delivers the highest enterprise storage utilization possible by eliminating pre-allocated but unused capacity. The solution completely separates allocation from utilization, enabling organizations to provision any size volume upfront yet only consume disk space when data is written. This leaves unused disk space in the storage pool for other servers and applications, helping to create a flexible storage pool required for effective tiering. What’s more, disk space is reclaimed after files are deleted,” Kang said.

How Dell Compellent technology is helping Fisher College of Business

One organization that is benefiting from automated storage tiering and thin provisioning is the Ohio State University’s Fisher College of Business, which consistently ranks among the top business schools in the United States. The college implemented a Dell Compellent SAN in 2003 to provide the foundation for its IT infrastructure.

The college stores years of research, student files and e-mails, most of which are accessed infrequently. Before installing its SAN, IT staff members had to manually classify the school’s least-active data.

Since adding Dell Compellent Data Progression, Fisher has been able to automatically track data usage and migrate 40 percent of data to energy-efficient and economical SATA drives.

Since the Dell Compellent SAN was first installed, its flexibility has allowed Fisher to support rapid data growth and new technology demands without having to rip and replace the SAN whenever the school’s storage needs outgrow it - the SAN can scale from one to hundreds of terabytes on a single platform.

The SAN Fisher purchased years ago has scaled capacity, connectivity and performance incrementally as the college’s needs have changed.

“We are doubling our storage space about every two years,” says Brian Wilson, director for technology at Fisher College of Business. “Compellent has made expansion more affordable and less painful.”

One key to the college’s scalability is Dell Compellent Dynamic Capacity, which allows the IT staff to quickly create any size volume and only consume disk space when data is actually written.

That difference has made over-purchasing and over-provisioning disk space a problem of the past at Fisher. Increased disk utilization has also enabled Fisher to reduce costs associated with powering and cooling excess disk space.

(Find out more about getting storage and data management right at www.dellstorage.com or contact Charlotte Rogacion-Francisco, marketing senior manager, SADMG, at +632 7068024 or e-mail her at [email protected].)
