Cover Story: Storage Management
The CIO's mandate
Organizations are struggling to cope with data volumes that
are doubling year on year. If there is a magic bullet that can help control
this situation, it is storage policy, say Akhtar Pasha and Abhinav Singh.
Wipro Technologies has a set of policies
that let it give top priority to business critical processes such as SAP and
the employee portal. A combination of Veritas Backup Exec backup-retrieval software
and backup policies enables Wipro to recover lost data from its Chennai disaster
recovery site within four hours. Internal policies are used to back up
every senior manager's data on a daily basis.
Storage policies are a powerful tool in
the CIO's enterprise toolkit. An enterprise-wide storage management policy is
an important strategic tool that can bring about operational efficiency and
improve the performance of storage systems within the enterprise. Policy encompasses
both proactive and reactive functions: maintaining operations within prescribed
limits, responding to events, running rule-based backups when systems are not
in use, and ensuring business continuance when lost data must be retrieved.
Policies also make it possible to change the way storage is used without altering
the physical topology of the storage set-up. The goal in using policy-based management is to automate
processes and attain consistency in storage management. It all begins with the
separation of mission critical data from information that isn't critical to
the business. From this foundation, companies build proactive and reactive policies
accordingly, to formulate business continuity and disaster recovery plans.
Methods to identify mission critical data
The identification of mission critical
data varies from one business to another, as do the business processes that
IT supports. A single database going down can affect enterprise business if the database
is a repository for all the information generated by the company's ERP system.
It is important to identify the business angle and the business need met by
a data store. The priority while retrieving data depends upon the cost of downtime
if that data store is unavailable.
P. S. Karthikeyan, chief delivery officer
(CDO) at iGATE Global Solutions Ltd (IGS), formerly Mascot Systems, says, "We
identify mission critical data based on asset classification that takes into
account various parameters such as confidentiality, integrity, and availability."
The values are then fed to an in-house tool, which prioritizes the asset in
terms of its criticality. An appropriate business continuity and disaster recovery
plan ensures data availability when things go wrong.
Mission critical data is classified according
to the monetary value attached to it. For instance, core banking, ERP, and trading
applications are of immense monetary value to an organization. Loss of data
generated by any of these applications will severely impact an organization's
financials. Tools such as storage resource management (SRM) can analyze data
generated by mission critical systems. By digging into a database system's space
allocation vis-à-vis available space, SRM software ensures that applications
don't take up more space than necessary.
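The allocation-versus-usage check described above can be sketched in a few lines of Python. This is an illustrative model only, not any particular SRM product's logic; the database names and figures are hypothetical.

```python
# Sketch of the kind of allocation-vs-usage check SRM tools perform.
# All database names and figures here are hypothetical examples.

def overallocation_report(databases, threshold=0.5):
    """Flag databases whose allocated space far exceeds actual usage.

    `databases` maps a name to (allocated_gb, used_gb); a database is
    flagged when it uses less than `threshold` of its allocation.
    Returns a map of flagged names to reclaimable gigabytes.
    """
    flagged = {}
    for name, (allocated_gb, used_gb) in databases.items():
        if allocated_gb > 0 and used_gb / allocated_gb < threshold:
            flagged[name] = allocated_gb - used_gb  # reclaimable space
    return flagged

# Example: the ERP database uses only 120 of 500 allocated GB.
report = overallocation_report({
    "erp_core": (500, 120),
    "mail_archive": (200, 180),
})
print(report)  # {'erp_core': 380}
```

A real SRM tool would pull the allocation figures from the storage arrays and database catalogs rather than take them as arguments, but the policy decision reduces to a threshold comparison like this one.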
Importance of enterprise-wide storage policies
Having a storage policy is critical to
the process of designing, deploying, and managing enterprise storage. An enterprise
storage policy provides a clear roadmap for building a storage architecture
that encompasses usage patterns, budgetary allocation, and
requirements of end-users. It also helps evolve a model to differentiate between
structured and unstructured data.
Developing a policy-based storage management
strategy is an effective technique for controlling costs and for making better
deployment decisions. As applications evolve or user needs change, there is
an associated change in the physical environment. The most obvious of these
parameters is the constant growth in disk space. What might start as a small
application requiring minimal space and low access rates often grows over time
into a high-volume application, eating up gigabytes or even terabytes of storage
requiring high-speed access. If an enterprise treats everything equally, it
may spend too much on low-demand applications or experience service
problems while running high-demand applications.
Every company needs some form of data storage
policy, regardless of its size. For small companies, this may simply involve
managing the lifecycle of documents. For a large enterprise, it will involve
making an assessment of how data is collected and stored. In general terms,
the larger the company, the greater the need for formalized policies. By enforcing
disk space limits on directories, shares and servers, a CIO can:
1. Control the amount of storage being
consumed by users.
2. Manage what data and file types users
store on servers and storage devices.
3. Create and maintain storage pools based
on Quality of Service (QoS) requirements. For instance, an administrator can
create a storage pool that consists of RAID or striped storage devices to meet
reliability requirements, and can create a storage pool that consists of random
or sequential access or low-latency storage devices to meet high-performance requirements.
4. Set storage quotas on users. An administrator
can set the default at X MB of space, and then assign additional storage depending
on how much each user needs. For example, some users may get by with 50 MB of
disk space while others get 100 MB depending upon the storage requirement of
that individual. Misuse of storage can be minimized by setting quotas, clamping
a tight lid on MP3s, videos, and games. Storage quotas help enforce QoS for storage.
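Points 2 and 4 above can be sketched as a simple policy check: a default quota with per-user overrides, plus a blocklist of file types. This is a minimal illustration, not a product's implementation; the usernames, quota figures, and extension list are hypothetical.

```python
# Illustrative sketch of the per-user quota policy from the text: a
# default allowance with per-user overrides, and a file-type blocklist.

DEFAULT_QUOTA_MB = 50
QUOTA_OVERRIDES_MB = {"designer1": 100}        # users who need more space
BLOCKED_EXTENSIONS = {".mp3", ".avi", ".exe"}  # MP3, video, games

def quota_for(user):
    """Return the user's quota in MB, falling back to the default."""
    return QUOTA_OVERRIDES_MB.get(user, DEFAULT_QUOTA_MB)

def may_store(user, filename, current_usage_mb, file_size_mb):
    """Allow a write only if the file type is permitted and the user
    stays within quota after the write."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext in BLOCKED_EXTENSIONS:
        return False
    return current_usage_mb + file_size_mb <= quota_for(user)

print(may_store("designer1", "layout.psd", 80, 15))  # True (within 100 MB)
print(may_store("intern", "song.mp3", 0, 3))         # False (blocked type)
```

On an actual file server the same rules would typically be enforced by the operating system's quota facilities (for example, Linux disk quotas or Windows File Server Resource Manager) rather than application code.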
There are many sources from which data
can be obtained, and these need to be examined and included in an organization's
storage policies. It is easy for employees to download files from the Internet
that may end up lying unused over time. Instead of downloading a document, the
IT administrator can suggest storing only the URL. Some companies go further
and block employees from downloading files from the Internet.
Using policy-based storage management means
spending some time up front determining what services you will provide and how
those services will be applied for different requirements. You need to write
policies for managing your storage services based upon the requirements of the
applications and users. These service requirements become the criteria for selecting
software or hardware.
But what's the right data storage policy?
The answer is relatively simple. You know you have the right policy when you
are in control of the influx of data, your backups are happening in a manageable
timeframe and you are able to ensure that your employees have access to the
correct data in the right format when they need it. The payoff of adopting
a data storage policy is that it lets you stay on top of your growing storage
pool. If a storage policy isn't put in place, it will lead to a gradual slowdown
in the flow of information throughout the company. If left unchecked, this will
affect the health of the organization and its ability to respond in a timely
manner to changes in the environment that it operates in.
Effective backup and retrieval
Enterprises have their own way of backing
up and retrieving data. Karthikeyan of iGATE says, "We have a comprehensive
backup and retrieval policy in place. Data for projects is identified and backed
up on a daily, weekly and monthly basis. Veritas Backup Exec is
used for centralized backup.
Daily and weekly backups are stored at
remote locations at the Bangalore data center. Another copy of the monthly backup
is stored at our Chennai data center." Additional backup is provided for projects
that come with a special request from iGATE's clients. For closed projects,
iGATE maintains three copies of backup. One tape will be stored in a remote
location within the same city (Bangalore) and the other copy will be sent to
the Information Center in the same premises. The backups from Bangalore are
then sent to iGATE's Chennai data center. The tapes are marked for easy identification
and accessibility. The backup tapes are stored in a fireproof cabinet of international
standards located in an air-conditioned room to ensure quality and shelf life
of the tapes. The company has similar storage hardware set-ups at its Bangalore
and Chennai data centers. iGATE
doesn't use its WAN link for remote data replication.
Texas Instruments has a backup policy of
taking a backup of its entire chip design data on to a Quantum ATL tape library
every single day. As a rule the company keeps 14 days of data on site after
which it is manually removed to a safe location. The cartridges are indexed
using bar coding for identification.
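The Texas Instruments rotation rule described above — daily backups kept on site for 14 days, then manually moved to a safe location — reduces to a simple date comparison. The sketch below is an illustration of that rule, not TI's actual tooling; the dates are hypothetical.

```python
# Sketch of the rotation rule described above: daily backups stay on
# site for 14 days, after which the cartridges are flagged for manual
# removal to a safe location. Dates are hypothetical.

from datetime import date, timedelta

ON_SITE_RETENTION_DAYS = 14

def tapes_to_move_offsite(backup_dates, today):
    """Return backup dates older than the on-site retention window."""
    cutoff = today - timedelta(days=ON_SITE_RETENTION_DAYS)
    return sorted(d for d in backup_dates if d < cutoff)

# Twenty consecutive daily backups: the five oldest fall outside
# the 14-day window and are due for off-site storage.
today = date(2003, 6, 20)
backups = [today - timedelta(days=n) for n in range(0, 20)]
stale = tapes_to_move_offsite(backups, today)
print(len(stale))  # 5
```

Bar-coded cartridge labels, as TI uses, let an operator match the dates this kind of report produces to physical tapes.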
Akhtar Pasha can be reached at Akhtar@expresscomputeronline.com
Storage policies are vital to control the information explosion that's
a fact of life in the Indian enterprise today. But what is a storage policy?
Simply put, a storage policy is a set of procedures that are implemented
to control and manage data within an organization. These range from policies
that determine how data is collected and stored, to a set of applications
that manage all aspects of data storage.
A good starting point for a data storage policy is to determine how
the data should be stored. Should it be stored online, near-line, or offline?
Effective archiving can dramatically reduce the size of daily backups.
Volatile data that needs to be edited or updated regularly should be
stored online, on devices that are part of the normal storage infrastructure:
Direct Attached Storage (DAS), a Storage Area Network (SAN),
or Network Attached Storage (NAS). Read-only material that needs to be
accessed but not updated can be stored near-line on disk or tape.
Finally, data that is unlikely to be required but needs to be kept for
legal reasons, for example financial information, can be stored offline
on tape and kept either on or off-site.
As a guide to developing a data storage policy, the Butler Group suggests
five steps. These are:
- Establish a data storage budget
- Assess data availability requirements
- Measure security levels
- Assess legal and governmental requirements
- Implement a data policy across the corporation
It is now widely accepted that the massive growth in data storage is
making it increasingly difficult for administrators to conduct routine
maintenance tasks such as backups and disaster recovery provisioning.
Enterprises are looking at automated policy-based backup to reduce human
intervention and replicate data without bringing down applications.
Business Continuity Plan (BCP) is defined as the process of developing
arrangements and procedures that help an organization respond to an event
in such a manner that critical business functions continue without interruption
or significant change. The whole idea centers on one point: back up
your data to a remote location so that if one location goes down, the
remote location can take over with the minimum unavoidable amount of disruption.
BCP is done when an enterprise has consolidated its data storage to a
single location. Avijit Basu, marketing manager, NSSO, HP India says,
"If you have consolidated your data at a central location then it
becomes a lot easier in terms of maintenance and administration of systems.
This is the time to go in for BCP."
Let us look at how prepared Indian companies are when it comes to having
a BCP and DR plan. Large companies such as Citibank, ICICI Bank, HDFC
bank, National Stock Exchange (NSE), ONGC, BPCL, and Tata Teleservices
Ltd have full-fledged BCP and DR sites. However, a 2002 KPMG survey on
the preparedness of Indian industry found that the majority of Indian
companies, including those in the IT and telecommunication sectors, do
not have any BCP in place. Still, BFSI, oil and gas, and IT services companies are adopting
BCP. Texas Instruments (India) is in the process of formulating a BCP; it
already has a proper backup and recovery plan in place with backups and
replicated catalogues kept offsite. Frost & Sullivan estimates the
Indian business continuity market at $30 million in 2002 accounting for
roughly 7.5 percent of the Asia Pacific market that was valued at $410
million. IDC estimates that China and India will be the fastest-growing
market for disaster recovery services in Asia-Pacific excluding Japan.
Sunil Rangreji, general manager, global IT infrastructure at Wipro Technologies,
says, "We do an impact analysis as to what impact a certain amount
of data will have on the organization in case it is lost. Our SAP applications
have 99 percent plus priority."
Wipro Technologies has a DR center in Chennai, which replicates all
the data being processed by any of Wipro's facilities across the world.
In case of a major disaster, the Chennai DR center can run the whole system
within four hours of a strategic center going down.
"Every six months we have a DR restoration exercise where for one
whole week all the systems and processes of Wipro are shut down across
the world and are run from our DR center at Chennai," says Rangreji
adding, "Our business continuance parameters include how to optimise
storage and solve administrative overheads involved in business continuance
and to effectively consolidate backup at one central location."
CIOs have their own way of calculating ROI (Return on Investment) on
network storage. Some look at it in terms of pure financial benefits while
others calculate ROI on the basis of intangible benefits. Traditional
methods of calculating ROI such as Internal Rate of Return (IRR), payback
analysis, Net Present Value (NPV) and Economic Value Added (EVA) have
been in use for quite a while for assessing the real value of storage. That
said, there are shortcomings of such practices when they are applied by
large enterprises such as telcos because of the huge project sizes involved.
The problem with relying solely upon financial techniques such as NPV or IRR
is that they don't necessarily capture all of the business benefits of
an IT investment, nor do they help to evaluate all of the options that
are open to enterprises. A simpler method of calculating ROI makes sense
in such cases: it's called payback. Here CIOs rely entirely on the
intangible benefits gained from an IT implementation instead of calculating
ROI in pure money terms.
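Two of the traditional measures named above, NPV and payback, can be written out directly. The sketch below uses textbook formulas; the cash-flow figures for the storage investment are hypothetical.

```python
# NPV and payback for a storage project, using the standard textbook
# definitions. Cash-flow figures are hypothetical.

def npv(rate, cashflows):
    """Net Present Value: cashflows[0] is the up-front (negative) outlay,
    cashflows[t] the net cash flow in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Years until cumulative cash flow turns non-negative, or None."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None

# Rs 100,000 outlay, Rs 40,000 saved per year for four years.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
print(round(npv(0.10, flows)))  # 26795 at a 10% discount rate
print(payback_period(flows))    # 3 -- breaks even in year three
```

The numbers illustrate the article's point: a positive NPV or a short payback is easy to compute once the cash flows are known, but benefits like "100 percent uptime for chip designers" never appear in the `flows` list at all.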
Kameswar V Nagavarapu, general manager-IT, TI India Pvt Ltd, says, "Calculating
storage ROI in financial terms will be difficult. Being a chip design
company, if I'm able to provide 100 percent uptime to my chip designers,
that will be my ROI."
Another way of calculating ROI would be from the total cost of ownership
(TCO) angle. A CIO's first step will be to analyze his or her business
goals, what the organization wants to achieve and the kind of benefits
it wants to enjoy. To explain the complexity involved in calculating ROI
consider this hypothetical case: a CIO is faced with buying six Unix servers
in order to add new services to the company's website. The consideration
isn't just the hardware being bought but the business capability achieved
in three weeks instead of three months. That kind of benefit is hard to
put a Rupee value on.
Sometimes financial justifications have to be tossed aside when an IT
investment simply makes good business sense. Consider a statement by Tony
Scott, Chief Technology Officer of General Motors Corporation: "We have some areas where we use metrics
to measure (business returns). But in some instances, we have to go on