Storage: Getting the most from your SAN
Three-tier architectures with a Web-based front-end, an application
server and a database server are commonplace. With an increase in the number
of users, more data being stored at the back-end, and a consequent rise in the
number of requests that a database server has to address, it is harder than
ever to figure out where a bottleneck lies in this tangled mess. But solutions
are available in the market that help CIOs optimise application performance.
Vendors also suggest certain best practices which can help squash the speed
bumps in an enterprise data centre. By Rishiraj Verma and Aishwarya
A company's growth is almost always the reason for bottlenecks. Along with
organisational growth, the data that is accessed daily by its employees also
grows. This translates into more users concurrently trying to access data. One
way out is to add devices to the storage sub-system. However, in a traditional
file server environment, adding storage devices may increase an administrator's
burdens and may have an adverse impact on performance and device availability.
Says Manish Bapat, National Manager, NAS and CAS, EMC India
and Saarc, "As requirements grew, companies bought more devices.
Data in such cases tends to be accessed from each device separately." The picture
changed when the concept of networked storage, i.e. SAN and NAS, was introduced.
This wasn't, however, a final solution to these problems. Data growth did
not stop, and the need for innovation was felt again.
It is this constantly growing data that may be a cause of
further concern. According to Ketan Parekh, CTO, Sharekhan, "One of the
critical problems while dealing with networked storage is accurately estimating
the growth or throughput of data." He is of the opinion that a correct
estimate here can help in choosing the right number and size of the discs to
be used as part of the SAN.
Neeraj Matiyani, Business Manager, StorageWorks Division,
HP India, concurs with Parekh's view. "Improper planning of the network
is a contributing factor to bottlenecks. Most organisations face this
problem because growth may exceed expectations." The first step is to understand
how much data may be generated and accessed in the near term, and there is no
room for miscalculation during this step.
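That first step can be illustrated with a back-of-the-envelope projection: take the current footprint, assume a growth rate, and add headroom for the case Matiyani describes, where growth exceeds expectations. A minimal sketch (the figures and the 25 percent headroom factor are invented for illustration, not from any vendor):

```python
def projected_capacity_tb(current_tb, monthly_growth_rate, months, headroom=0.25):
    """Project storage demand under compound monthly growth, then add a
    headroom buffer so growth that exceeds the estimate does not
    immediately cause a shortfall."""
    projected = current_tb * (1 + monthly_growth_rate) ** months
    return projected * (1 + headroom)

# e.g. 10 TB today, an assumed 5 percent monthly growth, planning 24 months out
need = projected_capacity_tb(10, 0.05, 24)  # roughly 40 TB
```

Even a crude model like this makes the assumptions explicit, so the estimate can be revisited when actual growth diverges from the plan.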
Improper classification of data in the early stages may
be the cause of grief down the line. Says Soumitra Agarwal, Marketing Director,
NetApp India, "Unstructured data accounts for about 75 percent of an organisation's
storage capacity." He insists, therefore, that issues of classifying, indexing
and moving unstructured data to lower-cost tiers must be addressed well in time.
There might also be a sudden increase in the volume of data
being accessed over the network. An increase in the number of users, or again,
the growth of the organisation, could be a probable cause of this condition.
Says Shailesh Agarwal, Country Manager, Storage, IBM India, "If the data
is old and is no longer being used, then archiving it to tapes would be the
most viable option. However, in the case of critical data, transferring it to
lower-cost SATA discs with the use of technologies such as storage virtualisation
would be ideal."
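The placement rule Shailesh Agarwal describes (old, unused data to tape; critical data to cheaper SATA discs) can be sketched as a simple age-based policy. The tier names and thresholds below are invented for illustration, not taken from any product:

```python
import datetime

# Hypothetical tiering policy: thresholds are illustrative only.
ARCHIVE_AFTER_DAYS = 365   # cold, non-critical data -> tape archive
DEMOTE_AFTER_DAYS = 90     # warm data -> lower-cost SATA tier

def placement(last_accessed, today, critical=False):
    """Suggest a storage tier from a file's last-access age."""
    age = (today - last_accessed).days
    if age >= ARCHIVE_AFTER_DAYS and not critical:
        return "tape-archive"
    if age >= DEMOTE_AFTER_DAYS:
        return "sata-tier"
    return "primary"

today = datetime.date(2024, 1, 1)
placement(datetime.date(2022, 1, 1), today)                 # old, non-critical -> tape
placement(datetime.date(2022, 1, 1), today, critical=True)  # old but critical -> SATA
```

In practice such rules would be enforced by an information lifecycle management or virtualisation layer rather than hand-written scripts, but the decision logic is essentially this shape.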
||Enterprise IT Management
||It enables management control across the enterprise
by integrating and automating the management of IT applications, databases,
networks, security, storage and systems across departments and disciplines.
It addresses the needs of businesses in four main categories: business service
optimisation, enterprise systems management, security management and storage
management. This integrated approach to IT management helps identify bottlenecks.
||The solution captures metrics for monitoring and
tuning multi-tier applications. It provides a view of how applications are
performing from an end-user's perspective. It also delivers the information
needed to fine-tune applications by pinpointing friction points in the end-to-end
path across multiple tiers; provides the ability to drill down into hot-spots
to identify the application-specific concerns; offers best practice recommendations;
and enables the easy validation of the effectiveness of any corrective action.
||This remote IT management tool looks at various parameters,
and monitors processes to help figure out where a problem lies. It combines
an IT management platform with a secure remote access technology. It combines
desktop, server, network and application management into a single integrated solution.
||The Infrastructure Optimisation Management layer
in HP's OpenView takes care of issues like virtualisation, performance management
and end-to-end user application performance management. HP Application management
solutions enable administrators to build manageable applications, optimise
them in pre-production, and manage the entire application environment in
production from both an end-user and infrastructure perspective. Users can
drill down to the component or even the method level of the application
to determine the root-cause of a problem.
||Tivoli Business Application Management
||It helps ensure availability and performance of business-critical
applications, including portal and service oriented architecture-based technologies.
It also assists in planning, management and optimisation of a customer's
software assets. The solution helps customers quickly isolate, diagnose
and fix business-critical application performance problems. When an incident
occurs, the Tivoli solution helps resolve it by facilitating the information
flow between operations, development and support groups.
A potential answer
Virtualisation, in one form or
the other, has always been considered key to reducing IT-related costs
and increasing efficiency. In the case of SANs too, virtualisation can
help cut down on an administrator's list of problems.
Once the CIO has identified the storage needs, he needs to look at ways of
ensuring that the infrastructure is used optimally, increasing efficiency
without incurring further expense.
Virtualisation, in one form or the other, has always been
considered key to reducing IT-related costs and increasing efficiency. In the
case of SANs too, virtualisation can help cut down on an administrator's
list of problems. It can also help drastically reduce the complexity involved
in creating, editing and deleting files and applications over a network.
With regard to SAN optimisation, Global Namespace (GNS) seems
to be something of an answer to a CIO's troubles. GNS is a logical layer
that is inserted between clients and the file systems so that users can access
data independent of where a physical file is located.
GNS uses a set of paths and file names (in the form of URLs) that help access
specific data. Clients can use the same path for the data, regardless of its
geographical location. However, the use of the same path to access data does
not mean that the same caches are used to route all requests. As Soumitra Agarwal
explains, "A GNS is separated from the actual components (clients,
caches and servers) that it is built from. It is a one-to-one mapping between
a piece of data and the path to it."
CIOs seem to be welcoming this idea. Parekh feels that GNS
is a good concept because it shields the end-user from the changes taking
place in the storage set-up where data resides. In effect, GNS does much
the same thing for storage that DNS does for networking. Clients are able to
access data without knowing where it is in the same manner that users access
a Web site without knowing its IP address. Apart from helping users, this also
helps administrators add, change, move, and reconfigure physical file storage
without affecting how users view and access the same.
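The DNS analogy can be made concrete with a toy resolver: clients hold a stable logical path, and only the namespace's mapping to physical locations changes when an administrator migrates data. The paths and filer names below are invented for illustration:

```python
# Minimal sketch of the global-namespace idea: the client-visible path
# never changes, while the physical location behind it can.
namespace = {
    "/corp/finance/q3-report.xls": "filer-a:/vol2/finance/q3-report.xls",
}

def resolve(logical_path):
    """Map a client-visible path to its current physical location."""
    return namespace[logical_path]

before = resolve("/corp/finance/q3-report.xls")

# An administrator migrates the file to a different filer; clients keep
# using the same logical path and are unaware of the move.
namespace["/corp/finance/q3-report.xls"] = "filer-b:/vol7/finance/q3-report.xls"
after = resolve("/corp/finance/q3-report.xls")
```

A real GNS implementation adds caching, replication and failover on top of this indirection, but the one-to-one mapping Agarwal describes is the core of it.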
Vivekanand Venugopal, Director, Software Solutions, APAC,
Hitachi Data Systems, is upbeat about it: "It (GNS)
is an optimisation technique and will prove to be the
future of file management."
On the other hand, Sumit Mukhija,
Business Development Manager, Cisco Systems India &
Saarc, takes the concept with a pinch of salt. "It's
a great concept, but it's still a concept."
According to him, there is not much happening in the
market as far as GNS offerings are concerned.
GNS aims to change the way files are managed over a network.
Its other goals are to raise data utilisation, cut down on over-provisioning,
and resolve bottlenecks without necessarily having to add discs and arrays to
a SAN. GNS helps administrators support heterogeneous storage environments.
The concept also aims at optimising networked storage across different vendor
platforms and storage tiers.
Software to spot bottlenecks
||Dynamic Tracing (DTrace)
||Helps developers rapidly identify the root-cause
of system and application problems. The time taken to resolve system or
application performance bottlenecks can be cut from days to hours. Can be used safely on production
systems. DTrace's single view of the software stack simplifies the tracing
process, enabling developers to follow a thread as it crosses between kernel
space and user space and back.
||Windows Server 2003
||Windows System Resource Manager (WSRM)
||WSRM lets you set CPU and memory allocation policies
for applications. This includes selecting processes to be managed, and setting
resource usage targets or limits. It also allows the user to manage CPU
utilisation. It allocates resources through server consolidation to reduce
the ability of applications to interfere with each other. WSRM can be administered
using two different interfaces: the graphical user interface (GUI) and the
command-line interface (CLI). The GUI is provided by an administrative snap-in.
The CLI enables command-line scripting and supports advanced uses. Both
user interfaces provide access to the full functionality of WSRM.
||AIX Workload Manager (WLM)
||WLM delivers automated resource administration for
multiple applications running on a single server. This capability helps
to ensure that critical customer applications are not impacted by the resource
requirements of less-critical jobs in the system. The policy-based architecture
of WLM allows systems administrators to spend less time on routine workload
management tasks by automatically applying individually-tailored resource policies.
||Red Hat Enterprise Linux
||The monitoring module allows users to track the performance
of the Enterprise Linux systems. It delivers alerts regarding
system performance, allowing users to take action before problems arise. Users
can create custom probes for applications not included in the pre-built probe
set, and configure warning and critical thresholds for each probe. Administrators
receive e-mail or pager alerts when thresholds are reached.
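The two-level warning/critical threshold scheme described above can be sketched as a simple check; the metric names and threshold values here are invented for illustration, not taken from the Red Hat product:

```python
# Hypothetical probe thresholds: (warning, critical) pairs per metric.
THRESHOLDS = {"cpu_pct": (80, 95), "disk_pct": (70, 90)}

def evaluate(metric, value):
    """Classify a probe reading against its warning/critical thresholds."""
    warning, critical = THRESHOLDS[metric]
    if value >= critical:
        return "critical"
    if value >= warning:
        return "warning"
    return "ok"

evaluate("cpu_pct", 97)   # above the critical threshold
evaluate("disk_pct", 75)  # between warning and critical
```

A monitoring module would attach notification actions (e-mail, pager) to the "warning" and "critical" outcomes, which is what lets administrators act before a threshold breach becomes an outage.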
||HP-UX Workload Manager (WLM)
||With HP-UX WLM, users can define objectives with
a priority, which they can then assign to a WLM workload. HP-UX WLM provides
a passive mode that allows users to see approximately how it will respond
to a given configuration. Users can control Oracle database instances, adjusting
their CPU allocation based on desired transaction response time, the number
of users connected, and so on. WLM and its SAP Toolkit enable users to identify
different SAP processes or instances and place them into separate workloads.