Standardise, consolidate, automate
Seema Ambastha, Director, Sales Consulting, Database
Technologies, Oracle India, talks to Dominic K about the factors that
have an impact on database uptime, and the role grid computing can play in the data centre.
Can you highlight the criticality and specific set-up requirements
for data storage and retrieval in today's enterprises?
In today's world, data is one of the most valuable assets
of an organisation. Businesses depend on IT and ITES for day-to-day operational
and strategic decisions, which are made by churning through huge volumes of
operational data every minute.
Since information has become an asset, organisations need to maintain and manage
their information to facilitate decision-making and comply with various regulatory
requirements. Managing and maintaining that information calls for a robust and
reliable enterprise infrastructure, and this is where databases play a critical
role. A database forms the underlying foundation for multiple applications in
the data centre, be it core banking, ERP or CRM.
In addition to the hardware and network infrastructure, databases are the most
critical element of today's data centre set-ups. Data centres are moving from
an infrastructure-centric to an information-centric approach.
What are the factors that contribute to the smooth and
steady functioning of a database?
Businesses across verticals are increasingly information-centric and service-oriented.
Enterprises need the ability to run OLTP (online transaction processing) and
batch workloads together as an integrated class of data processing.
There are a few essential guidelines customers should consider for the steady
functioning of a database deployment. These include creating a standardised
foundation layer of hardware and operating system, with continuous availability
built in at all layers, and defining data management techniques with security,
i.e. authentication, access control, auditing and, finally, pro-active
monitoring.
Apart from these, planning and building capacity based on business demand is
essential, and this can be achieved by providing a scalable and reliable
underlying hardware architecture.
What would you say are the problems related to a heterogeneous
infrastructure which might hinder the efficient functioning of a database?
Today's IT deployments are based on a project-by-project
approach. This is one of the reasons for the creation of heterogeneous infrastructure,
which in turn results in complexities like point-to-point integration, islands
of data, lack of availability and fragmented security.
The ripple effect drives acquisition and management costs very high. These
issues are inter-linked, as they stem from the fundamental problem of fragmented
application frameworks and data, which makes it impossible to get a 360-degree
view of the business across all dimensions.
Due to the heterogeneous nature of the deployment, issues of inconsistency and
dependability crop up, which impact service levels and efficiency and prevent
IT from aligning with the organisation's business priorities.
So how can the problems with heterogeneous environments
be solved when it comes to the grid computing environment?
When it comes to commercial grid computing, this can be solved
with three key steps: standardise, consolidate and automate. For example, this
can be achieved through deployment of infrastructure with industry-standard
servers, standard operating systems, and storage components to virtualise the
layers of computing from storage, database servers and application servers.
This will provide a single, highly-available, scalable and robust architecture.
Such an approach should help resolve various issues, thereby enabling organisations
to increase service levels and lower the cost of the underlying technology.
What are the advantages that application clusters can provide?
How does Oracle RAC measure up on this front?
In the scale-up model of computing, once a server has been
fully configured with CPUs and memory, the next step would be mostly an expensive
fork-lift upgrade. To avoid this we released Oracle 9i with Real
Application Clusters (RAC) in June 2001.
Oracle 9i RAC removed key hardware architectural limitations,
making it possible for a collection of database servers to co-operate
in the management of a single Oracle database. It allowed the delivery of greater
scalability and availability than had previously been possible, while simultaneously
reducing cost and improving flexibility.
In January 2004 Oracle released the second generation of its clustered database
technology, Oracle RAC 10g. Oracle RAC can run packaged or custom applications
unchanged in a cluster of low-cost servers.
In the event of one server failing, Oracle RAC continues to run on the surviving
servers. When users need more processing power, Oracle RAC allows them to add
another server without taking the system offline. RAC is a separately priced
option, and can be installed on multiple servers without any ceiling on processor
count or resources.
How does Oracle provide performance, availability and scalability
of databases with RAC?
The underlying architecture which enables RAC is called Cache Fusion. This is
a shared cache architecture that provides applications with a single, cluster-wide
cache, exploiting rapidly emerging disk storage and interconnect technologies.
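The cache-to-cache transfer idea behind Cache Fusion can be sketched with a toy model. This is an illustration only, not Oracle's actual implementation; the node names, data and lookup policy are invented for the example. A node that needs a data block first checks its own buffer cache, then its peers' caches over the interconnect, and only falls back to a slow disk read on a full miss:

```python
# Toy sketch of the Cache Fusion idea: prefer a cache-to-cache
# transfer over the interconnect to a disk read.

DISK = {"block-7": "row data for block 7"}  # simulated shared storage

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # local buffer cache
        self.peers = []   # other cluster nodes

    def read_block(self, block_id):
        if block_id in self.cache:               # local cache hit
            return self.cache[block_id], "local cache"
        for peer in self.peers:                  # cache-to-cache transfer
            if block_id in peer.cache:           # over the interconnect
                self.cache[block_id] = peer.cache[block_id]
                return self.cache[block_id], f"interconnect from {peer.name}"
        self.cache[block_id] = DISK[block_id]    # slow path: disk I/O
        return self.cache[block_id], "disk"

a, b = Node("node-a"), Node("node-b")
a.peers, b.peers = [b], [a]

print(a.read_block("block-7")[1])  # "disk" - first read hits storage
print(b.read_block("block-7")[1])  # "interconnect from node-a"
```

Once any node has read a block, every other node can obtain it from a peer's cache rather than from storage, which is the property that lets the cluster behave like one large shared cache.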
RAC scales applications to multiple nodes without any modification of the application
or changes to data placement. Application users can log onto a single virtual
high-performance cluster server.
Nodes can be added to the database without manual intervention; no intervention
is required to partition data when processor nodes are added or when business
requirements change. Nodes added to the cluster are automatically utilised, and
cluster resources are dynamically re-balanced for optimal cluster utilisation.
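As a hypothetical sketch of that behaviour (the least-loaded routing policy and node names are illustrative assumptions, not Oracle's actual algorithm), a dispatcher that always sends new sessions to the least-loaded node will automatically put a newly added node to work:

```python
# Toy sketch: new sessions go to the least-loaded node, so a node
# added to the cluster is utilised without manual re-partitioning.

class Cluster:
    def __init__(self, nodes):
        self.load = {n: 0 for n in nodes}  # active sessions per node

    def add_node(self, name):
        self.load[name] = 0                # new node joins with zero load

    def connect(self):
        node = min(self.load, key=self.load.get)  # least-loaded node
        self.load[node] += 1
        return node

cluster = Cluster(["node-a", "node-b"])
for _ in range(6):
    cluster.connect()
print(cluster.load)            # {'node-a': 3, 'node-b': 3}

cluster.add_node("node-c")     # scale out: no data re-partitioning
for _ in range(3):
    cluster.connect()
print(cluster.load["node-c"])  # 3 - the new node absorbed the new work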
The RAC architecture provides customers with near-continuous access to data,
with minimal interruption from hardware and software component failures.
The system is resilient to multiple node failures, and component failures are
masked from end-users. Applications and users are automatically and transparently
reconnected to another system, and applications and queries continue uninterrupted.
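The reconnect behaviour can be sketched as follows. This is a simplified illustration, not Oracle's Transparent Application Failover implementation: the client tries each cluster node in turn, so a node failure is masked and the session simply lands on a surviving node.

```python
# Toy sketch of transparent client failover: walk an ordered list of
# cluster nodes and reconnect to the next surviving one on failure.

class NodeDown(Exception):
    pass

def connect(node, down):
    if node in down:               # simulate a failed server
        raise NodeDown(node)
    return f"session on {node}"

def failover_connect(nodes, down):
    for node in nodes:             # try each node in order
        try:
            return connect(node, down)
        except NodeDown:
            continue               # failure is masked from the end user
    raise RuntimeError("all cluster nodes are down")

# node-a has failed; the session lands on node-b instead
print(failover_connect(["node-a", "node-b"], down={"node-a"}))
```

The key point the sketch illustrates is that the caller never sees the `NodeDown` error: failover is handled inside the connection layer, which matches the article's claim that component failures are masked from end users.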
What are the benefits enterprises can expect after deploying RAC?
In terms of direct and indirect cost savings, Oracle RAC would offer a multitude
of benefits to enterprises. Key benefits would be the elimination of planned
and unplanned downtime of the critical data servers, thereby providing continuous
availability of the services. More computing resources can be added without
much change to the existing architecture design.
RAC helps consolidate many databases, reduces operating costs and provides
continuous availability to applications. The ability to run mixed workloads
like OLTP and batch on the same database cluster reduces the need for separate
database deployments in the enterprise architecture. The deployment helps
enterprises plan for and reduce the total cost of ownership through a
pay-as-you-grow model (i.e. pay only for current requirements).