Application security starts with the developer
Building secure applications is hard. Software can automate parts of it for you.
By Sachin Khanna, Sales Director, Alternate Channels, Compuware Asia Pacific Pte Ltd, Asia South Region.
Building secure applications requires a certain mindset that doesn't come naturally to most developers. First, we try to get our logic correct and our algorithms working, and build the application so that it does fundamentally what it's supposed to do. As we address the minor functionality issues and bugs, we also strive to streamline the code to improve performance and give the end user a better experience.
Last, if at all, we turn our attention to how secure that code actually is. And at that point, there are only a limited number of things we can do to improve it. This is where another mindset prevents us from identifying weaknesses in code. Developers want to see their applications do well, and unconsciously don't delve into the more difficult and obscure areas of the problem. Security, after all, doesn't get people anywhere near as excited as a cool user interface or useful features no one had thought of before. Let's concentrate on the visible parts of our applications, because that's where the rewards are.
And when we do turn to security, we typically practice the maxim of "security through obscurity": not addressing the problem, but rather hiding it in unexpected parts of the code or surrounding it with difficult-to-comprehend code. This may not be intentional, but often it serves as the best security we can offer.
This mindset is a prescription for continuing along the same path applications have always taken. It's an easy trap to fall into, especially because we simply can't believe anyone would take the time and effort to hack our application. All developers have busy lives, filled with development deadlines, outside activities, and career and skills development. It's difficult to imagine that someone will painstakingly sit and pick at an application for days or even weeks until he or she encounters a weakness, then write code specifically to exploit that weakness.
But there is no denying that there are very talented, dedicated and patient hackers who have ready access to the equipment and information they need to disrupt an application, whether in the expectation of theft or for no reason at all except to demonstrate their skills. Whether or not they target your application is simply a matter of time and luck. It will happen, sooner or later.
The only defence is to not make it easy for them. If it's easy, hackers can do real damage to the application and possibly the business in general. If it's not, it's likely they will move on to other applications, on other servers, that present better opportunities. At the very least, it may take them so long that their attempts at compromise are noticed by those responsible for monitoring applications in production.
No matter what you do, making an application completely safe from compromise is impossible. There is no application that is hack-proof. Even if you take reasonable precautions with the application, it must execute within the environment of an operating system, hardware, a network and, most important, end users. The most talented and determined hacker can compromise just about any commercial application.
Consider also that there is an inverse relationship between the security of an application and its usefulness for a particular purpose. The only fully secure application is one running on a computer in a locked vault, with no I/O. Such an application, however, isn't typically very useful. The easiest-to-use application is one that requires no authentication and provides the user with full access to everything within its scope, without requiring passwords or withholding any information. This application, while potentially very useful, is fully open to attack.
The application developer has to strike a balance, providing a reasonable set of access restrictions in an application while keeping it useful for its intended tasks. This balance varies depending on the likelihood of a security threat, the consequences of a breach, and the duties and sophistication of the user.
Setting out and working toward that balance is largely a function of application design. Building an application that reflects the specified level of security is more challenging. The primary problem is that code that is correct is not necessarily secure. Secure code requires extra effort, along with a detailed knowledge of Windows vulnerabilities and of how to code properly to protect against them. Most developers simply lack the experience to pinpoint areas of vulnerability, let alone fix them.
Ideally, these types of vulnerabilities, which are known but often obscure and difficult to find and repair, could be identified and addressed automatically. There is as yet no silver bullet for that task; however, it is possible to ease the process of writing code and of analysing running code for weaknesses.
There are several stages in the development process where security can be applied.
Writing code is the most obvious and important place, for a couple of reasons.
First, it is the least expensive stage of the application life cycle in which
to get security right. Second, during coding the developer is in the best position
to identify and address potential security flaws.
Writing code with security in mind is a significant challenge.
The problem is that many constructs have potential vulnerabilities and can be
used in ways that either expose those vulnerabilities or hide them. Perfectly
correct and generally accepted code can be compromised, given the right circumstances.
Developers may not even be aware of many of the subtle nuances and side effects
of the algorithms and constructs they use.
One possible difficulty is finding back doors allowed by the language and platform. For example, if you declare a FileIOPermissionAttribute like this:

    [FileIOPermission(SecurityAction.Deny, Read = @"c:\passwords")]

you will block read access to the passwords directory, but only if it's accessed in that specific way. An attacker can still reach the same directory through the UNC path (\\machine_name\passwords), or by another route, unless you also identify all possible routes and protect each of them.
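A minimal sketch of what covering the obvious routes might look like, assuming the only alternate route is the UNC share shown above; the class and method names are illustrative only, and in practice mapped drives and other aliases would need the same treatment:

    using System.Security.Permissions;

    public class ConfigurationReader
    {
        // Deny the local path and the UNC alias; any route not listed here
        // (a mapped drive letter, for example) is still reachable.
        [FileIOPermission(SecurityAction.Deny, Read = @"c:\passwords")]
        [FileIOPermission(SecurityAction.Deny, Read = @"\\machine_name\passwords")]
        public void ReadSettings()
        {
            // File access attempted here is checked against the denials above.
        }
    }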
These and many more detailed rules are difficult to remember, and it is just as difficult to know when they apply.
By default, signed assemblies demand FullTrust of any code that calls the public and protected methods of their public classes, so they can't be executed by code that hasn't specifically been granted that permission. By not signing your code, or by marking your assembly with the AllowPartiallyTrustedCallers attribute, you expose your code to potential use by partially trusted or untrusted code. This could result in your code being hijacked by code inserted into your process, or executed in a context you never intended.
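To show the trade-off, here is a minimal sketch of a strong-named assembly that deliberately opts in to partially trusted callers and then re-imposes an explicit demand on its sensitive entry point; the AccountService and TransferFunds names are purely illustrative:

    using System.Security;
    using System.Security.Permissions;

    // Opting in: partially trusted code may now call into this signed assembly.
    [assembly: AllowPartiallyTrustedCallers]

    public class AccountService
    {
        // Re-impose an explicit check on the entry point that must stay protected.
        [PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
        public void TransferFunds()
        {
            // Sensitive work here runs only if every caller holds FullTrust.
        }
    }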
Analysing security at runtime
Once the code is written and the application built, looking for security holes during
runtime is also a significant challenge. Here, the problems tend to be better
understood. The Microsoft .NET platform offers a sandbox approach to security,
so that issues with one application can be contained within its sandbox. That
doesn't solve the problem entirely, but it does mitigate it.
Larger problems arise when a .NET application calls native Windows code, through
a P/Invoke, COM callable wrapper, or other approach. The application's
flow of control leaves that sandbox, and is vulnerable to any weaknesses in
the code it executes outside of the .NET Framework.
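A minimal sketch of that crossing point, assuming a hypothetical legacy library named legacy.dll that exports a ProcessRecord function; the names are illustrative, not a real API:

    using System.Runtime.InteropServices;

    public class LegacyBridge
    {
        // Declaring the native entry point. Beyond this boundary, the CLR's
        // type and memory safety guarantees no longer apply.
        [DllImport("legacy.dll", CharSet = CharSet.Unicode)]
        private static extern int ProcessRecord(string record);

        public static int Process(string record)
        {
            // Any weakness inside legacy.dll (an unchecked length, a stale
            // pointer) is now reachable through this managed entry point.
            return ProcessRecord(record);
        }
    }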
These types of weaknesses tend to be better known, because they have been found
and exploited by hackers and virus writers over the years. But finding them
in code, especially code that was written years ago, is a tedious and error-prone
process. And developers writing new .NET applications often lack the time and
experience needed to find and fix these weaknesses.
Possibly the most infamous of these errors is the buffer overrun. In C and C++, there's no internal consistency check to ensure that enough memory has been allocated for the size of a value. A developer can write those checks into the code, but many neglect to. A user, operating from the command line or on a Web page, can intentionally overrun the memory to try to gain control of the application. If it's local memory, the overrun overwrites the stack. This often crashes the application, or at least the component, but occasionally it can allow malicious code to execute when control returns through the corrupted stack.
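To illustrate the kind of size check involved, here is a minimal sketch using the real Win32 GetWindowsDirectory call; the BufferExample class is illustrative. The native side simply trusts whatever length the caller claims, so the size passed in must match the buffer that was actually allocated:

    using System.Runtime.InteropServices;
    using System.Text;

    public class BufferExample
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
        private static extern uint GetWindowsDirectory(StringBuilder lpBuffer, uint uSize);

        public static string WindowsDirectory()
        {
            // Allocate MAX_PATH characters and report exactly that capacity.
            // Claiming a larger size than was allocated invites an overrun.
            var buffer = new StringBuilder(260);
            uint copied = GetWindowsDirectory(buffer, (uint)buffer.Capacity);
            return copied == 0 ? string.Empty : buffer.ToString();
        }
    }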
There are other weaknesses in native Windows code that operate in a similar
way, by hijacking and using runtime memory. These include overwriting an array,
using uninitialised memory and assigning a pointer out of range. Even a COM
interface leak can be a weakness, as it can enable hostile code to gain control
of the interface and pass bad data back into the calling .NET application.
Finding these weaknesses means going over every line of code, listing every
declared variable, determining its allocated memory and putting error-checking
code in places that are vulnerable. In addition to the time and expense involved,
the potential holes are difficult to find in an application that may do millions
of allocations and deallocations during its lifetime.
The process of identifying many memory-related security holes can be automated using automatic error detection tools. Simply running error detection can identify the potential for overrunning buffers, overwriting stack values and similar potential holes. A stack overrun error, however, occurs only in a running application, and you will find it only if such an error is actually injected into the application. This means that error detection should be combined with an active attempt at hacking the application to be completely effective.
Data in the clear
A third place where checking for weaknesses is vital involves passing data between
components in a distributed application. Many developers believe that data passed
within the application is secure because the user doesn't have access to it. However, a virus or hostile program can observe the application and determine either where its weaknesses lie or how best to compromise the data being used within it.
For example, a hostile program can observe critical data being passed in the
clear from one component to another. A clear-text password typed on a Web page can be captured, providing access to the user's account. Alternatively, application developers have been known to embed the database password directly in a script rather than protecting it elsewhere in the application. Some developers even use the
database system administrator (sa) account and password for casual Web users
to make database queries. Any hostile program that captures this password has
full access to the production database.
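A minimal sketch of the contrast described above; the server, database, class name and password shown are purely illustrative:

    using System.Data.SqlClient;

    public class CatalogQueries
    {
        // Risky: the all-powerful sa password travels with the application and,
        // if captured, grants full access to the production database.
        private const string RiskyConnection =
            "Server=dbserver;Database=Catalog;User ID=sa;Password=secret";

        // Better: a dedicated low-privilege account or integrated security, with
        // the connection string kept in protected configuration rather than code.
        private const string SaferConnection =
            "Server=dbserver;Database=Catalog;Integrated Security=SSPI";

        public static SqlConnection Open()
        {
            var connection = new SqlConnection(SaferConnection);
            connection.Open();
            return connection;
        }
    }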
Unfortunately, setting up database access and passing data across the tiers
of a distributed application are often among the many detailed decisions individual
developers make in implementing an application design. This level of implementation
isn't planned in advance. So unless individual developers are both trained in security concepts and vigilant in their implementation, it's very possible
that critical information in an application is vulnerable.
Identifying these vulnerabilities after the application is coded is a significant
challenge, because you have to be able to look inside the application to watch
how data moves about. Tools exist that can display data passed in the clear
between application components, to assist in identifying these and similar vulnerabilities.
The tip of the iceberg
There is an almost infinite number of code implementations that can result in security weaknesses. Unless you specialise in identifying and repairing security holes in applications, it is unlikely that any single developer can know about even a significant number of them. And that means your applications will almost certainly have security holes. Some of them will be serious. Without help, these holes
will be in the production application. Having security holes doesn't necessarily mean that your application will get hacked. For protection you can rely on the good luck of not drawing hackers' attention, or on your code simply being too complex for a hacker to bother with. But both of those defences will only
get you so far. Why make it easy for a hacker? Automating the identification
and diagnosis of security weaknesses will mean you need less luck, and can write
better code in the process.