Why Application Performance Is Important

Application performance management (APM) is the practice of optimizing network service response time. It also entails managing the consistency and quality of individual and overall network services.

To help you assess why application performance is important, this section describes the following topics:

• Managing the entire process

• Isolating performance problems

• Sharing information in a common format

• Establishing and monitoring service level agreements

Managing an End-to-End System

APM is often talked about in terms of an end-to-end system. End-to-end is the entire process, from one discrete end (for example, a client station) through the network and transmission infrastructure to the application server. It is the entire path required for the application conversation to exist. The following exemplifies the end-to-end process:

1. Initiation on the client station (such as when a user presses the Enter key)

2. Travel through the various network components

3. Arrival onto the application final destination (such as a server)

4. Follow-up interaction as required (such as a response from the server to the client)
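The four steps above can be sketched as a simple client-side timing wrapper. This is a minimal illustration only; the two callables are hypothetical stand-ins for whatever transport the real application uses.

```python
import time

def timed_transaction(send_request, receive_response):
    """Time one end-to-end transaction from the client's point of view.

    send_request / receive_response are hypothetical stand-ins for the
    application's real transport calls.
    """
    start = time.perf_counter()          # 1. user initiates the request
    send_request()                       # 2. request crosses the network
    response = receive_response()        # 3-4. server processes and replies
    elapsed = time.perf_counter() - start
    return response, elapsed             # elapsed is what the user perceives

# Stub callables standing in for a real client conversation:
_, elapsed = timed_transaction(lambda: None, lambda: "OK")
print(f"end-to-end response time: {elapsed:.6f} s")
```

Measuring at the client captures the full path, which is exactly the perspective the end user has.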

Business applications that support mission-critical activities (e-commerce, enterprise resource planning [ERP], supply-chain management, human resources, manufacturing, logistic systems, and so forth) not only consume huge amounts of bandwidth, they are often very sensitive to network delays.

Because these applications typically affect the operation and success of the entire business, it is essential that IT organizations be able to measure and manage end-to-end performance to keep their network environment running optimally.

User productivity and perception are key measures of the success of business computer applications. For better or for worse, transaction response times and application availability are the key indicators of user productivity and satisfaction.

Unlike traditional system and network management, performance management focuses on that gray area between network up and network down.

Most organizations know when their network is up or down, but they don't really know what is happening on it in terms of application performance from an end-user perspective, or whether a significant new application will perform acceptably once deployed over the network.

Isolating Performance Problems

Increased investment is not always required to significantly improve performance. Simply throwing money at a problem is not always the answer; for example, upgrading the CPUs on a server will improve performance only if the CPU is the bottleneck. Equally wasteful is installing a faster network interface, such as moving from 100 Mbps to 1 Gbps, without understanding the application's characteristics. In many cases, investing blindly can actually make the situation worse.

The confounding thing about performance management is its holistic nature. If you cannot easily determine where the delay is (server, client, or network), you cannot expect to find the root cause. Without the ability to isolate performance problems within the end-to-end path, troubleshooting and optimization efforts are just stabs in the dark, and might become quite costly.

Throwing network bandwidth at a response-time problem, for instance, might not necessarily improve performance. Although the overall application architecture can influence how well the application performs on a network and how efficiently the application uses network resources, the actual design and implementation details are the key variables. Some variables are within your control, and some are not. These variables include the following:

• Amount of traffic used in application conversations across the network

• Degree to which clients and servers are geographically separated

• Amount of network capacity available to the application

• Amount of delay on network paths

• Ability to prioritize mission-critical network traffic

• Overall availability of key network paths

• Efficiency of clients' and servers' network transport settings

In any case, application performance must be examined together with the network performance to control the quality of performance delivered to the end user.
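As a rough illustration of how several of the variables listed above interact, a commonly used first-order model adds the latency cost of the application's conversational turns to the transfer cost of its payload, plus fixed processing time at each end. All numbers below are invented for illustration.

```python
def estimated_response_time(app_turns, rtt_s, payload_bytes,
                            bandwidth_bps, server_s, client_s):
    """First-order estimate of transaction response time.

    Each argument maps to one of the variables above: chattiness
    (app_turns), geographic separation (rtt_s), traffic volume
    (payload_bytes), available capacity (bandwidth_bps), and
    processing time at each end. Illustrative only.
    """
    latency_cost = app_turns * rtt_s                     # turns x round trip
    transfer_cost = (payload_bytes * 8) / bandwidth_bps  # volume / capacity
    return latency_cost + transfer_cost + server_s + client_s

# A chatty application (200 turns) over an 80 ms WAN path:
rt = estimated_response_time(app_turns=200, rtt_s=0.08,
                             payload_bytes=500_000,
                             bandwidth_bps=10_000_000,
                             server_s=0.5, client_s=0.1)
print(f"estimated response time: {rt:.2f} s")  # 17.00 s
```

Note that in this example the latency cost (16 s) dwarfs the transfer cost (0.4 s), which is why adding bandwidth alone would barely help a chatty application.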

Performance management is a perpetual task. It requires continued monitoring of key indicators to ensure that service levels are maintained.

To achieve network application optimization, evaluate the application and its underlying network in detail, preferably before the application is rolled out, and wherever possible build this evaluation into a methodology applied in the early stages of application development.

Sharing Information in a Common Format

In typical corporations, individual IT departments are highly focused on network infrastructure, systems, databases, or applications. This level of focus, although necessary, leads to finger pointing when performance issues arise.

Successful performance management initiatives require a cooperative and generalized focus on the entire picture. You need to understand the interaction and dependencies relevant to the successful delivery of the application from an end-to-end perspective.

The best way to ensure that a more cooperative approach exists is to ensure that information pertinent to the application performance be provided in a common format, allowing all parties to understand the respective parts that make up the delivery model.

The common-format approach also avoids biasing the results toward any one group. Instead, it presents the application as a complete delivery system, allowing each area of responsibility to understand the impact of its own section while still appreciating the importance and functional requirements of the others.

Establishing and Monitoring Service Level Agreements

Performance management is important for establishing and monitoring service level agreements (SLAs). Service level management (SLM) is a compelling concept for business because, done right, it can supply proof points demonstrating that applications are meeting the business requirements. Not only is quantifiable feedback important for an internal department, it can be equally important for an enterprise that outsources its IT and needs to ensure that contractual requirements are being met.

The SLA can serve as a baseline for how an application is delivered. Alternatively, if the environment cannot meet the SLA, performance data can identify what is achievable with the existing system, effectively stating the best service level you can expect. This service definition can then feed a cost-justification process: if the SLA demands performance that can be achieved only through extensive upgrades to the delivery system, the business has the information it needs to make an accurate, cost-based decision about whether to upgrade.
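As a sketch of how SLA monitoring might quantify such a baseline, the fraction of transactions meeting a response-time target can be computed from measured samples. The sample values and the 2-second target below are hypothetical.

```python
def sla_compliance(response_times_s, target_s):
    """Fraction of measured transactions that met the SLA target."""
    if not response_times_s:
        return 1.0  # no traffic measured: trivially compliant
    met = sum(1 for t in response_times_s if t <= target_s)
    return met / len(response_times_s)

# A hypothetical day's order-entry response times against a 2 s SLA target:
samples = [1.2, 1.9, 2.4, 1.1, 3.0, 1.8, 1.7, 2.1]
print(f"SLA compliance: {sla_compliance(samples, 2.0):.1%}")
```

A figure like this, reported per business application, is exactly the kind of quantifiable proof point that makes SLM compelling.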

A Life Cycle Approach to Managing Networked Applications

Business and IT environments are in a constant state of flux. Businesses are constantly reinventing themselves in response to new market opportunities. Acquisitions and deregulation often force change upon organizations. IT departments must always be prepared to respond to these commercial changes while simultaneously getting up to speed on new technologies that can enhance business practices.

From business practices to applications architectures, a business that believes it has the ultimate answer tends to set things in concrete. For a changing business, strategic applications are always in a state of evolution. Iterative application development and deployment is a commonly held way of building and refining applications. IT architectural initiatives need to be in sync with business goals.

Performance management of applications means different things to different people, but it is this constant striving for application-delivery utopia that drives the application life cycle process.

The application life cycle is far more than simply writing code, testing, and deploying. It encompasses the constant quest for application delivery that enables users in network operations, engineering, planning, and application development to be far more effective in optimizing the performance and availability of their networks and applications.

Enterprise APM Needs

Think of the IT department as having two primary functions. First, IT is responsible for technology-driven initiatives that create new revenue opportunities, automate processes, and save money. That is IT's interesting side; the other is much more sedate. Second, IT is responsible for keeping the trains running on time; for example, making sure applications are available and deliver acceptable performance to business users.

To solve performance problems effectively or to forecast computing and networking resource requirements proactively, you must understand how applications consume system and network resources. Because of the magnitude of this problem, even on a single system, network managers attempt to divide the system resource-consumption aggregate into smaller, more manageable units often called workloads, applications, or subsystems.

To get the maximum value out of the characterization of application workloads, these workloads should map onto the application that the end user sees (for example, order entry, inventory control, receivables, customer information) and the task the application performs to support end-user business needs. These business transactions are the atomic units of end users' work, with the following characteristics:

• They are discrete. (Their beginning and end are bounded.)

• They can be mapped onto a business function, which in turn can be used as a forecasting element when applied to an organizational business situation.

The first characteristic is important because it determines where to start and end the monitoring of resource-consumption data surrounding a transaction. The second characteristic allows management processes such as SLM, capacity planning, and accounting and cost-recovery mechanisms to be applied based on business needs rather than on some arbitrary technology-related measure. This second characteristic thus enables the end user to relate resource consumption and the required capacity to key business factors.
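A minimal sketch of this kind of workload characterization, assuming per-transaction measurements tagged with a business-level workload name (all figures below are invented):

```python
from collections import defaultdict

# Hypothetical per-transaction samples: (workload, cpu_seconds, bytes_on_wire)
samples = [
    ("order-entry", 0.04, 12_000),
    ("order-entry", 0.05, 15_000),
    ("inventory",   0.20, 80_000),
    ("receivables", 0.10, 30_000),
]

def characterize(samples):
    """Roll per-transaction measurements up into per-workload totals,
    so resource consumption can be related to a business function
    rather than to an arbitrary technology-level measure."""
    totals = defaultdict(lambda: {"count": 0, "cpu_s": 0.0, "bytes": 0})
    for workload, cpu_s, nbytes in samples:
        t = totals[workload]
        t["count"] += 1
        t["cpu_s"] += cpu_s
        t["bytes"] += nbytes
    return dict(totals)

for name, t in characterize(samples).items():
    print(f"{name}: {t['count']} txns, {t['cpu_s']:.2f} CPU-s, {t['bytes']} bytes")
```

Because each total is keyed by a business function, figures like these can feed SLM, capacity planning, and cost-recovery processes directly.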

Establishing an APM Strategy

Having explored the real-world need to understand, track, and monitor application performance as an indicator to overall business success, you should start to establish a strategy that can be put in place to facilitate a meaningful APM operation.

The aim of any APM strategy should be to optimize application delivery so that it adds real benefit to the overall business model. To reiterate an earlier statement, to effectively troubleshoot or optimize application flow, you must be able to pinpoint the most probable cause of the delay. Simplistically, you can divide an application flow into three distinct areas, as follows:

• Client

• Network

• Server
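A simplistic sketch of that three-way breakdown, assuming server and network delay have been measured independently (for example, from server instrumentation and packet traces) and attributing the remainder to the client. The figures below are hypothetical.

```python
def attribute_delay(total_s, server_s, network_s):
    """Attribute a transaction's delay to server, network, or client.

    Assumes server time and network time were measured separately;
    whatever remains of the total is attributed to the client.
    """
    client_s = max(total_s - server_s - network_s, 0.0)
    parts = {"server": server_s, "network": network_s, "client": client_s}
    bottleneck = max(parts, key=parts.get)  # largest contributor
    return parts, bottleneck

# A 4-second transaction with 0.6 s of server time and 2.9 s on the wire:
parts, bottleneck = attribute_delay(total_s=4.0, server_s=0.6, network_s=2.9)
print(parts, "-> most probable cause:", bottleneck)
```

Even a breakdown this crude tells you which team to engage first, which is the point of isolating the problem before spending money on it.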

Areas of Responsibility

The overall application-delivery picture involves many different groups, each with its own roles, functions, and responsibilities within the delivery model (such as help desk, network engineers, network planning, and so on).

However, most IT organizations comprise the following basic functions:

• First-level help desk: This group is the first point of contact for end users reporting IT-related trouble tickets. It escalates difficult issues that it cannot resolve immediately to a specialized support group.

• Specialized support: This group has the in-depth skills needed to analyze and identify root causes of performance problems as they relate to applications, servers, network infrastructure, and workstations.

• System administration: This group installs, operates, and manages information systems equipment such as distributed servers, data centers, and workstations.

• System planning: This group plans the overall information systems infrastructure, including the planning and provisioning of servers to meet business needs.

• Network operations: This group manages the network operations center and is responsible for day-to-day monitoring and management of the network infrastructure.

• Network engineering: This group implements hardware and software changes to network devices and ensures all network devices are configured according to the organization's policies.

• Network planning: This group designs the network infrastructure.

• Network security: This group is responsible for preventing intrusion into the network as well as unauthorized access to network and system resources by internal users.

• Application-development teams: This group is responsible for the architecture, design, development, customization, and testing of applications.

• Application-deployment teams: This group is responsible for ensuring that applications are deployed successfully. These teams are staffed with skills from across the organization as appropriate for particular deployments.

Implementation Triggers

Through the various stages of an application's life on your systems, you will face different requirements and have to implement different processes relevant to APM. Compelling situations, such as the completion of function testing of a new application, generate specific requirements; these events are known as triggers. Triggers can be allocated to the following categories:

• Application performance troubleshooting/analysis

• Application assessments (networkability/application impact)

• Predictive performance analysis

These triggers include both reactive and proactive scenarios. Although they can fire at any time, different scenarios and issues tend to arise at specific points in an application's life cycle. The resulting processes range from short-term activities (such as daily recurring operations or emergency response) to long-range strategy (such as capacity planning).

Application Performance Troubleshooting/Analysis

The first trigger is application performance troubleshooting and analysis. It occurs as soon as an application performance problem is reported, or in situations where application performance is deemed detrimental to overall business performance.

This troubleshooting trigger is highly reactive; because it appears at the very beginning of the time scale, you can assume you will have little or no time to plan for its occurrence. This is most likely where you will find yourself initiating your APM strategy. In short, applying an APM strategy that covers the end-to-end delivery system to this reactive trigger gives you the ability to quickly determine the root cause of poor performance (server, network, or client), which ultimately expedites the resolution of performance issues.

Application Assessments

When analysts talk about application networkability, they are referring to evaluating whether an application will perform as required when deployed on the network. This applies equally to new application deployments (both in-house and off-the-shelf installations) and to enhancements to existing applications, including version upgrades. APM implemented correctly at this trigger minimizes performance surprises when the application is deployed. It also helps convey to the application developers the critical changes that need to be made prior to deployment, avoiding potential application rework in a post-deployment mode (or late-stage predeployment).

Understanding how an application will perform in isolation is obviously only half the story. The other half is coexistence: understanding how the implementation (or upgrade) of an application will impact existing applications. As previously mentioned, this trigger occurs at the same time as the networkability assessment and so is influenced by the same conditions. The outcome and benefits differ slightly, but are nonetheless important.

Understanding the application's characteristics in relation to the underlying transport infrastructure and other applications will assist in the following:

• Enabling you to implement cost-effective infrastructure upgrades and therefore prevent overengineering

• Determining whether the cost of supporting the application (for example, network and server resources) outweighs the benefit of the application to the business

• Ensuring that the network performance and existing applications are not adversely impacted
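The first and third points can be approximated with a back-of-the-envelope impact check before deployment. The 80 percent utilization threshold below is an assumed planning rule for illustration, not a standard, and all the figures are hypothetical.

```python
def link_impact(link_capacity_bps, current_util_fraction, new_app_bps,
                threshold=0.8):
    """Project a link's utilization after adding a new application's load,
    and report whether it stays under an assumed planning threshold."""
    projected = current_util_fraction + new_app_bps / link_capacity_bps
    return projected, projected <= threshold

# A 10 Mbps WAN link at 55% utilization, adding an app expected to use 2 Mbps:
projected, fits = link_impact(link_capacity_bps=10_000_000,
                              current_util_fraction=0.55,
                              new_app_bps=2_000_000)
print(f"projected utilization: {projected:.0%}, fits: {fits}")
```

If the check fails, the choice becomes explicit: upgrade the link, reschedule the traffic, or weigh whether the application's benefit justifies the added cost, which is the second bullet's question.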

Predictive Performance Analysis

The final long-term trigger centers on the area of capacity planning. All enterprises should consider regular planning to accommodate business growth, but it can also be triggered earlier in the life cycle through scenarios such as planning for network expansion due to merging organizations.

This APM strategy should cover all aspects of the end-to-end delivery model, and should incorporate contingency-type planning for failures. If implemented correctly, this strategy will bring the following benefits to an organization:

• Allowing for the provision of sufficient capacity with enough lead time

• Avoiding capacity-shortage crisis situations

• Preventing expensive overprovisioning
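A deliberately simple sketch of the lead-time calculation behind these benefits, assuming steady compound growth in utilization (both inputs below are hypothetical):

```python
def months_until_exhausted(current_util, monthly_growth, ceiling=1.0):
    """Months until utilization reaches the ceiling, assuming steady
    compound growth -- a deliberately crude capacity-planning model."""
    months = 0
    util = current_util
    while util < ceiling:
        util *= 1.0 + monthly_growth
        months += 1
    return months

# A link at 60% utilization, with traffic growing 5% per month:
print(months_until_exhausted(0.60, 0.05), "months of headroom")
```

Knowing the headroom in months is what lets you provision capacity with enough lead time, rather than reacting to a shortage crisis or overprovisioning out of caution.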
