Maintaining software is hard and therefore expensive, and IT departments are often underfunded. When teams operate in a "just do it" mode, non-functional requirements are easily forgotten. Neglecting these NFRs leads directly to the aforementioned maintenance problems and to increased technical debt.
NFRs are necessary to complete an IT application's journey. While a team might consider two or three important NFRs (such as performance and security), it will probably not cover the others extensively, or might miss them altogether. Even when time is allocated to deal with them, NFRs may be the first thing to get dropped when the schedule slips. So, whether you plan for NFRs or not, chances are high you won't cover them completely. One should try to avoid adding technical debt and maintenance nightmares to the application portfolio.
Performance is defined as the responsiveness of the application when performing specific tasks in a given span of time. It is measured in terms of throughput or latency: throughput is the number of events processed in a given span of time, while latency is the time it takes to respond to an event. An application's performance directly impacts software scalability, and enhancing an application's performance often enhances scalability by reducing contention for shared resources. The following is a list of design principles:
Separate long-running critical processes that might fail by using a separate physical cluster. For example, a web server provides superior processing capacity and memory but may not have robust storage that can be swapped rapidly in the event of a hardware failure.
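To make the throughput and latency definitions above concrete, here is a minimal measurement sketch; the class and method names are illustrative, not part of any standard API:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LatencyThroughputProbe {
    private final AtomicLong eventCount = new AtomicLong();

    // Latency: the time it takes to respond to a single event (milliseconds).
    public long timeTask(Runnable task) {
        long start = System.nanoTime();
        task.run();
        eventCount.incrementAndGet();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Throughput: the number of events processed in a given span of time
    // (here, events per second over the elapsed interval).
    public double throughputPerSecond(long elapsedMillis) {
        return eventCount.get() * 1000.0 / elapsedMillis;
    }
}
```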
The following is a list of the typical units of measurement:
Probability indicator:
Scalability is the capability of the application to handle an increase in workload without performance degradation, or the ability to enlarge quickly. It is the ability to extend the architecture to accommodate more users, processes, transactions, and additional nodes and services as the business system evolves to meet future needs. Existing systems are extended as far as possible without being replaced. Scalability directly affects the architecture as well as the selection of hardware and system software components. The following is a list of design principles:
The following is a list of typical units of measurement:
Probability indicator:
The following are the typical units of measurement:
Probability indicator:
Reliability is the characteristic of an application to continue functioning in an expected manner over time. It is scored as the probability that the application will not fail and will keep functioning for a defined time span, maintaining its performance over that span. Unreliable software fails frequently, and specific tasks are more prone to failure because they cannot be restarted or resumed. The following are the design principles:
The following are the typical units of measurement:
Probability indicator:
Maintainability is the quality of an application to undergo changes with a fair degree of ease. This attribute is the flexibility with which the application can be modified, whether to fix issues or to add new functionality, with changes potentially impacting components, services, functionality, and interfaces. Maintainability has a direct bearing on the time it takes to restore the application to normal status following a failure or an upgrade. A strong maintainability attribute enhances availability and reduces run-time defects. The following is a list of design principles:
For example, in a JSF application, new functionality can be added by creating a new Backing Bean class and configuring it in the faces-config.xml file. Even modifying the existing functionality becomes easy by changing the respective Backing Bean classes. The following are the typical units of measurement:
Probability indicator:
Extensibility is a characteristic wherein the architecture, design, and implementation actively cater to future business needs.
Extensible applications have excellent endurance, which prevents the expensive processes of procuring large inflexible applications and retiring them due to changing business needs. Extensibility enables organizations to take advantage of opportunities and respond to risks. While there is a significant difference between them, extensibility is often conflated with the modifiability quality. The following is a list of design principles:
New functionality can be added by creating a new BackingBean class and configuring it in the faces-config.xml file. Modifying the existing functionality becomes easy by changing the respective BackingBean classes, as the sketch below shows.
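As a hedged illustration of this principle (the class name, property, and navigation outcome are invented for the example):

```java
// A minimal JSF backing bean; adding new functionality means adding a
// class like this one, without touching existing beans.
public class RegistrationBean {
    private String email;

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // Action method bound from a page, e.g. #{registrationBean.register}
    public String register() {
        // New or changed behavior lives here, isolated in this bean.
        return "success"; // navigation outcome resolved via faces-config.xml
    }
}

/* Registered in faces-config.xml (JSF 1.x style), for example:
   <managed-bean>
     <managed-bean-name>registrationBean</managed-bean-name>
     <managed-bean-class>com.example.RegistrationBean</managed-bean-class>
     <managed-bean-scope>request</managed-bean-scope>
   </managed-bean>
*/
```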
The following are the typical units of measurement:
Probability indicator:
Security is the ability to avoid malicious events and incidents outside the designed system usage, and to prevent loss of information. Establishing proper security enhances the reliability of the system by reducing the likelihood that an attack succeeds and disrupts critical functions. Security protects assets and prevents unauthorized access to sensitive data. The factors affecting security are integrity and confidentiality, and the features used to secure applications are authentication, authorization, logging, auditing, and encryption. The following is a list of design principles:
The following are the typical units of measurement:
Probability indicator:
Session failover requires that data specific to user sessions be available across all cluster nodes, in a manner invisible to end users. Application servers achieve session failover through session replication: every time a session changes, it is replicated to the other machines. If a node fails, the load balancer routes incoming requests to another server in the cluster, since every machine in the cluster holds a copy of the session. Session replication thus enables session failover, but it costs memory and network bandwidth.
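A minimal sketch of session state that a Servlet container can replicate; the servlet and Cart class are illustrative. It assumes the web application is marked distributable (the standard `<distributable/>` element in web.xml), which asks the container to replicate sessions across the cluster:

```java
import java.io.Serializable;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CartServlet extends HttpServlet {

    // Session attributes must be Serializable so the container can
    // copy them to the other nodes in the cluster.
    public static class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int items;
        public void add() { items++; }
        public int size() { return items; }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        HttpSession session = req.getSession();
        Cart cart = (Cart) session.getAttribute("cart");
        if (cart == null) {
            cart = new Cart();
        }
        cart.add();
        // Re-setting the attribute signals the container that the session
        // changed, triggering replication to the other nodes.
        session.setAttribute("cart", cart);
    }
}
```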
Sharing of session data can also be implemented through application-level APIs, enabled by a persistence layer that allows session data to be taken over by other nodes when a failure occurs. Sessions stored in cookies are often cryptographically signed, but the data is not encrypted, so session data stored in cookies should not contain any sensitive information such as credit card numbers or other personal data. Session data can also be stored in a database or in a Memcached component.
The efficiency of session management comes at a cost: applications that leverage a session management mechanism must maintain each user's session state in memory, which greatly increases the memory footprint of the application. If the session management capability is not implemented, applications have a smaller footprint.
Probability indicator:
Java EE supports distributed transactions through two specifications: the Java Transaction API (JTA) and the Java Transaction Service (JTS).
JTA is an implementation-independent API that allows applications to handle transactions. JTS, on the other hand, specifies the implementation of a transaction manager for JTA and implements the Java mapping of the OMG Object Transaction Service (OTS) specification. JTS enables transactions using the Internet Inter-ORB Protocol (IIOP). The transaction manager supports both JTA and JTS; the EJB container itself leverages the Java Transaction API to interact with the JTS component. The transaction manager controls all EJB transactions, except for bean-managed transactions and Java Database Connectivity transactions, and allows beans to update multiple databases within a single transaction.
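A minimal sketch of JTA transaction demarcation: the JNDI name `java:comp/UserTransaction` is the standard location in a Java EE container, while the service class and DAO calls are illustrative placeholders:

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransferService {

    public void transfer(long fromId, long toId, double amount) throws Exception {
        UserTransaction utx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();
        try {
            debit(fromId, amount);   // e.g., an update against database A
            credit(toId, amount);    // e.g., an update against database B
            utx.commit();            // both updates succeed, or neither does
        } catch (Exception e) {
            utx.rollback();          // atomicity: undo any partial work
            throw e;
        }
    }

    private void debit(long id, double amount) { /* JDBC/JPA work */ }
    private void credit(long id, double amount) { /* JDBC/JPA work */ }
}
```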
Transaction properties
These properties are known by the ACID acronym. Each letter stands for a property:

- Atomicity: either all operations of the transaction complete, or none of them do
- Consistency: the transaction takes the system from one consistent state to another
- Isolation: the intermediate state of a transaction is not visible to other concurrent transactions
- Durability: once committed, the results of a transaction survive subsequent failures
Transaction types
The following are the types of transactions:

- Container-managed transactions: the EJB container demarcates transaction boundaries on behalf of the bean, according to its configured transaction attributes
- Bean-managed transactions: the bean demarcates transactions itself, typically through the JTA UserTransaction interface
Probability indicator:
Java Authentication and Authorization Service (JAAS) is leveraged to control authentication and authorization in the application. Authentication details and roles are stored in an LDAP server, and the UI framework is configured to connect to LDAP and control access according to the roles defined in the application. Access to each JSP page is controlled by the UI framework. Authentication is done via an HTTP form; the username is usually the e-mail address, and once the user submits the form with username and password, the application checks the details against the LDAP server.
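A minimal sketch of programmatic JAAS authentication, assuming a login configuration entry named "AppLogin" that points at an LDAP login module; the configuration name and class name are illustrative:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class LdapAuthenticator {

    public boolean authenticate(final String email, final char[] password) {
        try {
            // "AppLogin" must match an entry in the JAAS login configuration,
            // which would delegate the actual check to LDAP.
            LoginContext lc = new LoginContext("AppLogin", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName(email); // username is the e-mail address
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword(password);
                    }
                }
            });
            lc.login(); // throws LoginException if the LDAP check fails
            return true;
        } catch (LoginException e) {
            return false;
        }
    }
}
```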
Probability indicator:
Instrumentation is the ability to measure an IT system's level of performance, diagnose errors, and trace audit information. It takes the form of code instructions that monitor specific components in an IT application, for example, logging information to appear in the UI. When an application has instrumentation embedded, it can be managed using a management tool. Instrumentation is mandatory for reviewing the performance of an application.
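One common form of embedded instrumentation in Java is a JMX MBean; once registered, the metric becomes visible to management tools such as jconsole or VisualVM. The MBean name and counter below are illustrative:

```java
// RequestStatsMBean.java -- the management interface must be public and
// follow the FooMBean naming convention for a standard MBean.
public interface RequestStatsMBean {
    long getFailedRequests();
}
```

```java
// RequestStats.java -- the instrumented component (names are illustrative).
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RequestStats implements RequestStatsMBean {
    private volatile long failedRequests;

    public void recordFailure() { failedRequests++; }

    @Override
    public long getFailedRequests() { return failedRequests; }

    // Registers this component with the platform MBean server so a
    // management tool can read the counter at run time.
    public static RequestStats register() throws Exception {
        RequestStats stats = new RequestStats();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(stats, new ObjectName("com.example:type=RequestStats"));
        return stats;
    }
}
```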
Probability indicator:
Compliance means conforming to rules, principles, policies, standards, or laws. Regulatory compliance describes the goal organizations aspire to achieve in ensuring that they comply with laws and regulations. Due to the increase in regulated domains and the need for transparency, enterprises are adopting consolidated and harmonized sets of compliance controls. This approach ensures that all necessary governance policies are met without unnecessary duplication of effort. Data retention is one part of regulatory compliance that is a challenge for many organizations, and the security that comes from compliance with regulations can run contrary to maintaining user privacy: data retention regulations require service providers to retain extensive logs of activities well beyond the span of normal business needs.
Probability indicator:
Business Continuity (BC) reflects an organization's capability to maintain access to IT resources after unexpected system outages or natural calamities. A comprehensive BC plan and strategy defines an instantaneous failover to redundant resources hosted on-site, off-site, or in the cloud.
A Disaster Recovery (DR) strategy provides an organization with recovery of its applications and data within an accepted recovery time, which can stretch into hours or even days in the absence of a DR plan. The goal is to achieve the best recovery time objective (RTO) and to minimize data loss with a good recovery point objective (RPO).
Probability indicator:
Such a list can be drawn up and maintained by bringing together the heads of IT operations, IT architecture, and other stakeholders who have an interest in the project's outcome and the solution it will create. The organization must decide who these stakeholder groups are: operations/support, security, architecture, and business.
The following is a checklist of NFRs:
| NFR | Attributes |
| --- | --- |
| Performance | |
| Scalability | |
| Availability | |
| Capacity | |
| Security | |
| Maintainability | |
| Manageability | |
| Reliability | |
| Extensibility | |
| Recovery | |
| Interoperability | |
| Usability | |

Table 1: NFRs KPIs
Probability indicator:
Clustering consists of a group of loosely connected servers or nodes that work in tandem and are viewed as a single entity. The nodes of a cluster are connected to each other through fast networks, each node running its own instance of the operating system. Clustering provides superior performance and availability over a single node, while being more cost-effective than single computers of comparable speed. The benefits of clustering include fault tolerance, load balancing, and replication.
Server clustering is a technique of interconnecting a number of servers into one group that works as a unified solution, offering higher power, redundancy, and scalability. Server clustering ensures high availability and performance even when one of the servers in the cluster runs into trouble. The main objective of clustering is to achieve greater availability: a typical server runs on a single machine and keeps running only until a hardware or other failure occurs. By creating a cluster, one reduces the probability of the business service and its components becoming unavailable in the case of a hardware or software failure.
Here are the two types of clusters:
The following are the benefits of clustering:
These are the disadvantages of clustering:
Middle tier clustering
Middle tier clustering is configured in the middle tier of an IT application. This is a common topology because many applications have a middle tier through which heavy loads are serviced, requiring high scalability and availability; failure of the middle tier causes multiple systems and components to fail. Therefore, one of the approaches to clustering is at the middle tier of an IT application. In the Java world, it is a common mechanism to create clusters of EJB containers that are leveraged by many clients. Applications whose business logic needs to be shared across multiple clients can leverage middle tier clustering.
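A sketch of a client looking up business logic hosted in a clustered EJB container. The provider URL lists several cluster members so the naming layer can fail over; the factory class, URLs, and interface are illustrative and vendor-specific:

```java
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class OrderClient {

    public OrderService lookupService() throws Exception {
        Properties env = new Properties();
        // Vendor-specific naming factory; the URL lists two cluster nodes
        // so the lookup can fail over if one node is down.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "vendor.naming.InitialContextFactoryImpl");
        env.put(Context.PROVIDER_URL, "remote://node1:4447,remote://node2:4447");
        Context ctx = new InitialContext(env);
        // Portable global JNDI name (application, module, bean, interface).
        return (OrderService) ctx.lookup(
                "java:global/shop/OrderServiceBean!com.example.OrderService");
    }
}

interface OrderService { /* remote business interface (illustrative) */ }
```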
Probability indicator:
Distributed caching, compared to in-memory caching, is one of the common caching mechanisms for dealing with large and unpredictable data volumes. In an IT landscape with multiple application instances, consistency of the cache and reliability in the case of node failures are critical challenges. An in-memory cache resides in the memory of the application instance, so if a node goes down, its cached data is lost; another instance of the application then has to fetch the data all the way from the database to populate its own in-memory cache, which is inefficient overhead.
A distributed cache sits outside the application instance. Although several cache nodes may be part of the distributed cache solution (for high availability), an object is passed from the application to the cache through a single API. This preserves the state of the cache and results in data consistency. The cache engine determines which node to store or fetch an object from by leveraging a hashing algorithm. Also, since the distributed cache isn't linked to any one node, node failures don't necessarily degrade application performance.
Examples of distributed cache solutions are Memcached, Redis, and Hazelcast.
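A sketch of read-through caching against Memcached using the spymemcached client; the host names, key scheme, and loadFromDatabase call are illustrative. The client hashes each key to one of the cache nodes, as described above:

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class ProductCache {
    private final MemcachedClient cache;

    public ProductCache() throws Exception {
        // Two cache nodes; the client's hashing decides where each key lives.
        cache = new MemcachedClient(
                new InetSocketAddress("cache-node1", 11211),
                new InetSocketAddress("cache-node2", 11211));
    }

    public Object getProduct(String id) {
        Object product = cache.get("product:" + id);   // try the cache first
        if (product == null) {
            product = loadFromDatabase(id);            // fall back to the database
            cache.set("product:" + id, 3600, product); // cache for one hour
        }
        return product;
    }

    private Object loadFromDatabase(String id) { /* JDBC/JPA work */ return id; }
}
```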
Probability indicator:
Capacity planning is a technique for estimating the space, hardware, software, and network infrastructure resources that an enterprise will need over a future time span. A capacity concern for many enterprises is whether resources will be in place to handle the increasing workload as the number of users grows. The planner's aim is to add new capacity just in time to meet future needs, while ensuring that resources do not go unused for long periods. The successful planner is the one who makes the trade-offs between the present and the future and proves to be the most cost-efficient.
The planner leverages business plans, forecasts, and trends to outline the future needs of an organization. Analytical modeling tools are also leveraged to evaluate different scenarios and options. The planner favors tools that are scalable, stable, and predictable in terms of upgrades over the life of the product.
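As an illustration with assumed numbers: if peak demand is forecast at 2,000 requests per second and a single node sustains 400 requests per second at a target utilization of 70%, the planner needs 2,000 / (400 × 0.7) ≈ 7.2, that is, 8 nodes, with the timing of the purchase driven by the growth forecast.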
Probability indicator:
The following table lists various tools that are leveraged by SMEs for troubleshooting performance issues in Java applications:
| Tool | Description |
| --- | --- |
| jconsole | jconsole comes with JDK 1.5. It is the Java Monitoring and Management Console, a JMX-compliant tool for monitoring a Java VM. It can monitor both local and remote VMs. |
| VisualVM | VisualVM integrates with various existing JDK tools and provides lightweight CPU and memory profiling capabilities. It is designed for both production and development environments and enhances the capability of monitoring and analysis for solutions. |
| HeapAnalyzer | HeapAnalyzer finds possible heap leak areas through its heuristic engine and analysis of heap dumps in applications. It parses the heap dump, creates graphs, transforms them into trees, and executes its heuristic search engine. |
| PerfAnal | PerfAnal is a GUI tool for analyzing the performance of applications on the Java platform. One can leverage PerfAnal to identify performance problems in code and locate areas that need tuning. |
| JAMon | JAMon is a free, simple, high-performance, thread-safe Java API that allows developers to easily monitor production applications. |
| Eclipse Memory Analyzer | Eclipse Memory Analyzer is a feature-rich Java heap analyzer that helps find memory leaks and reduce memory consumption. |
| GCViewer | GCViewer is an open source tool used to visualize data produced by the VM options -verbose:gc and -Xloggc:<file>. |
| HPjmeter | HPjmeter diagnoses performance problems in applications on HP-UX. It monitors Java applications, analyzes profiling data, captures profiling data, and helps improve garbage collection. |
| HPjconfig | HPjconfig is a configuration tool for tuning HP-UX system kernel parameters to match the characteristics of an application. It provides kernel parameter recommendations tailored to the HP-UX platform. Given specific Java and HP-UX versions, HPjconfig determines whether all of the latest HP-UX patches required for Java performance and functionality are installed on the system, and highlights any missing patches. |

Table 2: Performance Tools
Probability indicator:
A load balancing mechanism is leveraged to distribute user load across multiple nodes. The most common load balancing algorithm is Round Robin, in which requests are handed out in a circular order, ensuring that all nodes get an equal number of requests and no single node is over- or under-utilized; a minimal sketch follows this paragraph. The failover mechanism switches to another node when a node fails or goes offline, and a load balancer is typically configured to fail over to another node when the primary node fails. To minimize downtime, most load balancers support a heartbeat check that verifies the target machine is up and responding; as soon as a heartbeat signal fails, the load balancer stops sending requests to that node and redirects them to the other nodes of the cluster.
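A minimal sketch of the Round Robin selection just described; the node list of host:port strings is an illustrative placeholder for cluster members:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> nodes;           // e.g., "node1:8080", "node2:8080"
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> nodes) {
        this.nodes = nodes;
    }

    // Requests are handed out in a circular order, so every node
    // receives an equal share of the load.
    public String nextNode() {
        int i = Math.floorMod(next.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }
}
```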
The most common load balancing techniques for applications are:
The benefits of load balancing:
| Benefit | Description |
| --- | --- |
| Redundancy | Redundancy is a mechanism of running two or more identical servers, providing fail-safe behavior in the event of one server going offline. With load balancing, the system "senses" when a server becomes unavailable and instantly reroutes traffic to other nodes in the cluster. A load-balanced architecture always includes a secondary load balancing device, guaranteeing full redundancy across every component of the network. |
| Scalability | Even if there is currently only a modest resource requirement, scalability should always be considered in the solution. In a few months, there may be a requirement to add more horsepower to the cluster. The smart thing to do is to configure load balancing, which not only gets the most out of the current hardware but also lets the application scale as needed. |
| Resource optimization | With a load balancer in place, one can optimize the traffic distribution to the server cluster to ensure the best performance. Load balancers also speed up the application by taking over time-consuming tasks such as SSL encryption and decryption. |
| Security | The application's security improves many-fold. Load balancing exposes only one IP address to the external world, that is, the Web, which greatly lowers the number of breach points for any attack. The topology of the IT network is hidden, which improves the safety of the entire setup. The servers in the cluster receive virtual IPs, which enables the load balancer to route traffic as needed without exposing their addresses to hackers. |

Table 3: Benefits of Load Balancing
Load balancing tools
There are both hardware and software load balancing solutions. Hardware load balancers are usually located in front of the web tier, and sometimes between the web and application tiers. Most software clustering solutions include load balancing software that is leveraged by upstream software components. A reverse proxy, such as Squid, can be used to distribute the load across multiple servers. Open source load balancing software solutions include Perlbal, Pen, and Pound; hardware-based solutions include BigIP and ServerIron, among others.
Probability indicator:
IP address affinity is a common mechanism for load balancing in which the client IP address is associated with a server node: all requests from a given client IP address are serviced by one server node. This technique is very simple to deploy, since the client address is available with every request and no additional configuration is required.
This type of load balancing can be useful if clients are likely to disable cookies. There is a flipside to this approach, however: if users sit behind a network address translation (NAT) device, all of them present the same IP address and end up routed to the same server node, which may lead to uneven load on the servers. NATed IP addresses are very common; in fact, if one is browsing from an office network, it is likely that a NATed IP address is being used.
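A minimal sketch of IP address affinity: hashing the client address to pick a node, so the same client is always routed to the same server. The node array is an illustrative placeholder:

```java
public class IpAffinityBalancer {
    private final String[] nodes;

    public IpAffinityBalancer(String... nodes) {
        this.nodes = nodes;
    }

    // All requests from the same client IP hash to the same node. Note
    // that every client behind one NATed address lands on the same node,
    // which is exactly the uneven-load flipside described above.
    public String nodeFor(String clientIp) {
        return nodes[Math.floorMod(clientIp.hashCode(), nodes.length)];
    }
}
```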
Probability indicator: