Performance testing is a specific type of testing that determines a system's responsiveness under a particular workload in a particular context.
It also allows us to verify and validate non-functional attributes such as scalability, reliability, availability, and resource usage.

  • Load Testing – Load tests are performed to evaluate the system's behavior under an expected number of concurrent users who perform a specific number of transactions during a predefined time. These tests make it possible to measure transaction response times, as well as to evaluate whether system resources limit the system's performance.
  • Stress Testing – Stress tests are performed to determine the system's behavior when faced with extreme loads. They are performed by increasing the number of concurrent users and the number of transactions they execute, exceeding the expected load. They give insight into how the system performs if the actual load exceeds the expected load, and determine which components and/or resources fail first and limit the system's performance.
  • Spike Testing – These tests are very similar to stress tests, but are performed over short periods of time, simulating significant load changes at a given moment. They let us observe the system's behavior when there is a spike in its use, and evaluate whether the system is able to return to a stable state afterwards. In JMeter, spike testing can be performed using the Synchronizing Timer: this timer blocks threads until a particular number of them are held, then releases them all at once, creating a large instantaneous load (see the sketch after this list).
  • Endurance Testing – Endurance tests allow us to determine whether the system can support an expected load continuously over a period of time that corresponds to the system's context of use. They help evaluate the performance of the different resources, for example, whether there are memory leaks, or degradation due to poor management of database connections, among others.
  • Scalability Testing – Scalability tests are performed to evaluate the system's ability to grow. Typically, the number of concurrent users, the number of transactions they can perform, the volume of information in the database, and other non-functional aspects of the system are projected into the future.
  • Volume Testing – In volume testing, a large amount of data is populated in the database and the overall system's behavior is monitored.
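
A minimal Python sketch of the idea behind JMeter's Synchronizing Timer (an illustration of the concept, not JMeter itself): threads are held at a barrier and released together to produce an instantaneous spike. The request function is a placeholder.

    import threading

    N_USERS = 50
    barrier = threading.Barrier(N_USERS)

    def send_request():
        pass  # placeholder for the real HTTP call against the system under test

    def user():
        barrier.wait()   # block until all N_USERS threads have arrived...
        send_request()   # ...then every thread fires at the same instant

    threads = [threading.Thread(target=user) for _ in range(N_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()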
  • HyperText Transfer Protocol (HTTP) is the protocol used to transfer hypertext over the Web.
  • The data (i.e. hypertext) exchanged using HTTP is not secure: hypertext sent over HTTP travels as plain text, so anyone between the browser and the server who intercepts the exchange can read it relatively easily.
  • But why do we need this security over the Web? Think of online shopping at Amazon or Flipkart. You might have noticed that as soon as you click Check-out on these shopping portals, the address bar changes to https. This is done so that the subsequent data transfer (i.e. the financial transaction, etc.) is made secure.
  • That is why HTTPS was introduced: a secure session is set up first between server and browser. In fact, cryptographic protocols such as SSL and/or TLS turn HTTP into HTTPS, i.e. HTTPS = HTTP + cryptographic protocols. To achieve this security, HTTPS uses Public Key Infrastructure (PKI): the public key can be used by any web browser, while the private key is held only by the web server of that particular website. The distribution of these public keys is done via certificates, which are maintained by the browser. You can check these certificates in your browser settings. We'll detail the secure-session setup procedure in another post.
  • Another syntactic difference between HTTP and HTTPS is that HTTP uses default port 80, while HTTPS uses default port 443. It should be noted that this security comes at the cost of processing time, because the web server and web browser need to exchange encryption keys using certificates before any actual data can be transferred. In other words, a secure session is set up before the actual hypertext exchange between server and browser.
  • HTTP works at the application layer, while HTTPS adds its encryption at the transport layer (via SSL/TLS).
  • In HTTP, encryption is absent; in HTTPS, it is present, as discussed above. HTTP does not require any certificates, whereas HTTPS needs an SSL certificate.
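
A minimal sketch of that port difference, using Python's standard http.client module; example.com is just a placeholder host:

    import http.client

    # Plain HTTP: default port 80, hypertext travels as readable plain text.
    plain = http.client.HTTPConnection("example.com", 80)
    plain.request("GET", "/")
    print(plain.getresponse().status)

    # HTTPS: default port 443; a TLS handshake (certificate exchange) happens
    # before any hypertext is transferred, which costs extra processing time.
    secure = http.client.HTTPSConnection("example.com", 443)
    secure.request("GET", "/")
    print(secure.getresponse().status)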
  • Test Planning – Define test scenarios, workload modelling
    • Test Data
    • Test infrastructure
    • Acceptance Criteria
  • Test Scenarios Specification
    • Test Scenarios
    • Test Scripts
  • Test Automation – develop the scripts
  • Test Environment Setup
  • Test Execution
    • Baseline
    • Scenario Execution
    • System Monitoring
  • Results Analysis
    • Re-execution of the tests
    • Test Report
  • Extended end-user response times
  • Extended server response times
  • High CPU usage
  • Invalid data returned
  • HTTP errors (4xx, 5xx)
  • Lots of open connections
  • Lengthy queues of requests
  • Memory leaks
  • Extensive database table scans
  • Database deadlocks
  • Pages unavailable

Network architecture refers to how computers are organized in a system and how tasks are allocated between these computers. Two of the most widely used types of network architecture are peer-to-peer and client/server.

The configuration, or topology, of a network is key to determining its performance. Network topology is the way a network is arranged, including the physical or logical description of how links and nodes are set up to relate to each other.

The term topology was introduced by Johann Benedict Listing in the 19th century.

Application architecture is formed by several components, and there can be dozens of bad-performance symptoms in each component. A good performance tester must know the performance symptoms of each tier to diagnose bottlenecks effectively.
Below is a detailed list of symptoms for each component of a 3-tier web application.

  1. Network Performance Bottlenecks
  2. Web Server Performance Bottlenecks
  3. Application Server Performance Bottlenecks
  4. Database Server Performance Bottlenecks
  5. Client Side Performance Bottlenecks
  6. Third Party Services Performance Issues

Network bottlenecks contribute relatively little to overall performance problems, but they are still important enough to discuss in detail: you cannot afford even minor network issues, because they can lead to disasters. The following are the major network performance symptoms in the context of 3-tier web applications:

  • Ineffective load balancing
  • Insufficient or poorly configured network interface cards
  • Very tight security
  • Inadequate overall bandwidth
  • Poor network architecture

Like network bottlenecks, web server bottlenecks are not a major contributor to performance issues. Web servers act as a liaison between the client and the processing servers (application and database), so web server bottlenecks still need to be addressed properly, since they can affect the performance of other components to a great extent.
Below is a list of bottlenecks that can affect web server performance:

  • Broken links
  • Inadequate transaction design
  • Very tight security
  • Inadequate hardware capacity
  • A high volume of SSL transactions
  • Poorly configured servers
  • Ineffective load balancing across servers
  • Poor utilization of OS resources
  • Insufficient throughput

The business logic of an application resides on the application server. Application server hardware, software, and application design can affect performance to a great extent, and poor application server performance can be a critical source of bottlenecks.

Below is a list of causes of poor application server performance:

  • Memory leaks
  • Inefficient garbage collection
  • Poorly configured DB connections
  • Inefficient code transactions
  • Sub-optimal session model
  • Poorly configured application server
  • Insufficient or inefficient hardware resources
  • Inefficient object access model
  • Inefficient security model
  • Poor utilization of OS resources

Object caching, SQL handling, and database connection pooling are the main causes of application server bottlenecks, accounting for roughly 60% of application server issues; an inefficient application server configuration itself causes poor performance about 20% of the time.
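
A minimal sketch (not a production implementation) of database connection pooling, using Python's built-in sqlite3 purely for illustration: a fixed set of connections is opened once and reused, instead of a new connection being opened per request.

    import sqlite3
    from queue import Queue

    class ConnectionPool:
        def __init__(self, database, size=5):
            self._pool = Queue(maxsize=size)
            for _ in range(size):
                # Open a fixed number of connections up front and reuse them.
                self._pool.put(sqlite3.connect(database, check_same_thread=False))

        def acquire(self):
            return self._pool.get()   # blocks when the pool is exhausted

        def release(self, conn):
            self._pool.put(conn)

    pool = ConnectionPool(":memory:", size=2)
    conn = pool.acquire()
    conn.execute("SELECT 1")
    pool.release(conn)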

Database performance is most critical to application performance, as the database is often the main culprit behind performance bottlenecks. Database software, hardware, and design can really impact whole-system performance.

Following is a list of common causes of poor database performance:

  • Inefficient/ineffective SQL statements
  • Small/insufficient query plan cache
  • Inefficient/ineffective SQL query model
  • Inefficient/ineffective DB configurations
  • Small/insufficient data cache
  • Excess DB connections
  • Processing too many rows at a time
  • Missing/ineffective indexing (see the sketch after this list)
  • Inefficient/ineffective concurrency model
  • Outdated statistics
  • Deadlocks
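
A minimal sketch of diagnosing the indexing item above, using Python's built-in sqlite3 for illustration: EXPLAIN QUERY PLAN shows whether a query scans the whole table or uses an index.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
    query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"

    # Without an index, the plan is a full table scan.
    print(conn.execute(query).fetchall())   # ... SCAN orders

    # After adding an index, the plan becomes an index search.
    conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
    print(conn.execute(query).fetchall())   # ... SEARCH orders USING INDEX idx_customer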

1. Measurable scenarios
2. Most frequently accessed scenarios
3. Business-critical scenarios
4. Resource-intensive scenarios
5. Technology-specific scenarios
6. Stakeholder-concerned scenarios
7. Time-dependent, frequently used scenarios
8. Contractually obligated scenarios

  • Actual users can experience a major slowdown in application response
  • Real users may not be able to complete their business transactions due to slow response times
  • The application can remain slow even after test completion, due to the data generated during performance test execution
  • Real users can start experiencing application errors, and the application can even stop responding
  • It is difficult to identify the root cause of performance bottlenecks when real users are present alongside the simulated user load
  • Real users need to stop working on the application to get accurate test results, but that makes the application unavailable during this time, which might not be possible for business-critical applications
  • Third-party Content Delivery Network (CDN) performance is not tested
  • Firewall effects on application performance are not tested
  • Application load balancing is not tested in the test environment
  • The application's internet connection performance is not tested
  • DNS lookup time is not tested in the test lab
  1. Server Infrastructure – Replicating the number of physical servers at each application tier in the test environment is a real challenge.
  2. Network Infrastructure – Preparing the servers' network infrastructure is possible with some effort, but deploying test servers in the same locations as the production servers is hard to do.
  3. Number of Application Tiers – The test environment should have exactly the same number of application tiers as the production environment to achieve accurate results, which is also a challenge.
  4. Database Size – A database of a different size will not generate accurate test results.
  5. Load Injection from Different Geographical Locations – Simulating users' locations is also important to achieve proper test results.
  6. IP Spoofing Implementation – In some cases, load balancing is implemented based on IP addresses: users are distributed across servers by IP address, so if all incoming requests come from the same IP address, they will be directed to a single server and load balancing will never be observed.
  1. Complete Knowledge of AUT Production and Test Environments – The details of the AUT production environment should be completely documented and understood in the initial stage of performance testing. The performance testing engineer must know the AUT architecture details and ensure that the same architecture is implemented in the test environment.
  2. Test Environment Isolation – It is highly recommended that no other activity be carried out on the performance test environment during test execution. Performance test results can vary greatly, and it is always difficult to analyze and reproduce performance bottlenecks in a test environment where other users are also interacting with the system.
  3. Network Isolation – In order to provide maximum network bandwidth to your test environment, one solution is to isolate your test network from other users.
  4. Load Injector Requirements – Load injector machines should have sufficient hardware resources to support the running users. The amount of load that can be generated from one load injector depends on various factors, such as machine resources (RAM, CPU, disk), network bandwidth, script complexity, and think time.
  5. Test Data Generators – The number of database records always has a great impact on performance test results: reading a record from a table of 1,000 rows will be much faster than reading it from 10,000 records.
  6. Proxy Server Removal from the Network Path – Having a proxy server between the client and the web server can distort performance results: the proxy may serve the client from its cache instead of sending requests to the web server, which results in a lower AUT response time than the actual one. The issue can be resolved either by moving the web server into an isolated environment or by hitting the web server directly, by adding the server's IP address to the HOSTS file (see the example after this list).
  7. Complete Server Access – Complete access to the servers during the test helps in identifying all server resources and the root causes of bottlenecks.
  8. Simulate Clients Closer to the Web Server – Latency can be one of the major factors in application response time: users closer to the server will see lower response times than users farther away. Moreover, fewer network issues occur when simulating requests from nearby clients.
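
A hypothetical HOSTS file entry (both the IP address and the hostname are placeholders) that points the AUT's domain directly at the web server's IP, bypassing the proxy:

    203.0.113.10    www.aut-under-test.example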

Advantages:

  1. No need to reproduce the production site data set
  2. It helps in validating performance test results obtained on the test environment
  3. It reduces test infrastructure cost and time
  4. Application recovery process and its complexities are well known

Disadvantages:

  1. Real application users will experience a slower application and errors
  2. It is difficult to identify the root cause of a bottleneck in the presence of real application users
  3. Real users' access might have to be blocked to properly obtain the performance test results
  4. If a lot of data is generated in the production database during the test, the database may remain very slow even afterwards

Advantages

  1. Cost-effective, and results can be mapped to production using extrapolation techniques
  2. Easy to set up, as it requires less infrastructure
  3. Easy to identify application bottlenecks and tune them in the scaled environment

Disadvantages

  1. It is difficult to find performance issues that only appear beyond the scaled environment's capacity
  2. The application's tolerance and capacity are reduced in a scaled environment, so more performance issues are revealed in production

Advantages

  • Cloud testing provides the flexibility of deploying the production system on a discrete environment to conveniently test the application
  • It’s extremely simple to fix the defects and quickly configure the changes
  • It reduces the test cost due to its convenient rental models
  • It provides greater test control to simulate required user load and to identify and simulate the bottlenecks

Disadvantages

  • Security and privacy of data are the biggest concerns in cloud computing
  • Cloud computing works online and depends completely on network connection speed
  • Complete dependency on the cloud service provider for quality of service
  • Although cloud hosting is much cheaper in the long run, its initial cost is usually higher than that of traditional technologies
  • Upgrading is difficult without losing data, although this is a short-term issue given that the technology is still emerging

Service virtualization is a technique used to simulate the behavior of specific AUT components that are not accessible, so that the application can be tested completely. With service virtualization, the complete AUT is not emulated; rather, only the specific required components are emulated to fulfill the requirements.
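
A minimal sketch of the idea, assuming Python's built-in http.server: a stand-in for an inaccessible third-party component that returns a canned response. The endpoint, port, and payload are all hypothetical.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VirtualizedService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve a canned response in place of the real dependency.
            body = json.dumps({"status": "ok", "quote": 101.25}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # During the test, the AUT's dependency URL is pointed at this stub.
        HTTPServer(("localhost", 8080), VirtualizedService).serve_forever()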

Advantages

  • It helps in emulating realistic performance for dependent applications
  • It provides access to constrained AUT components at convenient times
  • It helps in testing application performance with different parameter settings
  • It helps in simulating extreme loads on third-party components without much additional cost
  • It allows testing the performance of AUT components that are not yet completely developed

Disadvantages

  • The performance of virtualized components can vary greatly from that of live AUT components, which yields incorrect performance results
  • There is no guarantee that an AUT that performs as required in a virtualized environment will perform similarly in production
  • You can't virtualize all complex and secure systems
  • You can't emulate production data in a virtualized environment, which can greatly affect the measured performance of the AUT
  • Budget, types of license, vendor support and online forums, protocol support, scripting languages, protocol analyser
  • Record-and-playback options, data parameterization, checkpoints, transactions, actions, iterations, built-in functions, custom functions for reusability, script comparison utility
  • Bandwidth simulation, browser support/compatibility, log levels, realistic workload models, scheduling, IP spoofing
  • Intuitive graphs and charts for identifying bottlenecks; result generation in different formats such as *.html, *.csv, *.xls, *.xlsx, *.pdf; diagnostics
  • Resource monitoring, batch execution
  • Jumping directly to multi-user tests
  • Test results not validated
  • Unknown workload details
  • Run duration too short
  • No long-duration sustainability test
  • Confusion over the definition of concurrent users
  • Data not populated sufficiently
  • Significant differences between test and production environments
  • Network bandwidth not simulated
  • Underestimating performance testing schedules
  • Incorrect extrapolation of pilot tests
  • Inappropriate baselining of configurations

Benchmark Testing – The method of comparing your system's performance against an industry standard set by other organizations.
Baseline Testing – The procedure of running a set of tests to capture performance information. When a future change is made to the application, this information is used as a reference point.

Profiling tools are used on running code to identify which processes are taking the most time.

Profiling tools are usually only brought in once it has been identified that the system has a performance problem: if the server is taking a long time to respond and you can't identify any resource problem (e.g. lack of memory or poorly configured garbage collection), profiling might be your next strategy.
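
A minimal sketch with Python's built-in cProfile module; slow_sum is a made-up stand-in for the code under investigation:

    import cProfile

    def slow_sum(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    # Prints per-function call counts and cumulative times, showing where
    # the running code spends most of its time.
    cProfile.run("slow_sum(1_000_000)")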

1. Client Side – A performance testing tool gathers all the client-side metrics and displays them either during the test or at its end. Client-side metrics are the starting point for investigating bottlenecks.
2. Server Side – An application monitoring tool needs to be set up to collect the server-side stats. Following the clues from the client-side graphs and correlating them with the server-side metrics will pinpoint the hidden bottleneck.

Server-side stats include GC analysis and heap dump analysis, along with basic CPU and memory usage graphs.
3. Network Side – This part covers the performance metrics of the network. Such metrics help identify the network obstacles that increase latency; low bandwidth is one of the most frequent.

Performance Testing vs. Performance Engineering:

  • Performance Testing verifies how a system will perform under production load and anticipates issues that might arise during heavy load conditions; performance engineering aims to design the application with performance metrics in mind and to discover potential issues early in the development cycle.
  • Performance Testing is a distinct QA process that occurs once a round of development is completed; performance engineering is an ongoing process that spans all phases of the development cycle, from design through development to QA.
  • Performance Testing is conducted by a dedicated performance tester or team with sound knowledge of performance testing concepts, tool operation, result analysis, etc.; a performance engineer has sound knowledge of application design, architecture, development, tuning, performance optimization, and bottleneck root-cause investigation and fixing.
  • When a bottleneck is identified during performance testing, the performance tester's role is to analyse the test result and raise a defect; the performance engineer's role is to investigate the root cause and propose a solution to resolve the bottleneck.
  • Cookies are text files with small pieces of data — like a username and password — that are used to identify your computer as you use a computer network. Specific cookies known as HTTP cookies are used to identify specific users and improve your web browsing experience.
  • Data stored in a cookie is created by the server upon your connection. This data is labeled with an ID unique to you and your computer.
  • When the cookie is exchanged between your computer and the network server, the server reads the ID and knows what information to specifically serve to you.

With a few variations, cookies in the cyber world come in two types: session and persistent.

Session cookies are used only while navigating a website. They are stored in random access memory and are never written to the hard drive.

When the session ends, session cookies are automatically deleted. They also help the “back” button or third-party anonymizer plugins work. These plugins are designed for specific browsers to work and help maintain user privacy.

Persistent cookies remain on a computer indefinitely, although many include an expiration date and are automatically removed when that date is reached.

Persistent cookies are used for two primary purposes:

Authentication. These cookies track whether a user is logged in and under what name. They also streamline login information, so users don’t have to remember site passwords.
Tracking. These cookies track multiple visits to the same site over time. Some online merchants, for example, use cookies to track visits from particular users, including the pages and products viewed. The information they gain allows them to suggest other items that might interest visitors. Gradually, a profile is built based on a user’s browsing history on that site.

The HTTP protocol, which is used to exchange information over the web, is also what maintains cookies.

HTTP comes in two flavors: stateless and stateful. The stateless HTTP protocol keeps no record of previously accessed web pages, while the stateful HTTP protocol keeps some history of previous browser-server interactions; cookies rely on this stateful behavior to maintain user interactions.

Whenever a user visits a site or page that uses a cookie, a small piece of code associated with that HTML page (generally a call to some scripting language that writes the cookie, e.g. JavaScript, PHP, or Perl) writes a text file on the user's machine, called a cookie.

Here is an example of the Set-Cookie response header a server sends to instruct the browser to write a cookie:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the repeat visit of the same user to that domain. The expiration time is set when the cookie is written and is decided by the application that uses the cookie.
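
A minimal sketch of producing the Set-Cookie header shown above, using Python's standard http.cookies module (the name, value, and attribute values are made up):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"
    cookie["session_id"]["expires"] = "Wed, 01 Jan 2031 00:00:00 GMT"
    cookie["session_id"]["path"] = "/"
    cookie["session_id"]["domain"] = "example.com"

    # Prints a header line of the form:
    # Set-Cookie: session_id=abc123; Domain=example.com; expires=...; Path=/
    print(cookie.output())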

1. Understand the objective for workload model creation (performance testing or capacity planning)
2. Understand system objectives, application landscape and end users of the system
3. Understand the business drivers that impact the end user load on the system.
4. Identify the top critical use cases and navigation patterns
5. Identify the user load distribution level across use cases and its transactions. Workload characterization should focus on creating realistic load on all tiers.
6. Gather the average and peak access statistics like users per unit time, page views per unit time, average session duration, current and target load levels, user abandonment rate, etc.

1. System understanding
2. Identify key use cases
3. Identify transactions, test data and think times to be used
4. Identify user distribution levels
5. Identify current and projected user load targets / transactional volume targets
6. Create the workload model and validate it using Little's Law (see the worked sketch after this list)
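
A worked sketch of the Little's Law check in step 6. For a closed workload, N = X * (R + Z), where N is concurrent users, X is throughput, R is response time, and Z is think time; all the numbers below are hypothetical.

    # N = X * (R + Z): concurrent users needed to sustain a target throughput.
    throughput = 50.0      # X: target transactions per second
    response_time = 2.0    # R: seconds per transaction
    think_time = 8.0       # Z: seconds a user pauses between transactions

    concurrent_users = throughput * (response_time + think_time)
    print(concurrent_users)  # 500.0 users; if the model assumes far fewer, it is inconsistent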

1. Performance modelling is the process of creating performance models.
2. Performance models are built early, usually defined during the design phase, and continuously refined throughout the SDLC.
3. Performance models are built using analytical modelling (queuing theory; see the sketch after this list), simulation modelling, or statistical modelling techniques.
4. Performance models are used to evaluate architectural or design trade-offs before building the full system.
5. Performance models are used to predict the performance behavior of the system by feeding in performance data captured from the system as it gets deployed.
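
A minimal analytical-modelling sketch (item 3) using the simplest queuing-theory model, the M/M/1 queue: with arrival rate lam and service rate mu, utilization is rho = lam/mu and mean response time is 1/(mu - lam). The rates below are hypothetical.

    def mm1(lam, mu):
        assert lam < mu, "unstable: arrivals outpace service"
        rho = lam / mu                 # server utilization
        return rho, 1.0 / (mu - lam)   # utilization, mean response time (seconds)

    # Response time grows sharply as utilization approaches 100%: exactly the
    # kind of trade-off a performance model exposes before the system is built.
    for lam in (40, 70, 90, 99):
        rho, resp = mm1(lam, mu=100)
        print(f"arrivals={lam}/s utilization={rho:.0%} response={resp * 1000:.0f} ms")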

1. Requirement specification / use case documents
2. Architectural diagrams
3. Interviews with business stakeholders (functional experts, business analysts, technical SMEs, etc.)
4. Questionnaires answered by beta users
5. Marketing brochures / release manuals
6. Performance benchmarks of similar applications
7. Use case prioritization index
8. Analysis of end-user access patterns by releasing the application to beta users