Performance testing

The challenge with performance testing is that the tests run in a simulated environment.

Simulated environments are fine for other kinds of tests, such as system tests, since certain aspects are deliberately abstracted away. Mock servers, for example, can simulate behavior similarly to production.

However, unlike in functional tests, validating the system's responsiveness requires taking everything in the environment into account. At the end of the day, applications run on actual hardware, so the hardware, as well as the overall situation, impacts the application's performance. The system's performance in a simulated environment will never match its behavior in production. Therefore, performance tests are not a reliable way of finding performance bottlenecks.

There are many scenarios in which an application performs much better in production than in performance tests, depending on the immediate environment and load. The HotSpot JVM, for example, performs better under high load.
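One reason for this is just-in-time compilation: HotSpot interprets code at first and only compiles and optimizes methods once they run hot. The following sketch illustrates this warm-up effect; the workload and iteration counts are illustrative choices, not a rigorous benchmark.

```java
// Illustrative sketch: observing JIT warm-up in the HotSpot JVM.
// The workload and iteration counts are arbitrary example values.
public class WarmupDemo {

    // A small CPU-bound workload the JIT can optimize once it runs hot.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += Integer.bitCount(i) * 31L;
        }
        return sum;
    }

    // Measures a single execution of the workload in nanoseconds.
    static long timeOnce(int n) {
        long start = System.nanoTime();
        workload(n);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeOnce(1_000_000);    // first, mostly interpreted run
        for (int i = 0; i < 50; i++) {      // let the JIT compile the hot path
            workload(1_000_000);
        }
        long warm = timeOnce(1_000_000);    // typically much faster than the cold run
        System.out.println("cold ns: " + cold + ", warm ns: " + warm);
    }
}
```

A performance test that measures only the cold phase would report numbers a long-running production JVM would never show.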

Investigating performance constraints therefore can only happen in production. As shown earlier, the jPDM investigation processes, together with sampling techniques and tools applied to the production system, will identify the bottleneck.

Performance and stress tests help in finding obvious code or configuration errors, such as resource leaks, serious misconfiguration, missing timeouts, or deadlocks. These bugs will be found before deploying to production. Performance tests can also capture performance trends over time and warn engineers if the overall responsiveness decreases. Still, such trends only indicate potential issues and should not lead engineers to premature conclusions.
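Such a trend check can be as simple as comparing each test run's average response time against a rolling baseline of recent runs. The following is a minimal sketch; the class name, window size, and regression threshold are illustrative assumptions, not part of any framework.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: warn when a performance test run is notably slower
// than the rolling average of recent runs. Threshold values are examples.
public class ResponseTimeTrend {

    private final Deque<Double> recentAverages = new ArrayDeque<>();
    private final int window;
    private final double regressionFactor;

    public ResponseTimeTrend(int window, double regressionFactor) {
        this.window = window;
        this.regressionFactor = regressionFactor;
    }

    // Records the average response time of one test run; returns true
    // if the run exceeds the recent baseline by the regression factor.
    public boolean recordAndCheck(double averageMillis) {
        boolean regression = false;
        if (!recentAverages.isEmpty()) {
            double baseline = recentAverages.stream()
                    .mapToDouble(Double::doubleValue)
                    .average()
                    .orElse(averageMillis);
            regression = averageMillis > baseline * regressionFactor;
        }
        recentAverages.addLast(averageMillis);
        if (recentAverages.size() > window) {
            recentAverages.removeFirst();
        }
        return regression;
    }
}
```

A flagged run is a signal to investigate, not proof of a regression; as the text notes, the simulated environment alone can explain the difference.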

Performance and stress tests only make sense when run against the whole network of interdependent applications. This is because of the dependencies and performance influences of all the systems and databases involved. The setup needs to be as similar to production as possible.

Even then, the outcome will not be the same as in production. It's highly important that engineers are aware of this. The results of performance tests, and any optimizations based on them, are therefore never fully representative.

For performance tuning, it's important to use investigative processes together with sampling on production instead. Continuous Delivery techniques help bring configuration changes to production quickly. Engineers can then use the sampling and performance insights to see whether changing the setup has improved the overall solution. And again, the overall system needs to be taken into account. Simply tuning a single application without considering the whole system can have negative effects on the overall scenario.
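To give an idea of what sampling means in practice, the following sketch periodically captures thread stacks via the standard `Thread.getAllStackTraces()` API and counts the topmost frames, hinting at where the application spends its time. It is a deliberately simplistic stand-in; real production setups would use dedicated tools such as Java Flight Recorder or async-profiler.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of sampling: capture all thread stacks at intervals
// and count the topmost frame of each stack. Frequently seen frames hint
// at hot spots. Not a substitute for a real profiler.
public class StackSampler {

    // Takes the given number of samples, sleeping between them, and returns
    // how often each top stack frame ("Class.method") was observed.
    public static Map<String, Integer> sample(int samples, long intervalMillis)
            throws InterruptedException {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = entry.getValue();
                if (stack.length > 0) {
                    String frame = stack[0].getClassName()
                            + "." + stack[0].getMethodName();
                    counts.merge(frame, 1, Integer::sum);
                }
            }
            Thread.sleep(intervalMillis);
        }
        return counts;
    }
}
```

Because sampling like this runs against the live system under real load, its insights reflect actual production behavior rather than a simulated approximation.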
