Always care about performance – for individual users, too
Performance tests are usually planned around the highest possible load, generated by large groups of simultaneous users. Sometimes, however, a single user can undo all our hard work if they behave in an unforeseen way.
When planning performance tests, we focus mostly on functionalities used by dozens of users or more. In my book, Software testing in practice, I wrote about the possible dangers of this approach and how it can leave smaller groups of users dissatisfied with an application's performance. It is worth analyzing the functions used by single users and considering who may want to use them, when, and how. Some functions may not be used often, but they can be critical in certain situations. Recently, while conducting a performance testing training, I learned of two interesting examples my trainees had dealt with, and I would like to discuss them here, because they touch on a very important topic.
Let's start with an example from the book. While testing a CMS application, we focused on the performance of the functions that present data saved in the service. We assumed that the modifying functions would not be executed more than once a day and were irrelevant in terms of pure performance. This assumption held during normal use. What we failed to notice, however, was that after the system was delivered, the client would want to upload large quantities of data into the empty service. The editorial functionalities performed poorly even with a small number of active users. The result was obvious: preparing the service was a slow process, and it cost us a lot of additional work before the new system could be released.
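A cheap way to catch this kind of problem before delivery is to time the rarely used modifying functions during a simulated bulk load and watch whether per-item latency grows as the service fills up. The sketch below is hypothetical (the `slow_save` function merely simulates an editorial operation whose cost grows with the amount of stored data); it is not the actual CMS from the story.

```python
import time

def bulk_import(items, save_fn):
    """Load items one by one and record the latency of each save."""
    latencies = []
    for item in items:
        start = time.perf_counter()
        save_fn(item)
        latencies.append(time.perf_counter() - start)
    return latencies

# Simulated editorial "save" whose cost grows with the amount of data
# already stored -- the pattern that made the empty-service upload slow.
store = []
def slow_save(item):
    store.append(item)
    time.sleep(0.0001 * len(store))  # cost grows with store size

lat = bulk_import(range(200), slow_save)
avg_first = sum(lat[:20]) / 20
avg_last = sum(lat[-20:]) / 20
print(f"first 20 avg: {avg_first:.4f}s, last 20 avg: {avg_last:.4f}s")
```

A plot or a simple comparison like this makes the degradation visible long before a client starts the real data migration.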
During the training, the trainees told me an amusing story about problems with a CRM system they were building. One day, a large group of users reported that the system had become slow. The administrator noticed a suspicious process consuming a lot of the system's resources and decided to kill it. That seemed to solve the problem, but after a short while it all started again. A thorough analysis showed that the process was a report generation job started by one of the users. This user, seeing that an important and required report had not been produced, kept restarting the job, making the system unusable for everyone else. Unfortunately, once the problem came to light, report generation had to be restricted to weekends.
The second story concerned an online application for optimizing translation work. This type of software divides the text to be translated into segments and, when it finds segments similar to ones already translated, suggests ready-made translations. Assessing that similarity requires some complex computations over the previously translated segments. Such a solution is generally welcomed by users. This time, however, problems arose in the final stage of a big project, when many translated segments had already been collected. It turned out that the system's suggestions appeared too slowly, and users had to wait for them for up to several seconds. It was simply faster to do the work manually! The dissatisfaction was all the stronger because the system failed at the end of a huge project, precisely when it was needed most.
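The failure mode in that story is easy to reproduce in miniature. A naive translation memory scans every stored segment and scores its similarity to the new one, so lookup time grows linearly with the size of the memory and is slowest exactly when the project is largest. This is only an illustrative sketch using Python's standard `difflib.SequenceMatcher`, not the algorithm of the actual product:

```python
import difflib

def suggest(segment, memory, threshold=0.8):
    """Naive translation-memory lookup: linearly scan every stored
    (source, translation) pair and return the translation of the best
    match, if it clears the similarity threshold. Cost is O(len(memory)),
    so suggestions get slower as the project grows."""
    best_ratio, best_translation = 0.0, None
    for source, translation in memory:
        ratio = difflib.SequenceMatcher(None, segment, source).ratio()
        if ratio > best_ratio:
            best_ratio, best_translation = ratio, translation
    return best_translation if best_ratio >= threshold else None

# A memory with 1000 previously translated segments.
memory = [(f"source sentence number {i}", f"translation {i}")
          for i in range(1000)]
print(suggest("source sentence number 42", memory))
```

Real tools avoid the linear scan with indexing, but a single-user performance test against a realistically full memory would have exposed the slowdown before the final stage of the project.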
How can such problems be avoided, then? There is no easy answer, because every application is different. I think the most important thing is to be aware that performance and speed problems can appear even when the application is used by just a few users. It is worth looking at the systems we build with a healthy dose of critical thinking and imagination, and considering the many scenarios that may play out after release. Staying in contact with the users helps a great deal, too.
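One practical habit that follows from this advice is to give even rarely used functions an explicit response-time budget and check it with a single-user test. A minimal sketch, where `generate_report` is a hypothetical stand-in for any seldom-used but critical operation:

```python
import time

def time_call(fn, *args, budget_s=1.0):
    """Time a single invocation and report whether it fits the budget."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= budget_s

def generate_report(rows):
    # Stand-in for a rarely used but critical function,
    # e.g. the CRM report from the story above.
    return sum(rows)

result, elapsed, ok = time_call(generate_report, range(1_000_000),
                                budget_s=1.0)
print(f"result={result}, took {elapsed:.3f}s, within budget: {ok}")
```

Such a check costs almost nothing to run in CI, and it turns "who, when, and how may want to use this function" into a concrete, repeatable question.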
Jacek Okrojek – tester, coordinator, and test manager with many years of experience in testing high-availability systems. As a quality assurance consultant, he has led and taken part in many complex projects in the medical, telecommunications, and investment banking sectors. He has worked on integration, performance, and user acceptance tests. Currently, he works at TestArmy as a performance testing and test automation expert.