Software performance requirements are like happiness: everyone needs them, but they mean something different to everyone ;)
- We need to formulate the task we are going to work on;
- We agreed to use numbers when talking about performance.
A fair number of my dear readers found this post by googling “performance requirements” because they need these requirements. The only question is
How to get performance requirements?
Let’s use a top-down approach, i.e. go from high-level requirements to low-level details.
First, you answer questions like these: "Why am I starting this performance activity? What issue should I resolve for the business?" or "Why did my boss ask me to work on this? What result does he expect to get in the end?".
Second, you start thinking as a technical person and try to understand how exactly you will work on this: how you will create tests, what results these tests should show to make everyone happy, which tests will actually reflect reality, etc. Say:
- Which scenarios/functionality will you take into account?
- What are the critical metrics for these scenarios (ops/sec, response time in milliseconds, etc.)?
- What are the required values of these metrics?
- What hardware do you use in the production environment? Do you have a chance to use the same hardware in tests?
- What should resource utilization on your servers be? What is the availability requirement? 24×7? Is your application clustered? If so, perhaps we can weaken the requirements for each of the nodes;
- You can even go further and formulate requirements for each component of your application. This is up to you. I don’t like component performance testing and micro-benchmarking (let’s talk about this in further posts), so I would not do it.
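To make the outcome of this questionnaire tangible, here is a minimal sketch of what the answers could look like as a checklist in code. All scenario names and numbers are made-up placeholders, not recommendations:

```python
# A requirements sheet captured as plain data.
# Scenario names and target numbers are invented for illustration.
requirements = {
    "scenarios": {
        "login":  {"load_ops_per_sec": 50, "max_avg_response_ms": 1000},
        "search": {"load_ops_per_sec": 20, "max_avg_response_ms": 1000},
    },
    "max_cpu_utilization_pct": 70,
    "max_memory_utilization_pct": 70,
    "test_duration_hours": 18,
}

def check_scenario(name, measured_avg_ms):
    """Return True if the measured average response time meets the target."""
    target = requirements["scenarios"][name]["max_avg_response_ms"]
    return measured_avg_ms <= target

print(check_scenario("login", 850))   # 850 ms meets the 1000 ms target
```

The point is not the code itself but that every answer becomes an explicit, checkable number instead of a vague feeling.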
When should you stop asking/answering? – When you have formulated all the requirements. (For example, when you have a clear understanding of how to write tests and interpret their results.)
Okay, what should you do if you don’t know the answers to some of the questions mentioned above, say, which scenarios you should use or what resource utilization should be?
First of all, ask your colleagues. Say, the analysts/developers could help you choose the most important scenarios, or your boss could specify the task in more detail (perhaps he has more details in his mind and is waiting for you to ask this question, to make sure you have started working on this :)). By the way, all the performance requirements may already be formulated; someone just forgot to send you the document :).
But if nobody can help you, just document your assumptions and make sure everyone had a chance to object in a constructive way. So either you improve your assumptions based on their ideas, or you leave them as is. At least you notified everyone and documented everything. If you are mistaken somewhere, life will correct you. But it’s much better than handwaving.
Let’s consider a couple of examples to clarify the points mentioned above.
Say, I’m going to improve the performance of a J2EE application which doesn’t sustain the required (in particular, the current) user load.
In other words, I need to formulate performance requirements for the upcoming releases of this application, i.e. formulate what requirements a release should meet to work successfully in the production environment.
Even simpler: what tests should I create, and what results should I get from these tests, to have no issues in the production environment (or to reduce their probability)?
(Let’s assume I have a chance to run my tests in an environment which is exactly like the production one.)
Okay, let me start:
- The application has dozens of scenarios. Fortunately, most of them are called about once a month. So I will forget about the rare scenarios and, according to the access logs, use only the frequent ones;
- How often should I call these scenarios in tests? Once again, I can take this information from production statistics, or ask the business people for more information to compute the expected load on new scenarios;
- What should the response time be for each of these scenarios? As this is a web portal, response time should be comfortable for our users. Let me require, say, less than 1 second;
- How long should I run these tests? I know the application should work at least from 5 am till 11 pm without a reboot, so I should check that it works without any issues for 18 hours. (Actually, you can increase the load in tests a little bit to reduce the run time. This isn’t an entirely honest manipulation, but it’s possible under some circumstances);
- What should resource utilization on my servers be during the test run? First of all, it should be stable (otherwise you have a potential emergency in your production environment, say, a memory leak or something else). Of course, it will increase in the first minutes/hours, but then it should stabilize. Second, let’s keep the maximum utilization of all resources (CPU and memory) below 70%.
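The first point above — picking the frequent scenarios out of the access logs — can be sketched in a few lines. The log format and scenario names below are invented for the example:

```python
from collections import Counter

# Hypothetical access-log lines ("<timestamp> <scenario> <response_ms>");
# both the format and the scenario names are assumptions for illustration.
log_lines = [
    "2014-01-01T10:00:01 search 120",
    "2014-01-01T10:00:02 login 340",
    "2014-01-01T10:00:02 search 95",
    "2014-01-01T10:00:03 search 150",
    "2014-01-01T10:00:04 report 2100",
    "2014-01-01T10:00:05 login 280",
]

# Count how many times each scenario was called.
counts = Counter(line.split()[1] for line in log_lines)
total = sum(counts.values())

# Keep only scenarios that make up, say, at least 20% of the traffic.
frequent = {name: n for name, n in counts.items() if n / total >= 0.2}
print(frequent)  # {'search': 3, 'login': 2} -- 'report' is too rare
```

In real life you would also use the same statistics to derive the call rates (ops/sec) for each kept scenario, not just the set itself.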
This is something to start with. These requirements may be incomplete or even wrong. Life will correct me. For example, from my own practice, here are the corrections that are possible:
- I can change the set of scenarios. I must review this set each release if my application changes;
- I can change the requirements on ops/sec because the statistics for some of the scenarios changed, or some scenarios are going to be used more frequently;
- I can even change the metric from ops/sec to ops/hour, because I see that using ops/hour allows emulating the user load more accurately;
- Response time. Of course, even 1 second can be too much for users to feel comfortable. However, your bosses may think otherwise 😉 They produce releases to ship functional features, and they like to remember about performance only when something has happened in production. So even a 1-second response time limit may not survive, unfortunately, and you will weaken this requirement, say, to 3 seconds. Or you can keep the 1-second requirement only for 2 scenarios. Or you can move from "average" to "95%";
- Resource utilization. You may notice that 70% as a maximum is too high to keep your application free of emergencies the whole day; say, you will correct it to 60%. Or you will set separate requirements (memory, CPU, network, etc.). Or one day your application may become clustered, so you can weaken the requirement on resource utilization. Or one day your boss will come to you and say: "My dear performance engineer, let’s keep the average utilization around 95%. I wouldn’t pay for the percent of utilization we don’t use". You can also add requirements on other resources, say, connection pools, thread pools, etc.
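The move from "average" to "95%" mentioned above is easy to illustrate with made-up numbers: a comfortable-looking average can hide slow outliers that a 95th percentile exposes:

```python
import math

# Made-up response times (ms): mostly fast, with two slow outliers.
times_ms = [300] * 18 + [4800, 5200]

avg = sum(times_ms) / len(times_ms)          # 770.0 ms -- looks fine

def percentile(values, pct):
    """Nearest-rank percentile: the value at the ceil(n * pct/100)-th position."""
    ordered = sorted(values)
    rank = math.ceil(pct * len(ordered) / 100)
    return ordered[rank - 1]

p95 = percentile(times_ms, 95)               # 4800 ms -- exposes the outliers
print(f"average = {avg:.0f} ms, p95 = {p95} ms")
```

With a 1-second requirement on the average, this made-up release would pass; with a 1-second requirement on the 95th percentile, it would fail, which is exactly why the metric choice matters.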
You were asked to create a benchmark (B) to measure third-party software, say, component A. The results of running B could help you decide whether it’s possible to use A in your system (of course, this is not the only way to make this decision).
What are performance requirements for your benchmark application?
- You have to choose the scenarios of A which you would like to put under stress. You can base your choice either on statistics from a working system or on your assumptions. Document your choice (say, scenarios X, Y, Z) and why you made it;
- You have 3 scenarios. Are they equally important? Or should X be called 3 times more often than Y, and Z 10 times more often than X?
- What should B measure? Is it important how many operations per second you can get from A on X, Y, Z? Is response time important? Average? Okay, document it;
- Should B run on the same machine as A or on another one (say, both options are possible)? If we consider the benchmark as a single system, A should be the bottleneck of this system (as we would like to measure A rather than B or anything else). So either B leaves enough resources for A (CPU, memory, network, everything) all the time, or you optimize B, or you move it to another machine;
- How quickly should B itself work? Consider a very simple flow: (a) B creates a request to A -> (b) A handles it and sends a response to B -> (c) B checks the result. How much should (a) and (c) take? It depends on whether you count the response time of A including (a) and (c) or not. If you include them, let’s require, say, at most 5% of (b) (more exactly, of what we expect to get for (b));
- How long will B run? I don’t know. How long are you going to use A without a restart? That’s the answer ;).
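The 5% overhead requirement from the flow above can be checked directly in B. Here is a rough sketch where all three steps are placeholders (the real (b) would be a call into A):

```python
import time

def build_request():            # (a) request creation - placeholder
    return {"op": "X"}

def call_component_a(req):      # (b) placeholder standing in for component A
    time.sleep(0.01)            # pretend A takes ~10 ms
    return "ok"

def check_result(resp):         # (c) result checking - placeholder
    return resp == "ok"

t0 = time.perf_counter(); req = build_request();        t_a = time.perf_counter() - t0
t0 = time.perf_counter(); resp = call_component_a(req); t_b = time.perf_counter() - t0
t0 = time.perf_counter(); ok = check_result(resp);      t_c = time.perf_counter() - t0

# (a) + (c) should stay under 5% of (b), otherwise B distorts the measurement.
overhead_pct = 100 * (t_a + t_c) / t_b
print(f"harness overhead: {overhead_pct:.1f}% of component time")
```

If the overhead check fails, that is the signal from the previous bullet: optimize B or move it to another machine.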
Let’s see what we got:
- B should emulate scenarios X, Y, Z;
- B should count ops/sec and average response time for X, Y, Z separately;
- B should limit the number of requests per second to keep X ops/sec = 3 × Y ops/sec = Z ops/sec / 10;
- B should run on the same machine as A and consume less than 5% of resources compared to A.
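Putting the summary together, the core loop of B could look like the sketch below. The scenario bodies are pure placeholders standing in for real calls to A, and the numbers are invented; a real run would be time-bound and would randomize the call mix:

```python
import time

# Placeholders standing in for scenarios X, Y, Z of component A.
def scenario_x(): time.sleep(0.001)
def scenario_y(): time.sleep(0.002)
def scenario_z(): time.sleep(0.0005)

# X is called 3 times more often than Y; Z 10 times more often than X.
mix = [("X", scenario_x)] * 3 + [("Y", scenario_y)] * 1 + [("Z", scenario_z)] * 30

stats = {name: {"count": 0, "total_s": 0.0} for name in ("X", "Y", "Z")}

start = time.perf_counter()
for _ in range(10):                 # a real benchmark would run for a fixed duration
    for name, call in mix:
        t0 = time.perf_counter()
        call()
        stats[name]["count"] += 1
        stats[name]["total_s"] += time.perf_counter() - t0
elapsed = time.perf_counter() - start

# Report ops/sec and average response time for X, Y, Z separately.
for name, s in stats.items():
    avg_ms = 1000 * s["total_s"] / s["count"]
    print(f"{name}: {s['count'] / elapsed:.0f} ops/sec, avg {avg_ms:.2f} ms")
```

Note how the requirements above translate one-to-one into code: the mix ratio enforces the call frequencies, and the per-scenario stats give exactly the metrics we agreed to measure.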
Actually, our example isn’t very specific, so we could only assume some details. But I hope you can see how we think while formulating these requirements.
Don’t hope that Google or your friends will share performance requirements ready for your use …
First of all, let me tell you a story.
One of my colleagues (we worked together at company Y), Mr. Kh., asked me what performance requirements exactly I had used while working for Y. By this time he had left Y and worked for another company with a completely different business, applications, etc. As he explained to me, he would like to use my requirements at his new company to improve the performance situation there.
At that moment I remembered that Mr. Kh. isn’t even a technical specialist anymore, just a boss. Sorry to all the bosses reading this now ;). So, perhaps, he would just pass my requirements to his subordinates as "the only true requirements". Our talk therefore boiled down to a general explanation of how to get performance requirements, as I would not like to degrade performance at his new company by telling him more specific numbers. 🙂
Why? Because I know nothing about their business, their customers, their production environment, or their application. How can we hope my solution will solve his task if we don’t know what the task is?
So, please don’t search for ready-made performance requirements for your application and business needs! (Sorry if this complicates things for some of you. 😉 )