I have made a list of the top ten things load testers frequently fail to do that make me feel like smiting them.
1. Thou shalt know how thy test tool works.
The worst performance testers I have met were always more concerned with whether they could get their scripts to run than with whether the tests they were running were realistic. Read the documentation, practice, spend some time figuring out what all the settings do, then relate how your scripts run back to how real users exercise your application.
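A classic example is think time: a recorded script replayed with think time disabled generates far more load than the real users it is supposed to represent. A minimal VuGen sketch of what I mean (the transaction name, URL and 10-second pause are illustrative values only, not from any real project):

    Action()
    {
        lr_start_transaction("search");

        web_url("search",
            "URL=http://example.com/search?q=widgets",  // hypothetical URL
            LAST);

        lr_end_transaction("search", LR_AUTO);

        // A real user pauses to read the results page. If the run-time
        // settings ignore think time, this Vuser loops as fast as the
        // server can respond, which is rarely realistic.
        lr_think_time(10);

        return 0;
    }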
2. Thou shalt gather realistic usage data.
Garbage in, garbage out. If your transaction volumes are wrong, then your load test is wrong.
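A quick way to sanity-check your workload model is Little’s Law: concurrent users = arrival rate × (response time + think time). Using made-up numbers purely for illustration: if your busiest hour sees 3,600 searches (one per second), and each search takes 2 seconds plus 28 seconds of think time, you need about 1 × (2 + 28) = 30 concurrent vusers to generate that volume. Running 300 vusers with no think time is not the same workload, even if the tool happily lets you do it.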
3. Thou shalt have testable requirements.
Non-functional requirements (especially load- and performance-related requirements) are usually an afterthought on many projects. This shouldn’t stop you from trying to gather the requirements you need for your tests. The business approach of “let us know how fast it is, and we will let you know if that’s okay” isn’t good enough. Get some numbers: for example, “95% of searches complete within 4 seconds at peak load”. The numbers can change in the future (maybe call them “targets” or “guidelines” rather than “requirements”), but you need something to test against before you start.
4. Thou shalt write a test plan.
Even if you already know what you’re going to be doing, other people would probably like to know too – they might even be able to help. Besides, a signed-off test plan has saved many a tester from the wrath of project management.
5. Thou shalt test for the worst case.
Don’t test with transactions from an average day; test for the busiest day your business has ever had, and add a margin for growth. Testing failover? A server doesn’t fall over at midnight when no one is using your application (would we even care in that situation?), it falls over in the middle of the day when lots of real people are using it.
6. Thou shalt monitor your test environment infrastructure.
I feel that I have to spell this out, because I still see people who don’t do it. Monitoring your servers allows you to figure out more easily where the problem is. You can also make neat observations like “response times for the new version of the application are identical to the previous version, but CPU utilisation on the servers has increased by 10%”. When I say “monitor your servers”, this includes your load generators.
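Most tools will graph server metrics for you, but even when they can’t, you can push a number into your results yourself. A sketch using LoadRunner’s lr_user_data_point function (the metric name is invented, and the hard-coded value stands in for something you would really parse from a status page or monitoring endpoint):

    // Record a custom metric so it appears alongside the standard
    // graphs in Analysis. The value here is hard-coded purely for
    // illustration; in practice you would read it from somewhere real.
    double queue_depth = 42.0;  /* hypothetical value */
    lr_user_data_point("AppServerQueueDepth", queue_depth);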
7. Thou shalt enforce change control on your environment.
The final thing you tested should be what is deployed into Production – same application version, same system configuration. It’s easy to lose track of what you are actually testing against if people are making uncontrolled changes to your environment, or making tuning changes without recording what they changed. Keep a list of the changes that are made, even if you are in a hurry, and always make sure you know what you are testing against.
8. Thou shalt use a defect tracking tool.
An untracked defect is a little like a tree that falls in the forest when no-one is around – no-one cares. Raising defects lets everyone know there is a problem (not just the people who should be working to fix it). It also provides a neat repository for keeping track of everything that has been tried in order to fix the problem.
9. Thou shalt rule out thy own errors before raising a defect.
“Oops, my bad!” is a great way to lose credibility with the people who are going to be fixing your defects. If you don’t have credibility, you will have to work much harder to convince people that the problem you are seeing is due to a fault in the system rather than a fault in your test scripts. But don’t be so afraid of making a mistake that you test “around” errors, like the people who see HTTP 500 errors under load and “solve” the problem by changing their scripts to put less load on the system. It always helps if you have followed commandment #1: Thou shalt know how thy test tool works.
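Instead of scripting around errors, make your script surface them, so a genuine server fault fails the transaction loudly. A sketch of what I mean (the step name, URL and form field are placeholders):

    int http_status;

    lr_start_transaction("submit_order");

    web_submit_data("order_form",               // placeholder step name
        "Action=http://example.com/order",      // hypothetical URL
        "Method=POST",
        ITEMDATA,
        "Name=qty", "Value=1", ENDITEM,
        LAST);

    // Check what the server actually returned, and fail loudly rather
    // than quietly reducing the load until the errors disappear.
    http_status = web_get_int_property(HTTP_INFO_RETURN_CODE);
    if (http_status >= 500) {
        lr_error_message("HTTP %d under load - time to raise a defect, "
            "not to change the script.", http_status);
        lr_end_transaction("submit_order", LR_FAIL);
    }
    else {
        lr_end_transaction("submit_order", LR_PASS);
    }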
10. Thou shalt pass on your knowledge.
Write a Test Summary Report and let management know what you found (and fixed) during testing; make some PowerPoint slides; hold a meeting. Let the Production monitoring group know which metrics are useful to monitor, and let them re-use your LoadRunner scripts for Production monitoring with BAC. Leave some documentation for future testers; don’t make them gather requirements and transaction volumes again, or re-write all your scripts because they don’t understand them. And retain your test results until you are sure that no-one is ever going to ask about that test you ran all those months ago.
13 Comments
All good stuff, thanks!
Chris
You’ve got a bunch of good points here, but there are 3 that I have to comment on.
“3. Thou shalt have testable requirements.” First, I flatly disagree. Quantified requirements are both wrong and misleading (unless they come from a contract or SLA). Second, you already have testable requirements: happy users. I assure you that I, and those I have taught, can (and do, on a regular basis) do fabulous performance testing without ever judging the “goodness or badness” of a single number. Simple trend analysis, plus users expressing their feelings about performance in qualitative terms while they are conducting UAT under load, is faster, easier and more accurate.
“4. Thou shalt write a test plan.” It’s been years since I’ve used a test plan, and twice as long since I so much as asked someone to sign off on one. Or maybe I always have a test plan… it goes something like this:
– Figure out why we’re performance testing in the first place.
– Figure out what I’ve got available to test with.
– Figure out what the application does that anyone cares about.
– Design, develop and execute tests based on current risk assessments.
– Analyze, Report, Reassess and Iterate.
What more do you need? Especially if you commit to reporting results and revised priority lists every 24-48 hours.
“8. Thou shalt use a defect tracking tool.” It depends what you mean by “tool”. If “tool” includes whiteboards, email and sticky notes, then I guess I can live with this.
Now, had you listed these as “10 Critical Considerations of Load Testing”, I’d agree that all of these points should be explicitly considered.
—
Scott Barber
President & Chief Technologist, PerfTestPlus, Inc.
Executive Director, Association for Software Testing
http://www.perftestplus.com
http://www.associationforsoftwaretesting.org
sbarber@perftestplus.com
“If you can see it in your mind…
you will find it in your life.”
Yes, we also carry out load testing of all our applications, using LoadRunner and WAPT. Your guides are good, but please post examples. I can give you ten hours of theory on load testing.
Lava Kafle
QA Engineering Manager
http://www.d2hawkeyeservices.com
Scott pointed out well what I wanted to say. Good stuff, but I would suggest you improve it. Let me add only that 2 and 3 disagree with 5 to some extent: what is the point of gathering “target” requirements and “realistic” data if you are going to test the worst case anyway?
Response from me shortly.
Here is Brent Strange’s take on commandment #9.
http://qainsight.net/2007/05/15/Crying+Wolf.aspx
Thanks for a very useful article.
http://www.testerqa.com
#1 and #10 are so far apart but so tightly coupled. In my experience, people constantly question the test tool when an issue arises (Silk Performer in my day). You need to know your test tool and be able to portray how it works (#10, pass on the knowledge, in a different way than you explained) in one of two ways, depending on the audience:
1. Technical and in depth
2. Non-technical and dumbed down
Good post. Thanks! 🙂
Thou shalt visit http://www.performancewiki.com for short and useful tuning tips.
Hi all,
My script was running perfectly; the next day, the same script stopped executing. The error displayed in the agent service log is the following:
Error -10343 : Communication error: Failed to connect to a PROXY Server with the following settings:
(-server_port=0)(-server_fd_primary=2)(-server_type=8)(-allowed_msg_size=0)(-allowed_msgs_num=0)(-proxy_configuration_on) (sys error message – ) [MsgId: MERR-10343]
Can anyone please tell me how to correct this problem? No proxy server is used.
-26627 : vuser_init.c(19): Error: HTTP Status-Code=404 (/error.jsp) for “https://qa-web-001.sjc1.yumenetworks.com/
The execution log displays the above error.
vuser_init.c(19): web_submit_data highest severity level was “ERROR”, 0 body bytes, 407 header bytes
LoadRunner and Silk are too expensive for us. We like a cheap and easy web load testing tool that uses AWS EC2.
Well, Anonymous, whoever you are (your name is invisible), it is clear that your proxy was down, so your LoadRunner scripts did not work.