How can I performance test using shell scripts - tools and techniques?
I have a system to which I must apply load for the purpose of performance testing. Some of the load can be created via LoadRunner over HTTP.
However in order to generate realistic load for the system I also need to simulate users using a command line tool which uses a non HTTP protocol* to talk to the server.
* edit: actually it is HTTP, but we've been advised by the vendor that it's not something easy to record/script and replay. So we're limited to having to invoke it using the CLI tool.
I'm constrained by not having the LoadRunner licences to do this, and by not having the time to make the case for getting them.
Therefore I was wondering if there is a tool I could use to control the concurrent execution of a collection of shell scripts (it needs to run on Solaris), which will be my transactions. Ideally it would be able to ramp up load in accordance with a predetermined schedule.
I've had a look around and can't tell if JMeter will do the trick. It seems very web-oriented.
You can use the script below to generate load for HTTP/S requests:
#!/bin/bash
# Define variables
set -x        # run in debug mode
DURATION=60   # how long load should be applied, in seconds
TPS=20        # number of requests per second
end=$((SECONDS + DURATION))
# Start load: every second, fire $TPS requests in the background
while [ $SECONDS -lt $end ]; do
    for ((i = 1; i <= TPS; i++)); do
        curl -X POST <url> -H 'Accept: application/json' -H 'Authorization: Bearer xxxxxxxxxxxxx' -H 'Content-Type: application/json' -d '{}' --cacert /path/to/cert/cert.crt -o /dev/null -s -w '%{time_starttransfer}\n' >> response-times.log &
    done
    sleep 1
done
# Wait for any in-flight requests to finish before reporting completion
wait
echo "Load test has been completed"
If all you need is to start a bunch of shell scripts in parallel, you can quickly create something of your own in Perl with fork, exec, and sleep.
#!/usr/bin/perl
# Start 1000 copies of script.sh, one per second
for $i (1..1000)
{
    if (fork == 0)
    {
        # child process: replace itself with the shell script
        exec ("script.sh");
        exit;   # only reached if exec fails
    }
    sleep 1;
}
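The same idea works in plain shell if Perl isn't to hand; a minimal sketch, assuming script.sh is the transaction you want to multiply:
#!/bin/bash
# Launch 1000 copies of script.sh, one per second, each in the background
for ((i = 1; i <= 1000; i++)); do
    ./script.sh &
    sleep 1
done
wait   # block until every launched copy has finished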
For anyone interested, I have written a Java tool to manage this for me. It references a few files to control how it runs:
1) Schedules File - defines various named lists of timings which control the lengths of sequential phases.
e.g. MAIN,120,120,120,120,120
This will result in a schedule named MAIN which has 5 phases, each 120 seconds long.
2) Transactions File - defines the transactions that need to run. Each transaction has a name, a command to call, a boolean controlling repetition, an integer controlling the pause between repetitions in seconds, a data file reference, the schedule to use, and per-phase increments.
e.g. Trans1,/path/to/trans1.ksh,true,10,trans1.data.csv,MAIN,0,10,0,10,0
This will run trans1.ksh repeatedly, with a pause of 10 seconds between repetitions, referencing the data in trans1.data.csv. During phase 1 it will increment the number of parallel invocations by 0, phase 2 will add 10 parallel invocations, phase 3 adds none, and so on - giving 0, 10, 10, 20 and finally 20 parallel workers. Phase times are taken from the schedule named MAIN.
3) Data Files - as referenced in the transaction file, this will be a CSV with a header. Each line of data will be passed to subsequent invocations of the transaction.
e.g.
HOSTNAME,USERNAME,PASSWORD
server1,jimmy,password123
server1,rodney,ILoveHorses
These get passed to the transaction scripts via environment variables (e.g. PASSWORD=ILoveHorses). A bit clunky, but workable.
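For illustration, a transaction script consuming those variables could be as simple as the sketch below; the CLI tool path and its flags are hypothetical placeholders, not part of the actual tool:
#!/bin/ksh
# trans1.ksh - hypothetical transaction script; HOSTNAME, USERNAME and
# PASSWORD are set in the environment from one row of trans1.data.csv
/path/to/cli-tool --host "$HOSTNAME" --user "$USERNAME" --password "$PASSWORD"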
My Java tool simply parses the config files and sets up a manager thread per transaction, which in turn takes care of creating and starting executor threads in accordance with the configuration. Managers add executors linearly so as not to overload the system all at once.
When it runs, it just reports every second on how many workers each transaction has running and which phase it's in.
It was a fun little weekend project; it's certainly no LoadRunner, and I'm sure there are some massive flaws in it that I'm currently blissfully unaware of, but it seems to do OK.
So in summary the answer here was to "roll ya own".