Install K6 performance testing
k6 is a developer-centric, free and open-source load testing tool built for making performance testing a productive and enjoyable experience.
Using k6, you'll be able to catch performance regressions and problems earlier, allowing you to build resilient systems and robust applications.
Use cases
k6 users are typically developers, QA engineers, and DevOps engineers. They use k6 to test the performance of APIs, microservices, and websites. Common k6 use cases are:
Load Testing
k6 is optimized for minimal consumption of system resources and designed for running tests with high load (spike, stress, and endurance tests) in pre-production and QA environments.
Performance monitoring
k6 provides great primitives for code modularization, performance thresholds, and automation. These features make it an excellent choice for performance monitoring: you can run tests with a small amount of load to continuously monitor the performance of your production environment.
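For example, a monitoring-style test might use a handful of virtual users over a long duration, with thresholds that fail the run when performance degrades. This is a minimal sketch; the endpoint, load level, and threshold limits are illustrative, and the http_req_failed metric assumes a reasonably recent k6 version:

```javascript
import http from 'k6/http'
import { sleep } from 'k6'

// Illustrative values: a small, steady load suitable for continuous monitoring.
export const options = {
  vus: 5,           // a handful of virtual users, not a stress load
  duration: '1h',
  thresholds: {
    // Fail the run if the error rate or latency drifts out of bounds.
    http_req_failed: ['rate<0.01'],
    http_req_duration: ['p(95)<1000']
  }
}

export default function () {
  http.get('https://gamma.voluntarily.nz/api/health')
  sleep(10) // roughly one request per virtual user every 10 seconds
}
```

Because thresholds make k6 exit with a non-zero code on failure, a script like this can be scheduled from CI to act as a lightweight synthetic monitor.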
Installation Instructions
Follow install instructions here: https://k6.io/docs/getting-started/installation
Writing tests
This sample test calls the api/health endpoint on gamma.voluntarily.nz for 5 minutes, using 200 virtual users that each pause one second between requests.
The check block verifies that the response is valid and that the transaction time is below a target value (here 1 second).
import http from 'k6/http'
import { check, sleep } from 'k6'

const host = 'https://gamma.voluntarily.nz'

// 200 virtual users for 300 seconds (5 minutes).
export const options = {
  vus: 200,
  duration: '300s'
}

export default function () {
  const res = http.get(`${host}/api/health`)
  // Record pass/fail for each response without aborting the test.
  check(res, {
    'status was 200': r => r.status === 200,
    'transaction time OK': r => r.timings.duration < 1000
  })
  sleep(1) // pause one second between requests per virtual user
}
Running Tests
K6 performance tests are stored in x/perf/k6. Run one with:
k6 run x/perf/k6/api_health.js
Typical output (this sample is from a 30-second run):
[k6 ASCII-art logo]
execution: local
output: json=x/perf/k6/output/api_health.json
script: x/perf/k6/api_health.js
duration: 30s, iterations: -
vus: 200, max: 200
done [==========================================================] 30s / 30s
✓ status was 200
✓ transaction time OK
checks.....................: 100.00% ✓ 10516 ✗ 0
data_received..............: 3.4 MB 114 kB/s
data_sent..................: 551 kB 18 kB/s
http_req_blocked...........: avg=36.89ms min=0s med=1µs max=1.08s p(90)=1µs p(95)=1µs
http_req_connecting........: avg=3.49ms min=0s med=0s max=120.69ms p(90)=0s p(95)=0s
http_req_duration..........: avg=119.72ms min=54.47ms med=83.04ms max=849.47ms p(90)=243.28ms p(95)=344.75ms
http_req_receiving.........: avg=383.49µs min=36µs med=269µs max=15.28ms p(90)=913µs p(95)=1.11ms
http_req_sending...........: avg=50.17µs min=20µs med=42µs max=795µs p(90)=73µs p(95)=97µs
http_req_tls_handshaking...: avg=26.3ms min=0s med=0s max=782.27ms p(90)=0s p(95)=0s
http_req_waiting...........: avg=119.28ms min=54.14ms med=82.66ms max=849.07ms p(90)=241.8ms p(95)=343.65ms
http_reqs..................: 5258 175.265303/s
iteration_duration.........: avg=1.16s min=1.05s med=1.08s max=2.85s p(90)=1.26s p(95)=1.36s
iterations.................: 5078 169.26535/s
vus........................: 200 min=200 max=200
vus_max....................: 200 min=200 max=200
Key metrics are:
http_req_duration..........: avg=119.72ms
iterations.................: 5078 169.26535/s
This means the test requests took 120 ms on average, as expected, and that we were able to sustain about 170 requests per second without errors.
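That pass/fail judgment is made by reading the output manually; k6 can also enforce it automatically with thresholds, which make the run exit with a non-zero code when a metric is out of bounds. A sketch, with illustrative limits chosen to match the checks above:

```javascript
export const options = {
  vus: 200,
  duration: '300s',
  thresholds: {
    // Illustrative limits: fail the run (and a CI job) if latency regresses
    // or more than 1% of the checks above fail.
    http_req_duration: ['avg<200', 'p(95)<1000'],
    checks: ['rate>0.99']
  }
}
```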
Note that the deployment uses autoscaling, so under load it may ramp the number of servers available from 1 to 10. An initial test run might therefore return a poor result, possibly even errors or slow responses, but when run a second time more servers will be running and the test should pass.
To avoid this, longer-running tests should ramp up the number of requests slowly, or autoscaling should be disabled and the service set to a fixed number of servers.
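A gradual ramp-up can be expressed with the stages option in place of a fixed vus/duration pair, giving the autoscaler time to add servers before full load arrives. This is a sketch; the stage durations and targets are illustrative:

```javascript
export const options = {
  stages: [
    { duration: '2m', target: 50 },   // warm up gently so autoscaling can react
    { duration: '2m', target: 200 },  // ramp to full load
    { duration: '5m', target: 200 },  // sustain full load for measurement
    { duration: '1m', target: 0 }     // ramp back down
  ]
}
```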