When looking at SpeedCurve's synthetic monitoring product, you pick a package of monthly "checks" that you spend against a combination of URLs, devices and locations. The standard packages come with 25,000 or 50,000 checks, but how you use them will vary hugely depending on the number of pages, how many devices and locations you test from, and how frequently you want to check them.
SpeedCurve's pricing page includes a few examples depending on your focus: a responsive redesign covering multiple pages and devices, competitive benchmarking across a large number of sites, or global monitoring that emphasises the locations you run your tests from.
There are already detailed instructions around checks in the SpeedCurve documentation, but to help me understand these different combinations in more detail I created a simple budget calculator Google Sheet. It lets me configure the various variables to get a breakdown of how many pages I can test, covering both pages under regular monitoring and pages tested after a deployment.
Synthetic check variables
To work out how many pages you can test, take the number of checks you've got and divide it by the values you've chosen for the various variables. If you also want to trigger ad-hoc tests when you deploy, you'll need to allow for those too – the spreadsheet includes some fields for configuring this. This is the general calculation:
[number of checks] / [number of runs per day] / [checks per run] / [locations] / [devices].
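As a minimal sketch of that calculation (the function name and figures are my own illustration, not part of SpeedCurve's product, and it assumes your checks are a monthly allowance with the pages tested every day of the month):

```python
def pages_testable(monthly_checks, runs_per_day, checks_per_run,
                   locations, devices, days_per_month=30):
    """Rough estimate of how many pages a monthly check budget covers."""
    # Checks one page consumes over a month of regular monitoring.
    checks_per_page = (runs_per_day * days_per_month
                       * checks_per_run * locations * devices)
    return monthly_checks // checks_per_page

# e.g. a 50,000-check package, 1 run per day, 3 checks per run,
# tested from 2 locations on 2 devices:
print(pages_testable(50_000, 1, 3, 2, 2))  # → 138
```

Reserving some of the budget for post-deployment tests, as the spreadsheet does, just means subtracting that allowance from `monthly_checks` before dividing.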
Number of runs per day
For your synthetic monitoring you need to decide at which time of day you want the tests to run, and whether you want them to run multiple times per day. If your pages are fairly static and don't change much from day to day, you may only need to test once a day, whereas if your content changes a lot throughout the day you may want multiple runs per day.
Checks per run
When it comes to performance testing, one of the biggest challenges is the inherent variability between test runs. One of the ways SpeedCurve mitigates this is by loading each URL multiple times and taking the median result, so that any particularly slow or fast runs don't skew your results too much.
Each test requires a minimum of 3 checks, and for most people this is probably reasonable. If you still see a lot of variance, you can increase this further.
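To illustrate why the median helps (the timings below are made up, not real test data), a single unusually slow run out of three has no effect on the reported result:

```python
from statistics import median

# Three checks of the same URL, in milliseconds; one run hit a slow outlier.
runs = [1850, 1900, 4200]

# The median ignores the outlier entirely, where a mean would be dragged up.
print(median(runs))  # → 1900
```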
Locations
When running your checks, the location they are run from will make a difference. Ideally, you should be running these from locations that mirror the ones your customers are using. SpeedCurve provides 15 different locations around the world.
Devices
SpeedCurve comes with a range of desktop browsers and emulated phone and tablet browsers, along with the capability to define any custom browser profiles you'd like, including their network conditions.
Using multiple different browsers is a really useful way of understanding how performance varies on different devices. Try configuring a range from modern mobiles on fast connections through to lower-end devices with slower connections.
I found this spreadsheet to be a useful way of experimenting with the different SpeedCurve configurations, and it proved a great starting point when getting set up. Hopefully sharing it will help you understand how to really make the most of the checks you've got available.
Cover photo by Ashraf Ali on Unsplash.