macOS ◆ xterm-256color ◆ bash

This recording demonstrates rough benchmarking of cloudigrade’s concurrent usage API with a few different sets of backing data. The recording was captured entirely in real time; no “idle time skip” was applied to it.

In this demo, I aimed to recreate the conditions of two known large real-world cloudigrade AWS accounts that we observed during the 2018-2019 pilot release. The “jnewton” and “laska” accounts were both large enough to force us to rethink some of our assumptions and implement optimizations, and they serve as useful points of reference.

Although we have screenshots of the graphs presented for those two accounts, we no longer have their raw underlying data. For a single month, the screenshots show only the total number of RHEL images, RHEL instances, and hours used by those instances; they include no representation of concurrent usage. To compensate, I generate pseudorandom activity that matches the known totals in three different ways, each producing a different rate of concurrency. This yields six data sets:

  • jnewton-like data with a small number of events per instance
  • jnewton-like data with a medium number of events per instance
  • jnewton-like data with a large number of events per instance
  • laska-like data with a small number of events per instance
  • laska-like data with a medium number of events per instance
  • laska-like data with a large number of events per instance
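The generation strategy above can be sketched roughly as follows. This is an illustrative stand-in, not cloudigrade’s actual data-generation code; the function name, the one-month window, and the hour-granularity timestamps are all assumptions. The key idea is that holding the total count of instances fixed while varying the number of power-on/power-off events per instance changes how much runtime overlaps, and therefore the concurrency profile.

```python
import random
from datetime import datetime, timedelta

def generate_events(instance_count, events_per_instance, seed=0):
    """Generate pseudorandom power-on/power-off event pairs for each instance.

    Hypothetical sketch: each "event" here is an on/off pair placed at random
    hour offsets within a single month. A small events_per_instance yields a
    few long runs; a large value yields many short runs.
    """
    rng = random.Random(seed)  # seeded for reproducible data sets
    month_start = datetime(2019, 3, 1)
    month_hours = 31 * 24
    events = []
    for instance_id in range(instance_count):
        # Draw an even number of distinct hour offsets so every power-on
        # has a matching later power-off.
        offsets = sorted(rng.sample(range(month_hours), events_per_instance * 2))
        for on_hour, off_hour in zip(offsets[0::2], offsets[1::2]):
            events.append(
                (instance_id, "power_on", month_start + timedelta(hours=on_hour))
            )
            events.append(
                (instance_id, "power_off", month_start + timedelta(hours=off_hour))
            )
    return events
```

For example, a “jnewton-like” set with a medium event density might call `generate_events(instance_count, events_per_instance=10)`, while the “large” variant would raise `events_per_instance` and keep the instance count the same.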

A very important contextual factor: everything here runs locally on a 2015 MacBook Pro, including the PostgreSQL database, cloudigrade itself, and the shell running these commands. A more realistic benchmark would use AWS RDS for the database and an openshift.com deployment for cloudigrade. I may revisit this and attempt it under those conditions later.
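For context on what the API under test is measuring: “concurrent usage” is essentially the peak number of instances running at the same moment. A minimal sweep-line sketch (again, not cloudigrade’s actual implementation) over the kind of power-on/power-off events generated above looks like:

```python
def peak_concurrency(events):
    """Return the maximum number of simultaneously running instances.

    events: iterable of (instance_id, kind, timestamp) tuples, where kind is
    "power_on" or "power_off". Sweep chronologically, incrementing a running
    counter on each power-on and decrementing on each power-off.
    """
    deltas = sorted(
        (ts, 1 if kind == "power_on" else -1) for _, kind, ts in events
    )
    current = peak = 0
    for _, delta in deltas:
        current += delta
        peak = max(peak, current)
    return peak
```

Sorting on `(timestamp, delta)` means a power-off at the same instant as a power-on is processed first (since -1 < 1), so back-to-back handoffs do not inflate the peak. The different event densities in the six data sets exercise exactly this kind of calculation at different scales.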