macOS ◆ xterm-256color ◆ bash

Make concurrent API fast again: ConcurrentUsage object redux

This is a followup to previous recording 345776 because I’ve made some additional code changes since then, and I wasn’t sure how they might affect a long test like this. Ultimately, any effect was minor or negligible. New DB setup for this run is 346808.

See also the old concurrent benchmark demo before our recent refactoring to split on arch/role/SLA.

This demo attempts to reproduce the conditions of the larger “laska” account (~45000 instances active through the previous month) and uses the same process as that previous demo to populate the initial data. Because populating the data also takes a very long time, I captured a separate recording of only the data generation and simply reused its DB dump for this recording.

Note: At 2:23, the comment at the top of the screen incorrectly says “old code”; it is actually using the new code at this point.

Before reintroducing ConcurrentUsage

Requests to concurrent API with no arguments:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:  1495 1554  37.4   1553    1613
Waiting:     1494 1554  37.4   1553    1613
Total:       1495 1555  37.4   1553    1613
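For reference, the min/mean/sd/median/max columns in these Connection Times tables can be reproduced from the raw per-request latencies. A minimal sketch using the `statistics` module, with made-up sample values (not the actual measurements behind this run):

```python
import statistics

# Hypothetical per-request total latencies in ms (illustrative only,
# not the actual samples behind the table above).
latencies_ms = [1495, 1512, 1539, 1553, 1561, 1570, 1588, 1601, 1607, 1613]

summary = {
    "min": min(latencies_ms),
    "mean": statistics.mean(latencies_ms),
    "sd": statistics.stdev(latencies_ms),  # sample standard deviation
    "median": statistics.median(latencies_ms),
    "max": max(latencies_ms),
}
print(summary)
```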

Requests to concurrent API with start_date of yesterday:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing: 35057 37026 975.7  37314   38180
Waiting:    35057 37026 975.7  37313   38180
Total:      35058 37026 975.7  37314   38180

Requests to concurrent API with start_date of 15 days ago:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing: 172192 173020 2154.1 172296  179135
Waiting:    172192 173020 2153.9 172295  179134
Total:      172192 173020 2154.1 172296  179135

After reintroducing ConcurrentUsage

Requests to concurrent API with no arguments:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    10  163 481.7     11    1534
Waiting:       10  163 481.6     10    1533
Total:         10  163 481.7     11    1534

Requests to concurrent API with start_date of yesterday:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    11 3486 10984.2     13   34748
Waiting:       11 3486 10984.2     13   34748
Total:         11 3486 10984.2     13   34748

Requests to concurrent API with start_date of 15 days ago:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    18 18481 58381.7     19  184638
Waiting:       18 18480 58381.6     19  184637
Total:         18 18481 58381.7     19  184638

Conclusions

This demo shows that the first request is still just as slow as before, exactly as expected, but all subsequent requests return on the order of 10 ms.

Additional captured output is here: