
Make concurrent API fast again: ConcurrentUsage object redux

See also the old concurrent benchmark demo, recorded before our recent refactoring to split on arch/role/SLA.

This demo attempts to reproduce the conditions of the larger “laska” account (~45,000 instances active through the previous month) and uses the same process as that previous demo to populate the initial data. Because populating the data also takes a very long time, I captured a separate recording of only the data generation and simply reused its DB dump for this recording.

Note: At 2:28, the comment at the top of the screen incorrectly says “old code”; it is actually using the new code at this point.

Before reintroducing ConcurrentUsage

Requests to concurrent API with no arguments:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing: 16073 16503 510.5  16253   17442
Waiting:    16073 16502 510.5  16252   17442
Total:      16073 16503 510.5  16253   17442

Requests to concurrent API with start_date of yesterday:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing: 34559 35188 1132.3  34672   37391
Waiting:    34558 35188 1132.3  34672   37391
Total:      34559 35188 1132.3  34672   37391

Requests to concurrent API with start_date of 15 days ago:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing: 191448 191760 186.9 191771  192021
Waiting:    191447 191760 186.9 191770  192020
Total:      191448 191760 186.9 191771  192021

After reintroducing ConcurrentUsage

Requests to concurrent API with no arguments:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    10 1595 5010.4     11   15855
Waiting:        9 1595 5010.4     10   15855
Total:         10 1595 5010.4     11   15855

Requests to concurrent API with start_date of yesterday:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    10 1904 5988.3     11   18947
Waiting:       10 1904 5988.2     10   18947
Total:         10 1905 5988.3     11   18948

Requests to concurrent API with start_date of 15 days ago:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    18 19226 60733.8     21  192078
Waiting:       18 19226 60733.8     20  192077
Total:         18 19226 60733.8     21  192078
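For a rough sense of scale, the medians in the two 15-day tables above imply roughly a four-orders-of-magnitude improvement for warm requests; a quick back-of-the-envelope check:

```python
# Rough speedup implied by the median totals in the tables above
# (15-day window): warm requests drop from ~192 s to ~21 ms.
before_median_ms = 191_771  # median total before reintroducing ConcurrentUsage
after_warm_median_ms = 21   # median total after, with results already stored
speedup = before_median_ms / after_warm_median_ms
print(f"warm-request speedup: ~{speedup:,.0f}x")  # ~9,132x
```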

Conclusions

One can infer from this demo that the first request for a given period is still just as slow as before, which is exactly what we expected, but all subsequent requests return on the order of 10 ms.

Additional captured output is here: