Well, I'm very much of the school of game design thinking where uninformed decisions are Not Fun. But I don't think SE means for this particular system to deceive anyone; more likely they haven't been able to give this issue quite as much thought as the most dedicated theorycrafters have, and complicating the code to fix a difference that can't really be noticed except by careful measurement is a hard sell as a good use of development time.
Now, I have a tendency to ramble on, so I'm really trying not to make this post into Sunny's Heterodox School of Game Design (trust me, the first day of classes alone gets people angry), but it's gonna get close here.
For starters, there isn't really such a thing as "display it when needed" in modern graphics or game engines; if "frames per second" means anything in your rendering setup, you can't just push changes to the screen the instant they happen. A proper answer is a lot more complicated than this, but fundamentally, you're queuing up frames so that they show up in a coherent way and within a reliable time period.
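To make that concrete, here's a minimal sketch of the idea--nothing resembling SE's actual renderer, and every name and number is made up--showing frames being queued and then presented on a fixed cadence rather than "whenever something changes":

```cpp
// Toy sketch only: frames get produced into a small queue whenever the engine
// finishes them, but they're shown on a fixed presentation cadence.
#include <chrono>
#include <queue>
#include <thread>

struct Frame { /* image data, timestamp, etc. would live here */ };

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_interval = std::chrono::milliseconds(16); // ~60 FPS target

    std::queue<Frame> pending;          // frames rendered ahead of presentation
    auto next_present = clock::now();

    for (int i = 0; i < 300; ++i) {
        pending.push(Frame{});                 // "render" whenever we get to it...
        std::this_thread::sleep_until(next_present);
        if (!pending.empty()) pending.pop();   // ...but present only on the tick
        next_present += frame_interval;
    }
}
```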
I put it this way because it's analogous to how most systems in game engines are architected. Rather than handling every possible event the instant it comes in, it is far more efficient--both for memory access patterns and for keeping the game's logic manageable--for each engine system to run at its own rate (independent of, yet in concert with, the other systems) and process its queued jobs on its own schedule.
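As a rough illustration (hypothetical names, not any real engine's code), "each system ticks at its own rate and drains its own job queue" might look something like this:

```cpp
// Toy sketch: each system banks elapsed time and only does work when its own
// fixed-rate tick comes due, instead of reacting to every event immediately.
#include <functional>
#include <vector>

struct System {
    double rate_hz;                     // how often this system ticks
    double accumulator = 0.0;           // time banked since the last tick
    std::function<void()> process_jobs; // drains this system's queued work

    void advance(double dt_seconds) {
        accumulator += dt_seconds;
        const double step = 1.0 / rate_hz;
        while (accumulator >= step) {   // tick zero or more times this frame
            process_jobs();
            accumulator -= step;
        }
    }
};

void game_loop_step(std::vector<System>& systems, double dt) {
    for (auto& s : systems) s.advance(dt);  // independent rates, one shared clock
}
```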
Your view of performance seems to be that it should scale with CPU processing speed? That's intuitive, and it's true when everything fits in a (very, very small) portion of memory close to the processor, but it's an increasingly misleading picture these days. Memory access is just so much slower than CPU processing, and the gap has only widened over the years. It's part of why load times are still a thing in games even though the hardware is so much better, and it's at least two of the reasons why "globals are bad" in programming--and it's at this point I'm going to cut myself off instead of going overly technical, but I hope you get the picture: games do things in batches for performance reasons, because they're trying to do millions of different things, but computers run fastest when they do millions of the same thing. Therefore, there must be compromises and shortcuts. You are looking at exactly such a situation here.
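If you want a picture of what "millions of the same thing" means in practice, here's a toy sketch (invented names, nothing from the game) of the kind of batched, contiguous-memory update that modern hardware rewards:

```cpp
// Toy sketch: one pass over tightly packed arrays, applying the same operation
// to every element. This is cache-friendly in a way that chasing scattered,
// individually updated objects is not.
#include <cstddef>
#include <vector>

struct Position { float x, y, z; };

void integrate(std::vector<Position>& positions,
               const std::vector<Position>& velocities, float dt) {
    for (std::size_t i = 0; i < positions.size(); ++i) {
        positions[i].x += velocities[i].x * dt;
        positions[i].y += velocities[i].y * dt;
        positions[i].z += velocities[i].z * dt;
    }
}
```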
Now, they could, at negligible processing cost, add or subtract time from each GCD so that the total matches a theoretical time, but changing timing components adds complexity to the game logic and may have unintended side effects, so it's not necessarily as trivial as it may sound. And it already appears as though they wanted the simplest possible implementation of input handling on their end, so take that as you will....
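Just to show the shape of that idea (purely hypothetical numbers, not the game's actual logic), a drift-corrected GCD timer might look something like this, where each new timer is adjusted so the running total stays on the theoretical schedule:

```cpp
// Toy sketch: instead of always starting a fixed-length GCD, the next timer is
// shortened or lengthened so accumulated error never exceeds about one "tick".
#include <cstdio>

int main() {
    const double gcd_theoretical = 2.41;  // seconds; example value only
    double scheduled_total = 0.0;         // where the rotation "should" be
    double actual_total = 0.0;            // where it actually ends up

    for (int i = 0; i < 5; ++i) {
        scheduled_total += gcd_theoretical;
        // The correction: next timer = theoretical total minus what has elapsed.
        double next_gcd = scheduled_total - actual_total;
        double tick_error = 0.03;         // pretend rounding from server granularity
        actual_total += next_gcd + tick_error;
        std::printf("GCD %d: timer %.3f s, drift %.3f s\n",
                    i + 1, next_gcd, actual_total - scheduled_total);
    }
}
```

The point of the sketch is just that the drift stays bounded instead of accumulating; doing this for real would mean touching timing code that a lot of other systems depend on, which is exactly the complexity concern above.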
Sorry, I never managed to piece that together, or else this thread might've existed many months ago. I started looking into it shortly after figuring out the crit coefficients, but didn't get anywhere. Plus, the point-by-point testing is not a task I relish....
If my hypothesis is correct and the same method as in the OP is used, I think that's what'll happen, yeah.