TL;DR at the end if one wishes to respond quickly.
This logic sounds nice; however, I don't have a sufficient tech background and experience to give a "definitive answer", so I'll wait for someone who does. I like the elegance of it, mathematically. ^^ Still, I can see a few concerns, based on my personal experience as a gamer, and on what little I think I know (or assume… ^^) about networks.
________
Consider the following example:
• A, B, C, D are regular positional checks (one every 0.3s)
• "exit" and "re-entry" are the key moments to determine whether or not a player might be hit by the spell that will occupy the red area.
• you can observe the deviation between a "perfect" escape (straight thin black line) and the actual path of an average player (curved thick black line).
This is a fairly realistic example of what a player does, I suppose: starting from A, you move loosely in a straight line, passing by B, then C, and finally come back close to the mob to keep fighting.
Now in this example, one exits about 0.5s after seeing the AoE, but fails to wait long enough and re-enters the AoE before the circle (or rather, the enemy cast bar) disappears, about 0.8s after seeing the AoE. Hence, one should be hit when the AoE fires, about 1s after the AoE appeared on one's screen (that's anything from 1.02s to 1.3s after the red circle appeared from the server's point of view, depending on one's internet latency).
If I understand correctly, you suggest that in addition to the regular positional checks (A…D), the client sends a timestamped additional check at the exact time of exiting (~0.5s), and subsequently at the exact time of re-entry, should it happen (here at ~0.8s). Right?
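If that's the idea, here's a minimal sketch (in Python, with invented names) of the messages I picture being exchanged in the example above; treat it as my reading of the proposal, not the actual protocol:

```python
# Sketch only: message shapes and timeline as I understand the proposal.
# All names (PositionCheck, EdgeEvent) and fields are invented for illustration.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PositionCheck:
    tick: str                  # regular server-paced check: "A", "B", "C" or "D" (every 0.3s)
    client_time: float         # client clock when the position was sampled
    position: Tuple[float, float]

@dataclass
class EdgeEvent:
    kind: str                  # extra client-prompted check: "exit" or "re-entry"
    client_time: float         # exact client-stamped crossing moment
    position: Tuple[float, float]

# Timeline of the example (client clock, red circle drawn at t = 0):
#   t = 0.0   AoE appears on the client's screen
#   t ~ 0.5   EdgeEvent("exit", 0.5, ...) sent
#   t ~ 0.8   EdgeEvent("re-entry", 0.8, ...) sent
#   t ~ 1.0   AoE fires (i.e. 1.02s to 1.3s after the server spawned it, depending on latency)
```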
________
Here are my concerns:
1 • The "simple math" (difference between A and "exit point") is not that easy, for you need twice the data currently exchanged: you're going to compare between the straight line theoretical minimum and the actual difference between A and the "exit" timestamp, but for this to happen the client needs to send both the "exit" timestamp and the timestamp of when it received A (and displayed the red area to the player) as well (or else you don't solve the latency problem between A on the server and A on the client).
2 • "Zigzags" may add a significant strain on the servers (here +50%; at least +25% in an ideal scenario, and it adds stress to the clients too —see below PS3 considerations in my 4th point).
This is unpredictable stress, because you don't know what players will do, and they might try (especially in PvP) to flirt with the limits (at the risk of being hit) to beat mobile opponents. Actually, the whole problem remains: in this server live-state model, everything you see is "in the past" on your client. Your solution doesn't change that fact; it merely compensates for it by interpolating (rough sketch after this point), which is probably the usual method, but that works much better in a client live-state model. Trying to compensate for that discrepancy ourselves, we already intuitively tend to re-enter former red areas ASAP, knowing that we're free of the consequences of the spell animation itself and that we've already lost a lot of time just waiting for the AoE indicator to appear (we try to make up for lost DPS, tank threat, etc.)
Never mind that it further breaks immersion, which is related but is yet another topic entirely.
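For what it's worth, here is how I picture the "interpolating" compensation mentioned above, as a tiny sketch; it's entirely my own guess at the technique, not something the game is confirmed to do:

```python
# Linear interpolation between two known positional checks, e.g. to estimate where a
# player "really" was at some timestamp between B and C. Field values are made up.

def interpolate_position(t: float, t0: float, p0: tuple, t1: float, p1: tuple) -> tuple:
    """Estimate the position at time t, given positions p0 at t0 and p1 at t1 (t0 < t <= t1)."""
    alpha = (t - t0) / (t1 - t0)
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

# Player at (10.0, 4.0) for check B (t = 0.3s) and (12.0, 4.0) for check C (t = 0.6s):
# interpolate_position(0.5, 0.3, (10.0, 4.0), 0.6, (12.0, 4.0))  -> (~11.33, 4.0)
```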
3 • Assuming this game does indeed use TCP instead of UDP, all checks must happen in order (no asynchronous resolution of requests or reconstruction of situations), which bottlenecks the whole succession of events (you can't perform check C before "exit" is received, for instance).
This might seem trivial, but how can the server know that a client-prompted "exit" check is about to happen? If "exit" arrives too close to C, it won't be taken into account (it would violate TCP order, and if causality is violated it would force a short rollback, which is probably way too much stress for this server model, and at the very least could cause rubber banding and freezes).
Now taking the "re-entry" point into account: the server might have sent a "clear" confirmation to the client (which must happen before C); then the client must send a timestamped re-entry check request, and we're back to order/causality issues before check D happens and is sent to the client. Without UDP, it must all happen in order; there's no reconstruction and no fixed display delay.
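Here is a toy illustration of that ordering problem, purely hypothetical logic, just to show why a late, earlier-stamped check forces a choice between ignoring it and rolling back:

```python
# Toy model: with a single ordered (TCP-like) stream, messages are resolved in arrival
# order. A client-stamped "exit" that arrives after C was already resolved, but carries
# an earlier timestamp, either gets dropped or forces a rollback of C's result.

last_resolved_time = 0.0   # timestamp of the latest check the server has resolved

def handle_message(kind: str, client_timestamp: float) -> str:
    global last_resolved_time
    if client_timestamp < last_resolved_time:
        # e.g. "exit" stamped at 0.5s arriving after C (0.6s) was resolved:
        # ignore it (player counts as hit) or roll back and re-resolve C and everything after.
        return "rollback_or_ignore"
    last_resolved_time = client_timestamp
    return "resolved_in_order"

# handle_message("C", 0.6)       -> "resolved_in_order"
# handle_message("exit", 0.5)    -> "rollback_or_ignore"
```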
All things considered, I think this is a strong argument for why you can't have server live-state and client live-state simultaneously (regardless of TCP/UDP); it's just not an option, because it would violate causality all over the place. Who gets the final word, client or server? It has to be one or the other. I think.
4 • A solution might be to delay the C & D validations for "some time" before sending them to the clients (perform the check, but wait "some time"), to make sure the client isn't about to send an additional check request; it may work, but then it's equivalent to adding "some time" of latency, which this game probably doesn't need.
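A rough sketch of what I mean by delaying the validations, assuming an arbitrary 100 ms grace window (all names and the window value are mine, and the window is effectively added latency):

```python
# Hold each resolved check back for a short grace window before confirming it to the
# client, in case a client-stamped "exit"/"re-entry" is still in flight. If such an
# event arrived with an earlier timestamp, the held check would have to be re-resolved
# first (not shown). The 0.1s value is an arbitrary assumption.
import heapq

GRACE_WINDOW = 0.100            # seconds each check result is withheld
pending = []                    # min-heap of (release_time, check_name)

def queue_check(check_name: str, server_time: float) -> None:
    """Resolve the check now, but don't send the result until the window has expired."""
    heapq.heappush(pending, (server_time + GRACE_WINDOW, check_name))

def release_due_checks(server_time: float) -> list:
    """Send out every check whose grace window has expired."""
    released = []
    while pending and pending[0][0] <= server_time:
        released.append(heapq.heappop(pending)[1])
    return released

# queue_check("C", 0.6); queue_check("D", 0.9); release_due_checks(0.75)  -> ["C"]
```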
Also, more client requests = more stress, as stated already, and from what we read from the developers, the PS3 client is already pushing the limits of the console. It probably can't perform these additional check requests without impacting the display of the flow of events (rubber banding issues are likely to happen). And actually this lack of PS3 resources might explain why they went with server live-state (the console probably just can't be relied upon for authority). I'd very much like to hear from Sinth (or anyone else versed in MMOG client/server sync) about this.
________
TL;DR: This method of yours addresses the discrepancy between "server reality" and "client delay" at the heavy cost of sending (probably far too much) additional data, which means that much more unpredictable stress for (some) clients and servers. A ping of 200-300 ms, not rare when there's only one datacenter for each of Earth's hemispheres (and, furthermore, none for the southern one), may be enough to break the whole solution you propose, since such a ping is potentially longer than the 0.3s interval of the regular (TCP-ordered) positional checks already in place in this server live-state model. That puts us back to causality issues just for the regular server live-state checks (A…D) to happen, short of (too) many rollbacks, which would probably cause other issues such as rubber banding on the most latency-impacted or resource-limited clients.
I'd refer to this old post I made on the issue, before I knew about live-states: it's not my opinion, I was just sharing interesting sources (a programmer named Gabriel Gambetta, and a Valve doc). I suppose it provides an illustration of timestamp methodology and of the discrepancy between the server's and the client's points of view, and I now assume those sources describe a client live-state model in which the client has the authority (only to be confirmed or invalidated by the servers).
I really don't know if this post made sense, which is why I tried to put the "concerns" in bold; you can probably ignore the "explanations" I provide, as they must be deeply flawed. haha ^^;