
Originally Posted by
einwol
is it likely that SE will respond to so many people having 90k issues. The solution may be a slight change to our routers, but I see no official document on how to troubleshoot error codes on our end. I'd be happy to try things with some guidance.
help us SE, you're our only hope
It's been laid out many times in these connectivity threads... by myself and others. This post from Blizzard about testing the route to their servers is often referenced as well:
Running a Trace Route
It usually isn't so much an issue with your router as it is with how your ISP and/or their routing partners (Level3, Cogent, TATA, TiNet/SPA/GTT, or Verizon Business/ALTER.net) route you to Ormuco (SE's ISP). Granted, there are times when it is something with SE's network or server configs--but the majority of the time we find signs of over-congested or flat-out stalled hops along the route.
This is why using a VPN has been so successful for so many---it changes where your traffic enters the internet, which lets you try to find a cleaner path to Ormuco.
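If you want to run that comparison yourself, here's the basic idea (Windows commands, since that's what my output below is from; the file names are just examples, and swap in whichever lobby server your data center uses):
Code:
REM Trace over your normal connection and save the output to a file
tracert neolobby02.ffxiv.com > trace_isp.txt

REM Connect the VPN, then trace again to a second file
tracert neolobby02.ffxiv.com > trace_vpn.txt

REM Compare the two and look for hops where latency jumps or requests time out
fc trace_isp.txt trace_vpn.txt
You can add -d if the reverse DNS lookups make it take forever, but then you lose the hop names like the ones in my traces below.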
Edit:
Just for a simple demonstration, here is my route tonight through my ISP (keep in mind, I've been working with them on cleaning things up... it used to be a nightmare before):
Code:
C:\Windows\System32>tracert neolobby02.ffxiv.com
Tracing route to neolobby02.ffxiv.com [199.91.189.74]
over a maximum of 30 hops:
1 2 ms <1 ms 1 ms LPTSRV [10.10.100.1]
2 35 ms 27 ms 47 ms cpe-75-176-160-1.sc.res.rr.com [75.176.160.1]
3 26 ms 30 ms 30 ms cpe-024-031-198-005.sc.res.rr.com [24.31.198.5]
4 15 ms 15 ms 14 ms clmasoutheastmyr-rtr2.sc.rr.com [24.31.196.210]
5 27 ms 26 ms 26 ms be33.drhmncev01r.southeast.rr.com [24.93.64.180]
6 33 ms 36 ms 33 ms bu-ether35.asbnva1611w-bcr00.tbone.rr.com [107.14.19.42]
7 29 ms 42 ms 31 ms 0.ae2.pr1.dca10.tbone.rr.com [107.14.17.204]
8 54 ms 47 ms 46 ms ix-17-0.tcore2.AEQ-Ashburn.as6453.net [216.6.87.149]
9 89 ms 92 ms 88 ms if-2-2.tcore1.AEQ-Ashburn.as6453.net [216.6.87.2]
10 92 ms 93 ms 91 ms 64.86.85.1
11 75 ms 75 ms 80 ms if-10-2.tcore1.TTT-Toronto.as6453.net [64.86.32.33]
12 90 ms 97 ms 98 ms if-9-9.tcore1.TNK-Toronto.as6453.net [64.86.33.25]
13 94 ms 92 ms 93 ms if-7-2.tcore1.W6C-Montreal.as6453.net [66.198.96.61]
14 91 ms 91 ms 88 ms 66.198.96.50
15 89 ms 96 ms 92 ms 192.34.76.2
16 87 ms 92 ms 94 ms 199.91.189.234
17 94 ms 96 ms 93 ms 199.91.189.74
And here is a trace to the same server, from the same laptop on the same wifi, when using TunnelBear to their US location (looks like it is in the NY area):
Code:
Tracing route to neolobby02.ffxiv.com [199.91.189.74]
over a maximum of 30 hops:
1 78 ms 78 ms 81 ms 172.18.11.1
2 73 ms 73 ms 85 ms 107.170.26.254
3 86 ms 77 ms 75 ms 192.241.164.233
4 78 ms 78 ms 72 ms 66.110.96.21
5 107 ms 108 ms 101 ms 63.243.128.121
6 104 ms 106 ms 103 ms if-5-5.tcore1.NYY-New-York.as6453.net [216.6.90.5]
7 104 ms 107 ms 132 ms if-11-2.tcore2.NYY-New-York.as6453.net [216.6.99.1]
8 120 ms 107 ms 104 ms if-12-6.tcore1.CT8-Chicago.as6453.net [216.6.99.46]
9 109 ms * 102 ms if-22-2.tcore2.CT8-Chicago.as6453.net [64.86.79.1]
10 101 ms 101 ms 107 ms if-3-2.tcore1.W6C-Montreal.as6453.net [66.198.96.45]
11 114 ms 114 ms 105 ms 66.198.96.50
12 99 ms 105 ms 97 ms 192.34.76.2
13 141 ms 161 ms 162 ms 199.91.189.234
14 105 ms 105 ms 109 ms 199.91.189.74
Trace complete.
And this is when I tunnel to their Canada location (it goes to OVH, which has a weird peering setup and shoots me back to the US first--which goes to show that the nearest VPN location is not always the best choice... and the same can be said for the BGP routing and peering choices the ISPs make):
Code:
Tracing route to neolobby02.ffxiv.com [199.91.189.74]
over a maximum of 30 hops:
1 75 ms 84 ms 73 ms 172.18.12.1
2 140 ms 75 ms 73 ms 192.99.8.252
3 73 ms 74 ms 71 ms bhs-g2-a9.qc.ca [198.27.73.95]
4 80 ms 85 ms 88 ms mtl-2-6k.qc.ca [198.27.73.6]
5 * * 88 ms chi-2-6k.il.us [198.27.73.180]
6 * * * Request timed out.
7 224 ms 228 ms 95 ms if-22-2.tcore2.CT8-Chicago.as6453.net [64.86.79.1]
8 100 ms 93 ms 91 ms if-3-2.tcore1.W6C-Montreal.as6453.net [66.198.96.45]
9 100 ms 101 ms 103 ms 66.198.96.50
10 100 ms 104 ms 102 ms 192.34.76.2
11 102 ms 100 ms 103 ms 199.91.189.234
12 98 ms 101 ms 99 ms 199.91.189.74
Trace complete.
And last but not least is their tunnel to the UK (Netherlands, then London)...just for the helluvit:
Code:
Tracing route to neolobby02.ffxiv.com [199.91.189.74]
over a maximum of 30 hops:
1 116 ms 121 ms 120 ms 172.18.12.1
2 121 ms 118 ms 122 ms 46.101.0.254
3 123 ms 121 ms 119 ms 5.101.111.229
4 117 ms 119 ms 121 ms xe-0-0-0-17.r02.londen01.uk.bb.gin.ntt.net [83.231.181.125]
5 120 ms 122 ms 123 ms ae-6.r02.londen03.uk.bb.gin.ntt.net [129.250.3.2]
6 117 ms 122 ms 123 ms ix-5-0.tcore1.LDN-London.as6453.net [195.219.83.185]
7 220 ms 218 ms 209 ms if-17-2.tcore1.L78-London.as6453.net [80.231.130.129]
8 242 ms 215 ms 215 ms if-2-2.tcore2.L78-London.as6453.net [80.231.131.1]
9 215 ms 217 ms 217 ms if-20-2.tcore2.NYY-New-York.as6453.net [216.6.99.13]
10 218 ms 219 ms 215 ms if-12-6.tcore1.CT8-Chicago.as6453.net [216.6.99.46]
11 218 ms 214 ms 216 ms if-22-2.tcore2.CT8-Chicago.as6453.net [64.86.79.1]
12 220 ms 212 ms 218 ms if-3-2.tcore1.W6C-Montreal.as6453.net [66.198.96.45]
13 218 ms 220 ms 214 ms 66.198.96.50
14 214 ms 214 ms 340 ms 192.34.76.2
15 219 ms 218 ms 213 ms 199.91.189.234
16 222 ms 217 ms 217 ms 199.91.189.74
Trace complete.
Each of those traces was from the same starting point to the same ending point, on the same system, with the same local router/modem combo--the only thing different was the use of a VPN to alter how things were routed. And, as you can see, the results varied considerably based on how it was routed. For some odd reason, they all still ultimately used TATA (as6453) as the last peer towards Ormuco. Often you'll see it switch to Level3 or Cogent instead--which is sometimes exactly what's needed: moving to an alternate peer because the current one's exchanges are borked from heavy traffic.
Notice how they all ultimately wind up at 66.198.96.50 and feed into 192.34.76.2 (the VPN routes reached it via 66.198.96.45, while my ISP's route came in through 66.198.96.61). That is the hand-off from TATA into Ormuco (SE's ISP). From four different locations on two different continents, they ALL got shunted through the same third-party ISP and the SAME exchange point. This happens often... and if there isn't enough bandwidth on either side of that exchange, it creates an opportunity for a major bottleneck that can quickly spiral out of control and result in congestive failures. That isn't a choice made by SE---that is the ISPs' routing policies making those decisions.
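If you want to see whether that hand-off is actually struggling while you're getting 90k'd, you can poke at it directly (the IPs below are just the ones from my traces--use whatever shows up at the TATA-to-Ormuco hand-off in your own trace, and keep in mind some routers de-prioritize pings aimed at them, so it's the pattern from that hop onward that matters, not one bad number):
Code:
REM Sustained ping to the TATA side of the hand-off seen in my traces
ping -n 50 66.198.96.50

REM And to the first Ormuco hop on the other side of it
ping -n 50 192.34.76.2

REM pathping shows per-hop packet loss along the whole route (takes a few minutes)
pathping -q 50 neolobby02.ffxiv.com
If the loss or the big latency spikes only start at or after that exchange, it points at congestion at or beyond the peering point rather than anything on your side of the route.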
The same applies to your ISP. They decide not only how you are routed within their own network (which they CAN alter if they want to), but also which peering partner carries you from their network to SE's ISP's network---and they usually have multiple options for who that partner is. If they can't work out the issues with the current peer, they can switch you to another to see if it improves.