  1. #26
    Marishi-Ten
    Join Date: Aug 2013
    Location: Gridania
    Posts: 332
    Character: Marishi Ten
    World: Diabolos
    Main Class: Weaver Lv 50
    Quote Originally Posted by maxcer0081 View Post
    Can you possibly give us an update on the 90000 error causing this problem?
    I'm linking a post I made in another thread for more visibility:

    Quote Originally Posted by Marishi-Ten View Post
    I too am having this issue. It looks like the hop path from your ISP to the endpoint servers may have changed. Normally, I was routed through the Seattle, Chicago, and Montreal switches.

    Pulling the latest tracert, though, I can see that I'm now being routed along an entirely different path (though it's the same company): Seattle, then California, then New York, and finally Montreal. The route change looks to be the main driver and source of my latency. This changeover must have occurred during the one-hour maintenance last night, or some time between 4 AM and 10 AM MST. I've provided the tracert and ping tests for verification below.

    Square: Are you going to open another trouble ticket with TATA, or should I? Either way is fine.

    TATA Communications contact information:

    Phone - 1-800-567-1950
    Email - datacustomer.service@tatacommunications.com

    5 37 ms 38 ms 38 ms sea-brdr-02.inet.qwest.net [67.14.41.18]
    6 38 ms 36 ms 43 ms ix-1-0-0-0.tcore1.00S-Seattle.as6453.net [64.86.123.77]
    7 119 ms 119 ms 156 ms if-14-2.tcore1.PDI-PaloAlto.as6453.net [64.86.123.22]
    8 182 ms 136 ms * if-1-2.tcore1.NYY-NewYork.as6453.net [66.198.127.6]
    9 * 150 ms 137 ms if-11-2.tcore2.NYY-NewYork.as6453.net [216.6.99.1]
    10 * * * Request timed out.
    11 137 ms 147 ms * if-0-2.tcore1.MTT-Montreal.as6453.net [216.6.115.89]
    12 138 ms 137 ms 138 ms if-5-2.tcore1.W6C-Montreal.as6453.net [64.86.31.6]
    13 143 ms 140 ms * 66.198.96.50
    14 * * 144 ms 192.34.76.2
    15 141 ms * 144 ms 199.91.189.234
    16 141 ms * 141 ms 199.91.189.43


    C:\WINDOWS\system32>ping 199.91.189.43

    Pinging 199.91.189.43 with 32 bytes of data:
    Reply from 199.91.189.43: bytes=32 time=142ms TTL=243
    Reply from 199.91.189.43: bytes=32 time=143ms TTL=243
    Request timed out.
    Reply from 199.91.189.43: bytes=32 time=145ms TTL=243

    Ping statistics for 199.91.189.43:
    Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
    Approximate round trip times in milli-seconds:
    Minimum = 142ms, Maximum = 145ms, Average = 143ms
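
    A single 4-packet ping only gives one data point, so if anyone else wants to gather a longer loss sample to attach to a ticket, here's a rough Python sketch (it just shells out to the stock Windows ping and assumes English-language output, so treat it as a starting point, not gospel):

    import subprocess
    import time

    TARGET = "199.91.189.43"   # same endpoint pinged above
    SAMPLES = 60               # one probe per second for a minute

    lost = 0
    for _ in range(SAMPLES):
        # Windows ping: -n 1 = single echo request, -w 1000 = 1-second timeout
        result = subprocess.run(["ping", "-n", "1", "-w", "1000", TARGET],
                                capture_output=True, text=True)
        if result.returncode != 0 or "Request timed out" in result.stdout:
            lost += 1
        time.sleep(1)

    print("Sent %d, lost %d (%d%% loss)" % (SAMPLES, lost, 100 * lost // SAMPLES))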

    UPDATE 13:08 MST 9/20/13:

    I was finally able to log back in after the NA/EU lobby timeouts. Once I had a stable connection, I immediately ran a tracert to see if any back-end work had been performed. It looks like there was: instead of being routed through the New York nodes (see above), I am back on the Chicago route (I normally experience low latency, if any, through the Chicago hops).

    Square: TATA seems pretty inept at keeping its routes stable and clean. You may want to look into another network provider if/when your service contract with them expires (just not Rackspace, for the love of God). Also, in the spirit of transparency, can one of your team members (preferably a Sr. Systems Admin) confirm that back-end work is taking place and possibly provide the open ticket number logged with TATA?

    4 23 ms 23 ms 23 ms boid-agw1.inet.qwest.net [184.99.65.49]
    5 36 ms 36 ms 69 ms sea-brdr-02.inet.qwest.net [67.14.41.18]
    6 36 ms 36 ms 37 ms ix-1-0-0-0.tcore1.00S-Seattle.as6453.net [64.86.123.77]
    7 111 ms 110 ms 111 ms if-14-2.tcore1.PDI-PaloAlto.as6453.net [64.86.123.22]
    8 111 ms 111 ms 111 ms if-2-2.tcore2.PDI-PaloAlto.as6453.net [66.198.127.2]
    9 122 ms 110 ms 119 ms if-11-3.tcore2.CT8-Chicago.as6453.net [66.198.144.58]
    10 112 ms 113 ms 114 ms if-3-2.tcore1.W6C-Montreal.as6453.net [66.198.96.45]
    11 115 ms 115 ms 115 ms 66.198.96.50
    12 117 ms 116 ms 116 ms 192.34.76.2
    13 115 ms 115 ms 115 ms 199.91.189.234
    14 117 ms 116 ms 115 ms 199.91.189.43

    Trace complete.

    If need be, I can provide historical trace routes, digs, pings, and whois over the past month or so.
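
    For anyone comparing their own captures, here's a rough sketch of how I'd diff two saved tracert logs to see where the paths diverge (the file names are just placeholders, and it only keys on the named qwest/as6453 hops, so bare-IP hops are skipped):

    import re
    import sys

    def hops(path):
        # Pull the backbone hostnames (qwest.net / as6453.net) out of a saved tracert log
        names = []
        for line in open(path, encoding="utf-8", errors="ignore"):
            m = re.search(r"([A-Za-z0-9.-]+\.(?:qwest|as6453)\.net)", line)
            if m:
                names.append(m.group(1))
        return names

    # Usage: python compare_traces.py old_trace.txt new_trace.txt
    old, new = hops(sys.argv[1]), hops(sys.argv[2])
    for i, (a, b) in enumerate(zip(old, new), start=1):
        marker = "" if a == b else "   <-- path diverges here"
        print("%2d  %-45s %-45s%s" % (i, a, b, marker))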

    Love,
    Marishi Ten
    UPDATE 14:43 MST 9/20/13:

    I was able to log in and join an instance server. While the instance was running, I ran concurrent trace routes and ping checks, on the off chance that once the client switches to an instance partition the route tables would change. Oddly, they did not; they remained the same for me. So, we know a few things:

    1.) It's not location-based (I'm on the West Coast; East Coast users are being affected as well).
    2.) It's not host related (Square isn't changing the IPs when you jump into an instance).
    3.) It's not affecting all users (users who route through Chicago are fine).
    4.) It's not an ISP or local network issue (well, YOUR ISP or network, that is).
    5.) All users that are/were affected were being switched through the New York node (best guess; all the tracert info I saw from other users has one thing in common: packets dropping at an alarming rate on the NY hops). A quick check for this is sketched right after this list.
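
    If you want to check which camp you're in, here's a quick-and-dirty Python sketch (stock Windows tracert, English output assumed) that flags any NYY-NewYork hops in your path and counts the probes that never answered:

    import subprocess

    TARGET = "199.91.189.43"   # same endpoint as the traces above

    # Run the stock Windows tracert (this can take a few minutes if hops time out)
    out = subprocess.run(["tracert", TARGET], capture_output=True, text=True).stdout

    ny_hops = [l.strip() for l in out.splitlines() if "NYY-NewYork" in l]
    timeouts = out.count(" * ")   # every * is a probe that never came back

    print("New York hops in your path:")
    for hop in ny_hops or ["(none - you are likely on the Chicago route)"]:
        print("  " + hop)
    print("Timed-out probes in the trace: %d" % timeouts)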

    So, without SSH access to their nodes and admin ruby/console command access, I can say the issue is narrowed down to the NY hop owned by TATA. If I were the Systems Admin, I would be focusing on that specific set of switches and checking for corrupted caches, port overloading, hardware issues, etc. I would also temporarily reroute ALL traffic away from that node to another until I figured out what the problem was and got it corrected.
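
    To back that up with numbers, here's a rough per-hop loss check I'd run (the first two IPs are the NYY-NewYork interfaces from my first trace, the third is the endpoint for comparison; keep in mind backbone routers often deprioritize ICMP aimed at themselves, so treat the results as a hint, not proof):

    import re
    import subprocess

    # NYY-NewYork interfaces from the first trace, plus the endpoint for comparison
    HOPS = ["66.198.127.6", "216.6.99.1", "199.91.189.43"]

    for ip in HOPS:
        # 20 echo requests per address, 1-second timeout each (Windows ping flags)
        out = subprocess.run(["ping", "-n", "20", "-w", "1000", ip],
                             capture_output=True, text=True).stdout
        m = re.search(r"Lost = (\d+) \((\d+)% loss\)", out)
        if m:
            print("%-15s  lost %s/20  (%s%% loss)" % (ip, m.group(1), m.group(2)))
        else:
            print("%-15s  no summary line (hop may not answer pings)" % ip)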
    Last edited by Marishi-Ten; 09-21-2013 at 05:57 AM.