
  1. #1
    Valmar
    Join Date: Sep 2013
    Location: Ul'dah
    Posts: 446
    Character: Valmar Atheron
    World: Leviathan
    Main Class: Paladin Lv 70
    Quote Originally Posted by giantslayer
    Abuse is definitely a concern, and there are safeguards that should be in place to address that:

    1. Make it take a decent number of reports to auto-lock someone, and don't tell players how many it takes. If it takes 30, then a few angry jerks can't freeze someone, but it would still be easy for real spammers to get locked. They send hundreds, if not thousands, of spam messages a minute, and with reporting being easy, you could get hundreds of reports in a matter of seconds.

    2. The reports should contain the text of the reported message and who submitted the report, so a Square rep could review them and tell if it was a false charge. The Square rep could then take disciplinary action against the people making false reports. For every case like that warranting manual review, there will be thousands of legitimately reported spammers.

    IRL, I work in banking. With millions of transactions posting to people's accounts every day, virtually none of them get manually reviewed unless the account holder initiates a dispute. Same principle here. When you are dealing with high accuracy and high volume (we would be talking hundreds of thousands to millions of RMT reports if they implement an easy method), it is important to make human review the exception, initiated at the customer's request.

    Also keep in mind that each report submitted has already been reviewed by a human player. If 30 people report someone for RMT, the only way they are all wrong is if it is coordinated malicious player behavior, and Square can step in to review after a frozen player makes that claim, rather than sifting through thousands of reports to find a handful of bad ones.

    On a side note, it would be good to be able to report things like harassment through a similar system, so that the report captures the offender's exact text, not just what the reporting player types up. Harassment would still get a manual review rather than a freeze at X reports, because those are a very different type of situation (and also much, much less common).

    The only problem I see with this is that if someone can abuse it, someone will abuse it.

    Example A) Free Company A has 400 members, with 150 online at all times. David the DRG, from FC A, is farming fleece on a Friday night. His camp is being taken over by Whitney the WHM. David says, "Go farm elsewhere, Whitney, I was here first!" "I have as much right to be here as you do!!" replies Whitney.

    Seconds later, David asks his FC in Mumble to report Whitney the WHM for botting: "She's out-claiming me at my camping spot," says David. "Let's freeze her account!"

    It's Friday, and SE support isn't available in the evenings or on weekends.

    And just as Whitney has out-claimed David for the 20th time, spamming Holy to kill the sheep, she gets kicked from the game as the 58th member of David's FC completes his report submission.

    Whitney now cannot play all weekend, because David wanted to act like a child and had the FC members online to force an account freeze.

    A lot of safeguards will have to be put in place to keep legitimate players from being instantly frozen for the wrong reasons.

    While I do believe RMT spam is a problem, I do not think an algorithm with the ability to instantly freeze accounts on its own is a good idea.


  3. #3
    giantslayer
    Join Date: Jun 2014
    Posts: 470
    Character: Colette Pascal
    World: Brynhildr
    Main Class: Arcanist Lv 70
    Quote Originally Posted by Valmar
    The only problem I see with this is that if someone can abuse it, someone will abuse it.

    Example A) Free Company A has 400 members, with 150 online at all times. David the DRG, from FC A, is farming fleece on a Friday night. His camp is being taken over by Whitney the WHM. David says, "Go farm elsewhere, Whitney, I was here first!" "I have as much right to be here as you do!!" replies Whitney.

    Seconds later, David asks his FC in Mumble to report Whitney the WHM for botting: "She's out-claiming me at my camping spot," says David. "Let's freeze her account!"

    It's Friday, and SE support isn't available in the evenings or on weekends.

    And just as Whitney has out-claimed David for the 20th time, spamming Holy to kill the sheep, she gets kicked from the game as the 58th member of David's FC completes his report submission.

    Whitney now cannot play all weekend, because David wanted to act like a child and had the FC members online to force an account freeze.

    A lot of safeguards will have to be put in place to keep legitimate players from being instantly frozen for the wrong reasons.

    While I do believe RMT spam is a problem, I do not think an algorithm with the ability to instantly freeze accounts on its own is a good idea.
    I'm glad you brought this up. Hopefully the developers will read this exchange and it will help them.

    The system I am proposing is reporting through the chat log. If David's FC mates don't have any messages from Whitney in their chat logs, then they can't report her for RMT. This means they most likely couldn't pull this off in the first place.

    If they somehow found a way to do so (for example, by all gathering in that same spot, or by sending tells to Whitney so that they could flag her replies), then Whitney would be frozen out of her account over the weekend. On Monday, she calls Square. The rep reviews the reports, sees they are not RMT, reinstates Whitney's account, and then issues either bans or warnings to David and the FC mates who participated. In that scenario, David and his FC have essentially engaged in player harassment, so Square would have grounds to ban them and a reasonable amount of evidence to support it.

    If implemented properly, this system is actually very robust and intelligent (compared to "dumb" systems like filters).
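
    Just to make the flow concrete, here is a rough Python sketch of how the chat-log report, the hidden auto-lock threshold, and the review-on-appeal step could fit together. The threshold value, the function names, and the storage are all placeholders I made up to illustrate the idea; none of this is SE's actual implementation.

        from collections import defaultdict

        LOCK_THRESHOLD = 30  # example value only; the real number would be hidden from players

        chat_logs = defaultdict(set)   # player -> {(sender, message text)} actually received
        reports = defaultdict(dict)    # accused -> {reporter: message text}, kept for review
        locked_accounts = set()

        def receive_message(recipient, sender, text):
            # The client records exactly what the recipient saw in their chat log.
            chat_logs[recipient].add((sender, text))

        def report_rmt(reporter, accused, text):
            # A report is only accepted for a message that really is in the
            # reporter's own chat log, which is what stops David's FC from
            # reporting Whitney blind.
            if (accused, text) not in chat_logs[reporter]:
                return "rejected: that message is not in your chat log"
            reports[accused][reporter] = text  # each reporter only counts once
            if len(reports[accused]) >= LOCK_THRESHOLD and accused not in locked_accounts:
                locked_accounts.add(accused)
                return "threshold reached: account locked pending review"
            return "report recorded"

        def review_on_appeal(accused):
            # A rep pulls the stored message text plus who reported it, then
            # either reinstates the account or upholds the lock and disciplines
            # any false reporters.
            return sorted(reports[accused].items())

        # Example: a gil seller whispers 40 players, and each one reports the message.
        for i in range(40):
            victim = f"player{i}"
            receive_message(victim, "gilseller", "cheap gil at example-rmt-site")
            print(report_rmt(victim, "gilseller", "cheap gil at example-rmt-site"))

    The point of the sketch is that the lock only ever comes from many distinct players flagging a message they actually received, and everything needed for a human review is already sitting in the report record if someone appeals.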