Quote Originally Posted by Kes13a View Post
why is it every time someone expounds on the potential of false flagging I see someone likely going to be banned for botting?
Sometimes, that certainly might be the reason. Other times, it might just be a software engineer (or systems engineer) who's dealt far too often with incorrectly flagged data, thanks to criteria that don't actually define the bounds of the scenario a system is trying to flag.

Which is, among other things, a problem I am dealing with on a project at work right now.

(Please send help; I am in an AWS-hosted version of hell.)

I wouldn't be altogether surprised if they've actually done studies of this behind the scenes, though; it's not uncommon to test such things by setting up the system, having it 'dry run' rather than actually acting on results, and then hand-checking what it flagged. And if the numbers were too skewed towards false positives (whatever 'too skewed' is by Square-Enix's reckoning, be that 10%, 1%, or 0.1%), most folks wouldn't push a system live.
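To make the dry-run idea concrete, here's a minimal sketch (all names and numbers are made up for illustration, not anything Square-Enix actually does): the detector flags accounts without acting on them, someone hand-checks the flagged set, and go-live is gated on the resulting false positive rate.

```python
# Hypothetical 'dry run' (shadow mode) evaluation: the detector flags
# accounts but takes no action; flagged accounts are hand-checked, and
# the system only goes live if the false positive rate is low enough.

def false_positive_rate(flagged, confirmed_bots):
    """Fraction of flagged accounts that hand-checking cleared."""
    flagged = set(flagged)
    if not flagged:
        return 0.0
    false_positives = flagged - set(confirmed_bots)
    return len(false_positives) / len(flagged)

# Made-up numbers purely for illustration.
flagged = ["acct_%d" % i for i in range(100)]
confirmed = flagged[:97]  # hand-checking confirmed 97 of the 100 flags

fpr = false_positive_rate(flagged, confirmed)
threshold = 0.01  # whatever 'too skewed' means: 10%, 1%, 0.1%...
print("false positive rate: %.1f%%" % (fpr * 100))  # 3.0%
print("push live?", fpr <= threshold)               # False
```

The whole point of shadow mode is that the cost of a false positive during the test is zero, so you can tune the criteria before anyone gets wrongly banned.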

Mind you, I wouldn't be surprised if they haven't done any such tests, either; my field also has a tendency to overengineer solutions, which can lead to a sort of paralysis where you spend so much time plugging potential flaws in a system design that you never settle on a final design and actually build the thing.

They could even just have decided the entire thing is a mess not worth dealing with, because it effectively becomes an escalating arms race à la email and text-message spam: the botters keep finding ways around the system, so the system has to be redesigned every so often. Which would definitely be disappointing, but "constantly updating the solution will cost too much, so just ignore the problem and hope it goes away" is also an approach I've seen a non-zero number of times.

A depressingly non-zero number, at that.