I used the word "chances" because the percentage actually says a 41% CHANCE of success. I never said it was a guaranteed success... still, some people have problems reading, I think...
Mei
It's only going to be 41 out of 100 on average, as well. Small sample sizes leave plenty of room for outliers and streaks to complicate the picture.
Which brings me to another point no one has mentioned: not a single person here has properly recorded their successes and failures at each success percentage.
For example, at 99% success I've done 200 synths. My data shows a 95% success rate from those 200 synths so far. Even this is too small a sample size to be statistically significant.
This is the only kind of data that matters, not anecdotal evidence from a couple of synths. That means literally nothing to statistics.
Proving the RNG is broken is a statistical problem. You must, through statistics, prove there is an imbalance. I won't believe a single complaint until people start recording this data to show whether a true bias exists or not. Otherwise it's all subjective and anecdotal.
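For what it's worth, here's a minimal sketch (Python, with hypothetical counts standing in for real recorded data) of the kind of exact binomial test that would turn a meld log into actual evidence:

```python
import math

def lower_tail(k, n, p):
    """Exact probability of k or fewer successes in n trials
    when the true per-attempt success chance is p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical log: 75 successes out of 200 melds at a listed 41% rate.
p_value = lower_tail(75, 200, 0.41)
print(f"Chance of doing this badly or worse by pure luck: {p_value:.3f}")
```

If that printed probability stays tiny across a large, honestly recorded log, you have a case; if not, the RNG is behaving as listed.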
No, I'm saying you don't actually understand how it works.
When something has a percentage chance of happening, you should always expect the outcome to be random. If you have, say, a 75% chance to succeed, you should expect to get mostly successes over time, not conclude that something is wrong whenever an individual attempt fails.
Well, not exactly. Most social sciences hold that 30 is the generally-accepted minimum number for a sample size, but also that as you increase sample size beyond that point, statistical significance increases in tandem. 200 is a perfectly valid sample size from which to draw conclusions.
The law of probability.
As you increase your trial count, the chance of getting at least one success becomes higher. Go flip a coin and see what the probability is of failing to flip a single head in 10 tries. (hint -> (1/2)^10 = 0.00097 = almost zero)
41% can mean many things, but the end result is: IF you roll X number of times, the success rate should be 41%.
So ...
roll (fail 0.0x) 1x = 0%
roll (fail 0.0x) 2x = 0% (next roll should be a success)
roll (success 33%) 3x = 33%
Why?
The condition is on total chances: if you did a trial of 1000, you should have 410 successes. If you did a trial of 2, you MIGHT have 1 success, because your 50% is outside of the 41%, so you still have an 8% failure rate. But if you did a trial of 3, you should have at least 1 success, since you went from 100% -> 50% -> 33%.
Therefore, out of 14 rolls, the person should have at least 5.74 successes, which means he should have at least 5 successes, and possibly a 6th.
Don't believe I'm right? Go ahead, roll a die (17%) and see how far you can go without getting a 1 or 2 (34%).
However, this isn't a social science. This is statistics, pure and simple. http://en.wikipedia.org/wiki/Sampling_(statistics) As that wiki article's sheer length illustrates, it's not as simple as you seem to make it, and 30 is far from a realistic answer in statistics.
All that needs to be shown is that, given successive samples, the % success rate converges on the listed rate or fails to do so, using a sample large enough to be statistically significant. This is something not a single person here has even attempted. Anecdotes mean nothing. If you want Square to look at their algorithm more closely, then prove it's broken. Prove it with cold hard numbers.
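In that spirit, here's a minimal sketch (Python, with simulated rolls standing in for a real meld log) of what "show convergence" would look like in practice:

```python
import random

LISTED_RATE = 0.41

# Simulated attempts; in practice this would be your recorded meld log.
attempts = [random.random() < LISTED_RATE for _ in range(5000)]

successes = 0
for n, result in enumerate(attempts, start=1):
    successes += result
    if n in (10, 100, 1000, 5000):
        print(f"after {n:>4} attempts: observed rate {successes / n:.3f}")
```

The observed rate typically swings wildly at 10 attempts and hugs 0.41 by a few thousand; a real log that failed to converge would be exactly the cold hard numbers being asked for.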
Also, many people seem to forget how easy it is for Square to prove it works. They have the generator. It's entirely likely that the algorithm is surrounded by a unit test: it generates thousands of random numbers within a range in a second, then a very simple statistical analysis shows the numbers are evenly distributed along the range, within tolerances. It's so simple to verify that it's no surprise they have confidence their algorithm works correctly.
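A unit test like the one described is only a few lines. Here's a hedged sketch (Python; the bucket count and tolerance are arbitrary choices, not anything Square has published):

```python
import random

def test_rng_uniformity(draws=100_000, buckets=10, tolerance=0.05):
    """Draw many values in [0, 1) and check that each bucket holds
    roughly its fair share, within a crude relative tolerance."""
    counts = [0] * buckets
    for _ in range(draws):
        counts[int(random.random() * buckets)] += 1
    expected = draws / buckets
    for i, count in enumerate(counts):
        assert abs(count - expected) / expected < tolerance, f"bucket {i}: {count}"

test_rng_uniformity()
```

Note that a test like this only proves the generator is uniform in isolation; it says nothing about how the game code consumes those numbers, which is where a bug would more plausibly hide.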
Entirely false.
The First Law of Probability states that the results of one chance event have no effect on the results of subsequent chance events.
The Second Law of Probability states that the probability of independent chance events occurring together is the product of the probabilities of the separate events.
This entirely disproves what you said.
Do you mean the law of large numbers? Because the expected average is only really predictable after a very large sample size, depending on the factors at play.
14 attempts is not enough to set a worthy expectation, and doing so is the gambler's fallacy. It won't balance out in the (relatively) short term, even for something 50/50 like a coin flip.
http://www.j-bradford-delong.net/mov.../cltheorem.gif
OP, back to mathematics class for you.
ITT: people thinking that a sample of 14 (ok, 42) is sufficient to say that an RNG choosing values from pools with thousands of values available is broken.
Also, people crying that because it says "41%" you'll actually see 41% within 100 draws (more or less). Just run the test yourself. Play heads or tails: throw the coin 14 times and see how you fail to get exactly 7 heads. Throw it 100 times and witness how you fail to get exactly 50 (even 40 is unlikely with such a small sample).
You do have 41%. But you won't see it unless you do 1000+ attempts. Under 50 attempts you could see 10% or 90% success while the real probability remains at 41%.
If you take a simple example:
let's say the PRNG only uses 1000 values.
You have 410 successes and 590 losses.
The probability of drawing a long streak of losses is not negligible for a small sample like 14 draws,
because 41% doesn't mean you're drawing from a pool of 100. In fact, I wouldn't be surprised if the pool is 10k+ large. Tell me, then, how unlikely is it to get 14 (even 42) losses in a row? Your value is deeply drowned in the error margin.
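For concreteness, those streak odds take one line to compute (Python; 0.59 is the listed per-attempt failure chance):

```python
fail = 1 - 0.41  # per-attempt failure chance at a listed 41% success rate

for streak in (14, 42):
    print(f"P({streak} losses in a row) = {fail ** streak:.2e}")
```

That works out to roughly 6 in 10,000 for 14 straight losses: rare for any one player, but across thousands of players making thousands of melds, somebody will see it.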
It doesn't matter - all empirical sciences use the same statistical models and guidelines, which is part of the reason why mathematics is considered a "universal language." I'm not sure what a wikipedia article's length has to do with anything (except that it's a form of the Argument by Verbosity fallacy).
30 is fine. 200 is better. 10000 is awesome.
I would like each failure to increase the chance of success on the next attempt until ultimately you WILL get it done.
Roulette is the best. I like to play it at times, or just watch other people play it. "Oh. It's been red three times in a row now, I better bet on black." It's funny that they even display the history of what has been rolled. They do it to trick people into betting more on certain spaces. "17 has been hit three times out of the last 10. I should bet on 17!" No.. It's still random, and the odds of hitting 17 aren't any higher. Previous rolls don't affect the outcome of this roll.
It's the same thing with the RNG in this game. You could win betting on red 10 times in a row, but the minute you lose 6 times in a row betting on red, you cry about it, or claim that it is rigged.
It proves that it's not as simple as just saying "30", as you seem to want to do. It's clearly a topic with much discussion, many opinions, and several methods of arriving at a useful number, and stating one number to rule them all is not an adequate answer to the sample-size question. It wasn't an argument-by-verbosity fallacy at all; you missed the intended meaning. Just because I don't feel like breaking down each and every bullet point on that page doesn't mean its contents were irrelevant.
When dealing with percentages across a range this wide, 30 will not be significant enough. It will have nearly zero tolerance for streaks of any kind. That's not a statistic worthy of any significance.
It does matter. Social sciences are about human behavior. Humans do not act in a completely random manner. They are affected by culture, environment, upbringing, etc. The threshold for what is considered an appropriate sample size can be lower. It depends on what you're measuring.
When talking about purely random numbers, a sample size of 30 is negligible. 200 is tiny. 1000 is still on the small side. 10000 is decent.
Heh, it's discussions like this that make me dread the day when it's time for me to start overmelding my 3-star gear. Great discussion, everybody!
Just do it. Either you get them in the slots, great. You don't get them in, great also. No big deal.
Atma. I blame Atma.
The OP should feel lucky! With a 41% chance of success on each attempt, the probability that a series of 14 independent attempts would all be failures is quite small! OP, I deem you the luckiest person in this thread!
FF14's RNG isn't like Dota's pseudo-RNG, where the chance increases the more you fail. It doesn't matter how many times you didn't succeed; you'd still have the same chance every time.
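For anyone unfamiliar, here's a rough sketch of how a Dota-style pseudo-random distribution works (Python; the coefficient is illustrative, not Dota's actual value):

```python
import random

def prd_attempt(state, c=0.06):
    """One pseudo-random roll: the effective chance grows by c with
    every consecutive failure and resets after a success."""
    state["fails"] += 1
    if random.random() < c * state["fails"]:
        state["fails"] = 0
        return True
    return False

state = {"fails": 0}
print([prd_attempt(state) for _ in range(20)])  # long failure streaks get ever rarer
```

The average success rate still lands near a fixed target; the scheme just trades away long streaks in both directions.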
I'm sorry, I'm starting to get a little tired and frustrated trying to reiterate the point that keeps flying over your head, so I'm just going to be blunt: you don't know what you're talking about.
For the last time: 30 is a perfectly valid starting point for determining statistical significance. But as that number increases, so does the certainty with which you can declare statistical significance - so whenever possible, use as large a sample size as you can.
Here are a few actual sources (i.e., not wiki articles) for you to learn more about sample sizes & statistical power:
1) Student (1908a), “The Probable Error of a Mean,” Biometrika, 6, 1–25.
(1908b), “Probable Error of a Correlation Coefficient,” Biometrika, 6, 302–310.
2) http://www.stat.ufl.edu/~aa/articles...n_binomial.pdf
Also, the vast majority of that wiki article you linked was TOTALLY IRRELEVANT to the discussion, because it talks about so many facets of probability and sampling. Thus, it was a proof by verbosity.
If you two have any actual evidence (preferably academic in nature) that 30 is now considered "insignificant" or "negligible", rather than just relying on proof by assertion, I'd be happy to take a look at it.
Let me make it easier for you to understand. If you flip a coin 30 times, the odds of obtaining an exactly even split of 15 heads and 15 tails are small. But you are saying that you can start to infer significance from the results of a mere 30 flips; that you could say, "Well, I've had 18 heads and 12 tails; this is evidence that heads are more likely than tails." That you would expect the results to start conforming to the odds by 30 flips. This conformity simply isn't going to start appearing until the high double digits at the earliest for the vast majority of samples.
And I don't know where you got your statistics training, but if I'd taken a sample size of 30 to my tutor for any of my experiments and started to infer any kind of pattern from the results, I would've been laughed out of his study and referred back to my undergrad statistics lecturers.
Edit: Thinking about it, your mention of the 'social sciences' leads me to guess you do mostly qualitative research? Well, I've got news for ya, bud: that isn't science. There's a reason you get a BA, not a BSc.
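The 18-of-30 example above is easy to check exactly (Python; an exhaustive binomial tail rather than a simulation):

```python
import math

def prob_at_least(k, n, p=0.5):
    """Exact probability of k or more successes in n fair trials."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(18+ heads in 30 fair flips) = {prob_at_least(18, 30):.3f}")  # about 0.18
```

A roughly 18% chance of seeing that split from a fair coin is far too common to count as evidence of bias, which is the point being argued.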
The realist's laws of probability:
1. You get what you get.
2. Don't like what you got? See law number one.
Actually, I'm not so sure the RNG isn't bugged. I've had more than a few occasions where I would be farming the daily maps with a 97% success rate, and if I failed the first time, each successive swing would guarantee a failure unless I used an ability to raise the rate to 100%. I've had similar statistics from multiple melding failures.
It almost seems like the RNG doesn't actually re-randomize beyond the first attempt, so if you have a 90% chance at anything, succeeding or failing once predisposes you to the same result unless you force the RNG to change, either by altering the success/HQ/gathering/whatever rate or by changing the scenario, such as changing jobs or zoning.
On the other hand, it might just be bugged and stuck on a predetermined success-or-fail value after the first attempt, and SE has thus far just been hard pressed to find it when it occurs.
But then again, if you raise your chance to 100%, how can you know that hit wouldn't have succeeded anyway? I don't raise my chance to 100% on maps unless I'm down to the last gathering attempt. I've failed attempts several times, but really, I've only needed to use an ability to get 100% once, and that's simply because if I only have one chance then obviously I'm going to remove any risk of failure.
When it comes down to it, you're much more often getting an unremarkable result in line with the success rate than a lopsided one. The only difference is that people have a cognitive bias toward viewing certain patterns as more noteworthy than others, when that's all it is: a cognitive bias.
For example, if you flip a coin five times, you are just as likely to get heads -> heads -> heads -> heads -> heads as you are to get heads -> tails -> tails -> heads -> tails. You just take more note when you get five in a row because your mind considers it more notable, while the other pattern slips to the back of your mind as time passes, leaving the belief that you see the notable pattern more often than you actually do. Yes, you are more likely to not get heads 5 times in a row than to get heads 5 times in a row. No, you are not less likely to get heads 5 times in a row than to get any other specific sequence of results.
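The distinction is easy to see by brute force (Python; enumerating all 32 possible five-flip sequences):

```python
from itertools import product

# Every specific sequence of 5 fair flips is equally likely: 1/32.
sequences = list(product("HT", repeat=5))
print(len(sequences), "sequences, each with probability", 1 / 2**5)

# What differs is how many sequences share the same head COUNT:
all_heads = sum(1 for s in sequences if s == tuple("HHHHH"))
three_heads = sum(1 for s in sequences if s.count("H") == 3)
print(f"{all_heads} sequence is all heads; {three_heads} sequences have exactly 3 heads")
```

So "five heads" is rare as a category only because a single sequence fits it, while ten different sequences give exactly three heads.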
The stuff like this in crafting and gathering basically comes down to this: you're forgetting how many more times you got the more probable kind of sequence, because when it happens you have no bias to consider it noteworthy, and you just move on with your day. If people want to prove the RNG is truly broken, then rather than offhandedly saying "I get this more than I should," they have to do testing, recording every single attempt over a large sample size.
And as some people have mentioned: mathematical statistics requires much larger sample sizes than the social sciences. Ideally the social sciences would have larger sample sizes as well, but because there are more determined elements, the subject matter isn't purely random, and it simply isn't feasible to get a huge sample when you need individual people for each test, we've compromised and accepted smaller ones.
Thank you for the references. Now I see where you get the number 30 from. It's essentially the minimum number of samples required for calculating a confidence interval. Below 30, the error calculation isn't very accurate.
Say the real probability is 41%. Let's take a sample size of 30. From 30 independent tests, let's say I get 10 successes (exactly 41% of 30 is 12.3, so 10 seems reasonable).
Formula (1) is:
p' +/- z * sqrt( p' * (1 - p') / n )
where:
p' is the observed probability (10/30)
z (or z alpha /2) is the (1 - alpha/2) percentile of a standard normal distribution
n is the sample size (30)
For a 95% confidence interval, z is 1.96.
This gives us an interval of 0.3333 +/- 0.1687. What does that mean? It means, "I am 95% confident that the real probability is between 16.5% and 50.2%". And it just so happens that 41% does fall within that range.
However, this interval is pretty large, so you can't really say "Yes, the percentage really is 41%". If you want a more accurate measurement, then you need to reduce the interval size. To do that, increase the sample size n. If you increase n to 10000, the range is +/- 0.92%, which is much better.
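The same arithmetic in a few lines of Python, for anyone who wants to plug in their own numbers (the 10-of-30 figures are the example above):

```python
import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

low, high = wald_interval(10, 30)
print(f"95% CI: {low:.3f} to {high:.3f}")  # about 0.165 to 0.502
```

Swap in wald_interval(4100, 10000) to see the interval shrink to roughly +/- 1%, which is the point about sample size.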
Right. By the time the sample size increases to about 30, any results using a t-distribution will coincide with the results from a standard, normal distribution (with minimal error).
I agree. You'll always get more accurate numbers (and thus, be able to draw a conclusion with greater certainty) with a greater sample size.
I edited above. It wasn't +/- 4.6% but +/- 0.46%.
Edit: I was wrong again. (z alpha / 2) is 1.96, not 0.98, which means that my intervals were even larger (+/- 0.92% for n = 10000).
Additional source for a minimum sample size of 30 being acceptable
http://sphweb.bumc.bu.edu/otlt/MPH-M...ability11.html
To be fair, here's another source stating that treating a sample size of 30 as any kind of magic number is inane, and that determining sample size is far more complex than that.
http://www.umass.edu/remp/Papers/Smith&Wells_NERA06.pdf
And here's a source on how to handle your statistical analysis when the sample size is greater than or less than 30
http://www.amstat.org/publications/jse/v4n3/rhiel.html
That's how it is in every game. Someone comes on saying RNGesus hates them and gave them too many failed runs/slots/whatever in a row. Then another person comes along and says RNGesus loves them and gave them many much blessings. The devs look at it and go "Yep, working as intended" and nothing changes lol.
It's RNG. That's just how it goes :-\
The trouble is that many games apply RNG naively, without putting in any safeguard to avoid long streaks of "bad luck". That's frustrating for the player, and frustrating a player is never good.
Indeed, RNG is RNG, but let's admit that some safeguard against things like 14 fails in a row at a 41% success rate is needed. It's like playing heads or tails and getting heads 14 times in a row...
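One common safeguard is a simple pity counter, sketched here (Python; the cap of 5 consecutive fails is an arbitrary illustrative choice, not anything FFXIV implements):

```python
import random

def meld_with_pity(chance=0.41, pity_cap=5, attempts=10_000):
    """Roll normally, but force a success once a losing streak reaches pity_cap."""
    fails, results = 0, []
    for _ in range(attempts):
        success = fails >= pity_cap or random.random() < chance
        fails = 0 if success else fails + 1
        results.append(success)
    return results

results = meld_with_pity()
print(f"observed rate with pity: {sum(results) / len(results):.3f}")
```

Note that any safeguard like this quietly pushes the effective success rate above the listed 41%, which is presumably one reason developers leave the raw RNG alone.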