Originally posted by maladat:
quote:
Originally posted by sigfreund:
Is there way to calculate the probability that after X number of rounds without a failure of some sort that the next shot will not result in a failure? I.e., if I fire 200 rounds without a failure, what is the probability that shot 201 will not result in a failure? And keep in mind that I’m not asking about sampling a large number of firearms as if they were light bulbs and I was trying to determine what their average life was. I’m referring to a single firearm and a specific number of shots fired. To be more specific, I have fired my 357/40 P320 exactly 2653 times without a malfunction. What is the probability that if I must defend myself with it tonight that I will have a malfunction with the first shot I fire? (The answer is obviously not 1 ÷ 2653 or 0.000377, because if it were that simple, the probability that there would be a malfunction after two shots were fired would be 1 ÷ 2, or 50%—or sort of like having a 50/50 chance of winning the lottery: I’ll either win or I won’t.)
I'll take a shot at this question. I'm not a statistician, but I am a computer scientist who has had to study a whole bunch of probability theory (which is the mathematical basis of statistics).
Note that all probabilities used here are given from 0 to 1 rather than from 0% to 100% (with 0 equivalent to 0%, and 1 equivalent to 100%) unless specifically noted.

First, it seems clear that the per-shot failure probability of a given firearm isn't really constant - things like how clean the firearm is, parts breaking in, springs wearing out, etc., might affect it.
Given a large population of firearms and detailed data, we could model this, but we're talking about a single firearm here, so on this front we just have to give up, keep the firearm well maintained, and assume the per-shot failure rate is a constant. Let's call it "F."
Second, given a number of shots taken, "N," with no failures, we don't have good information about how frequently failures occur, so we can't estimate F directly.
What we CAN do is estimate an upper bound for F - an estimate we are pretty sure is larger than the real value of F.
Specifically, we are looking for a confidence interval for the value of F (strictly speaking, a one-sided upper confidence bound). What that means is that, given our observation (N shots fired without a failure), we believe with some probability "C" that F is less than some specific value.
There is a fairly involved proof of the following that I won't reproduce here. If you want to read about it, look up the confidence interval of a binomial distribution with no successes.
It turns out that in cases like this - 0 failures in N trials - if you want confidence C that F is less than some value F0, then F0 is just the value of F that gives you a probability (1-C) of getting 0 failures in N trials. (In general, confidence bounds are not this simple to compute.)
So, say, if you want to know with 95% confidence that F is less than some specific value F0, then F0 is the value of F which gives you a 5% chance of getting 0 failures. The probabilities flip like that essentially because you're saying "I might have gotten really lucky this time, so the failure probability might be worse than it looks." The more confident you want to be about your estimate of the worst possible failure probability, the more luck you have to worry about having gotten in your test.
The probability of getting 0 failures in N trials is trivial to calculate - it's just (1-F)^N - so we set it equal to (1-C):
(1-F)^N = 1-C
Now we can rearrange:
(1-F) = (1-C)^(1/N)
F = 1 - (1-C)^(1/N)
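If it helps to see that as code, here's a quick Python sketch of the formula (the function name is just something I made up for this post):

def failure_rate_upper_bound(n_shots, confidence):
    # Upper bound on the per-shot failure rate after n_shots fired with zero
    # failures, at the given confidence (both on the 0-to-1 scale used above).
    return 1 - (1 - confidence) ** (1 / n_shots)

# sigfreund's 2653 failure-free rounds, at 95% confidence:
print(failure_rate_upper_bound(2653, 0.95))  # about 0.00113, i.e., roughly 0.11%

In other words, 2653 rounds without a malfunction supports a 95%-confidence worst case of roughly 1 failure per 890 shots.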
So, let's say we shot 200 rounds without a failure, and we want to be 95% sure about our estimate of the gun's failure probability.
F = 1 - (1-0.95)^(1/200) = 0.0149, or 1.49%.
So, basically, shooting 200 rounds without a failure lets us be 95% sure that the gun's per-shot failure rate is less than 1.5%, but beyond that, we can't really say anything. It could be 1 in 100 (1%) or 1 in 10,000 (0.01%) and we have no idea.
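A quick simulation makes the "maybe I just got lucky" logic concrete: if the true per-shot failure rate really were that 95% bound of about 1.49%, only about 1 in 20 strings of 200 shots would come out failure-free. Here's a rough sketch of that check (the variable names and trial count are mine):

import random

random.seed(0)
F0 = 0.0149        # the 95% upper bound for 200 failure-free rounds
N = 200
strings = 100_000

clean = sum(
    all(random.random() >= F0 for _ in range(N))  # True if no shot in the string failed
    for _ in range(strings)
)
print(clean / strings)  # lands near 0.05 - about 5% of strings are failure-free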
There's an approximation you can use (it's very accurate as long as N isn't tiny) that reduces this formula to:
F ~ -ln(1-C)/N.
Using the same numbers from above, we get:
F ~ -ln(1-.95)/200 = 2.996/200 = 0.015, or 1.5%.
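For what it's worth, you can see how close the approximation is by comparing it to the exact formula for a few round counts (another quick sketch; the helper names are mine):

import math

def exact_bound(n, c):
    return 1 - (1 - c) ** (1 / n)

def approx_bound(n, c):
    return -math.log(1 - c) / n

# Exact vs. approximate 95% bounds for a few round counts
for n in (100, 200, 1000, 2653):
    print(n, round(exact_bound(n, 0.95), 5), round(approx_bound(n, 0.95), 5))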
This leads to the so-called "rule of three" - for a 95% confidence interval, you can just use F = 3/N. There's an article on Wikipedia about the rule of three.
https://en.wikipedia.org/wiki/...of_three_(statistics)
You can generalize the "rule of three" to other confidence values. For 99% confidence, -ln(1-.99) = 4.61, so you can use F = 4.61/N. For 99.9% confidence, -ln(1-.999) = 6.9, so you can use F = 6.9/N. And so on.
So with 99.9% confidence, shooting 200 rounds without a failure means your per-shot failure rate is no worse than 6.9/200 = 0.0345, or 3.45%.
Being 99.9% sure instead of 95% sure makes the maximum failure rate worse because with 95% confidence, you're only worried that you might have gotten really lucky when you shot 200 rounds without a failure. To be 99.9% sure, you have to be worried that you might have gotten really, really, REALLY lucky when you shot 200 rounds without a failure.
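Putting the three confidence levels side by side for the same 200-round test (just a quick sketch; the multipliers match the ones above):

import math

N = 200
for c in (0.95, 0.99, 0.999):
    multiplier = -math.log(1 - c)   # about 3.00, 4.61, and 6.91
    print(f"{c:.1%} confidence: F < {multiplier:.2f}/{N} = {multiplier / N:.4f}")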
This sort of analysis is pretty common, e.g., in medical studies - if they test a new drug on 1000 people and have no serious adverse reactions, they still want to be pretty confident about the maximum rate of serious adverse reactions.