I am reposting an edited version of an April 2017 post that was either ignored or regarded as irrelevant at the time. However, when I forwarded my query to the Patriot One team, they replied that I was essentially correct: their methodology follows well-established statistical techniques. My old post was purely illustrative but focused on the importance of prevalence in a sample population.
Testing in a high-prevalence zone speeds up the reduction of false positives, so a casino or nightclub district is fertile ground for training the software. A middle school, not so much.
Old post (lightly edited):
My viewpoint is that of the security company that wishes to evaluate the technology.
The utility of a weapon detection system should be evaluated in a way that is similar to that of a diagnostic test for a disease.
Here is an example of a medical diagnosis spelled out in statistical terms. This balance of probabilities is often purely intuitive when made by a skilled, experienced diagnostician without cognitive bias. Sensitivity and specificity are properties of a medical test or medical imaging equipment. These may improve with technological advances, but that does not blunt the argument presented here.
Clinicians are chiefly concerned with the predictive value of the test, rather than its sensitivity in arriving at a diagnosis. The clinical question is: is a patient with a positive test result likely to actually have the disease?
A crucial point is that prevalence affects the predictive value of any test. This means that the same diagnostic test will have a different predictive accuracy according to the clinical setting in which you are applying it.
The following table illustrates this phenomenon.
Let's say sensitivity and specificity are 99% and 95% (this is a REALLY good test…)
As prevalence rises from 1% (e.g., heart disease among 30 year-olds) to 20% (e.g., heart disease among 70 year-olds), Positive Predictive Value (PPV) will rise from 17% to 83%: a huge difference in the clinical interpretation of the same test result.
You can just look at the first and last rows. (The middle rows of the table show how this result is calculated.)
The Impact on Positive Predictive Value (PPV) as Prevalence Changes,
for a test with 99% Sensitivity and 95% Specificity

|   | Prevalence                               | 1%    | 10%   | 20%   |
|---|------------------------------------------|-------|-------|-------|
| a | # in population                          | 1,000 | 1,000 | 1,000 |
| b | Diseased                                 | 10    | 100   | 200   |
| c | Not diseased                             | 990   | 900   | 800   |
| d | True positives on the test (b × 0.99)    | 10    | 99    | 198   |
| e | False positives on the test (c × (1 − 0.95)) | 50 | 45   | 40    |
| f | Total # positive on test (d + e)         | 60    | 144   | 238   |
|   | PPV (d / f)                              | 17%   | 69%   | 83%   |
(Source: Dr. Chan Shah: Public health and preventive medicine in Canada. Elsevier, Canada, 2003)
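The table's arithmetic can be reproduced in a few lines of Python. This is just a sketch of the textbook PPV calculation; the 99%/95% figures are the example values from the table above, not measured properties of any real detection system:

```python
def ppv(prevalence, sensitivity=0.99, specificity=0.95, n=1000):
    """Positive predictive value of a test applied to a population of n people."""
    diseased = n * prevalence
    not_diseased = n - diseased
    true_positives = diseased * sensitivity            # row d
    false_positives = not_diseased * (1 - specificity)  # row e
    return true_positives / (true_positives + false_positives)  # d / (d + e)

for p in (0.01, 0.10, 0.20):
    print(f"prevalence {p:.0%}: PPV = {ppv(p):.1%}")
```

Running this reproduces the first and last rows of the table: PPV climbs from roughly 17% at 1% prevalence to roughly 83% at 20% prevalence, with sensitivity and specificity held fixed.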
So your interpretation of any test result depends not only on the sensitivity and specificity of the test, but also on the baseline prevalence of the disorder in the population you are working with. Unless specificity is perfect (100%), falling prevalence means a growing share of positive results are false positives. So you need to be roughly aware of the prevalence in the population you are diagnosing and treating.
A rare lone gunman (perhaps in a middle school) is very different from a gunman in a nightclub scene, where there is a much higher prevalence of concealed weapons. The purchaser of security equipment will want to know the number of false negatives and false positives, which is determined in part by the likelihood of weapons in the population scanned.
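To make that contrast concrete, here is an illustrative calculation. The 0.1% (school) and 10% (nightclub) prevalence figures are purely hypothetical, and the 99%/95% sensitivity/specificity values are carried over from the medical example above, not Patriot One specifications:

```python
def expected_errors(prevalence, sensitivity=0.99, specificity=0.95, n=10_000):
    """Expected false negatives and false positives when scanning n people."""
    armed = n * prevalence
    unarmed = n - armed
    return {
        "missed weapons (false negatives)": armed * (1 - sensitivity),
        "needless pat-downs (false positives)": unarmed * (1 - specificity),
    }

# Hypothetical venues (prevalence values assumed for illustration only)
for label, p in [("school, 0.1% prevalence", 0.001),
                 ("nightclub, 10% prevalence", 0.10)]:
    print(label, expected_errors(p))
```

At the school-like prevalence, nearly every alarm is a false positive (hundreds of pat-downs per 10,000 scans to catch a handful of weapons), while at nightclub-like prevalence the alarms are far more often genuine, which is exactly why a high-prevalence venue is the better training and proving ground.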
All I am trying to say is that we need to know how PAT conducts these real world tests.
A false negative (a weapon got through that was not detected) can be damaging to the reputation of PAT.
A high false positive rate, which would require security staff on hand to do lots of pat-downs, will discourage its use. (end of old post)
Nonetheless, the radar signatures obtained from a high-prevalence area should be the same as those from a lone gunman approaching a school (conjecture!), so the rollout of the Patriot One system should proceed quickly after this period of optimization in Las Vegas.
This technology is a quantum leap from what we have today, though I expect my comments will be distorted both positively and negatively for people's own opaque reasons. Perfection versus good enough, much like many of the algorithms that guide our lives.
I have little doubt that the various security companies evaluating this technology are "deep in the weeds" regarding these statistics. An announcement of agreement with a Tier 1 security company will be proof that this is as good a technology as I have come to believe over the last 16 months.
Most readers already appreciate these points but I thought it might be useful to add some context. Should be an interesting year.