- > Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
- "“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.
- Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily (see the rough sketch below), and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
[1] https://arxiv.org/abs/1506.02640
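For a sense of how low the barrier is, here's a minimal, purely illustrative sketch using the open source ultralytics package with a stock pretrained checkpoint. Nothing here reflects what Omnilert actually runs; the weights file, frame name, and threshold are placeholders.

```python
# Illustrative only: a bare-bones detector cobbled together from an off-the-shelf model.
# Nothing here reflects Omnilert's actual system; model, frame, and threshold are placeholders.
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # stock pretrained weights; a vendor would presumably fine-tune on a firearms dataset

results = model("camera_frame.jpg")  # hypothetical CCTV frame
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    confidence = float(box.conf)
    if confidence > 0.5:  # arbitrary cutoff; where this sits drives the false-positive rate nobody publishes
        print(f"{label}: {confidence:.2f}")
```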
- I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
by mentalgear
4 subcomments
- Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them now, before it's too late.
- > Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
the AI "swatted" someone.
by tencentshill
3 subcomments
- "rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.
- Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - There is nothing in them but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
- This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone who may be more experienced and take a beat. AI in an agentic-state society (which is what we have in America, at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.
by shaky-carrousel
1 subcomments
- He could easily have been murdered. It's far from the first time that a bunch of overzealous cops have murdered a kid. I would never, ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.
- >the system “functioned as intended,”
Behold - a real life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
Except the fictional one from the series was more accurate...
- I think the most amazing part is that the school doubled down on the mistake by parroting the corporate line.
I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”
- If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
- Memories of Jean Charles de Menezes come to mind: https://en.wikipedia.org/wiki/Killing_of_Jean_Charles_de_Men...
- What is happening in the world? There should be some liability for this, but nothing will happen.
- > “They didn’t apologize. They just told me it was protocol. I was expecting at least somebody to talk to me about it.”
I wonder how effective an apology and explanation would have been? Just some respect.
by prmoustache
2 subcomments
- We blame AI here, but what's up with law enforcement that comes in with loaded guns in hand and sends someone to the ground and cuffs him before actually doing any check?
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out that all he had in his hands was a bag of Doritos, maybe politely asked to see the contents of his bag, explained that the search was triggered by an auto-detection system that can make occasional errors, and wished him a good day.
- The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
- I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
[0] https://www.youtube.com/watch?v=sIAnQwiCpRc
- An alert by one of these AI tools, which from what I understand have a terrible track record, should not be reasonable suspicion or probable cause to swarm a teenager with guns drawn. I wish more people in local communities would understand how much harm this type of surveillance and response causes. Our communities should not be using these tools.
- There are two basic ways AI can be used:
1. To enhance human productivity; or
2. To replace humans.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. The UK Post Office (Horizon) scandal, where a bad system accused postmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false and it was the system's fault. IMHO the people who signed off on and deployed it should be charged with negligent homicide; and
2. The Hertz case where people who had returned cars were erroneously flagged as car thieves and report was made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should run against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the new system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before filing a criminal complaint.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themslves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
by anal_reactor
2 subcomments
- The core of the issue is that many Americans do carry weapons which means that whatever the security system, it needs to keep in mind that the suspect might be armed and about to start shooting. This makes the police biased towards escalation because the only way against a shooter is to shoot first.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
- I don't have kids yet, but I may someday. I went to public school myself, and would prefer to send any kid of mine to public school as well. (I'm not hard against private schools, but I'd prefer my kid gets to make friends from all walks of life, not just people who have parents who can afford private school.)
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
- The regular types of school shootings weren't enough, so they added AI-powered police school shootings to the mix.
- >> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
No. If you're investigating someone and have existing reason to believe they are armed, then this kind of false positive might be prioritizing safety. But in general surveillance of a public place, IMHO you need to prioritize accuracy, since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope; you should, though, be catching the blatantly obvious ones at scale.
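A back-of-the-envelope Bayes calculation makes the point. The numbers below are invented purely for illustration, not Omnilert's real figures:

```python
# Base-rate arithmetic with made-up numbers: even a "good" detector produces
# mostly false alarms when the thing it looks for is extremely rare.
p_armed = 1e-5   # assumed prior: fraction of scanned students actually carrying a gun
tpr     = 0.99   # assumed true positive rate
fpr     = 0.001  # assumed false positive rate per person scanned

# P(actually armed | alert) by Bayes' rule
ppv = (tpr * p_armed) / (tpr * p_armed + fpr * (1 - p_armed))
print(f"P(armed | alert) ~= {ppv:.1%}")  # ~1%: the overwhelming majority of alerts are false alarms
```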
by jharrison11
0 subcomment
- Looks like, per their website, it did function as intended... It surfaces potential threats for the school to look at and make a human decision. The principal decided to send the police after the school safety team dismissed it as part of the correct process. I mean, fire alarms go off for lots of things that are not fires... This was an alert meant to be validated by a human, and the human validation is what messed up.
It's pretty clearly documented how it works here:
https://www.omnilert.com/solutions/gun-detection-system
https://www.omnilert.com/solutions/ai-gun-detection
https://www.omnilert.com/solutions/professional-monitoring
by ignormies
1 subcomments
- > Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
"""
The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight.
"""
[1]: https://pluralistic.net/2024/10/30/a-neck-in-a-noose/
- Sincere, and snarky summary:
"Omnilert" .. "You Have 10 Seconds To Comply"
-now targeting Black children!
Q: What was the name of the Google AI Ethicist who was fired by Google for raising the concern that AI overwhelmingly negatively framed non-white humans as threats .. Timnit Gebru
https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with breathless, "Wwwwellll if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
by crazygringo
3 subcomments
- It sounds like the police mistook it as well:
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
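A minimal sketch of what multi-frame confirmation could look like, assuming the detector emits a per-frame label and confidence; the class, window size, and thresholds here are all hypothetical, not anything a real vendor documents:

```python
from collections import deque

# Hypothetical temporal filter: only confirm an alert if a "gun" detection
# persists across most of roughly one second of video, not a single frame.
class TemporalFilter:
    def __init__(self, window_frames: int = 30, min_hits: int = 24, min_conf: float = 0.8):
        self.window = deque(maxlen=window_frames)  # ~1 second at 30 fps
        self.min_hits = min_hits
        self.min_conf = min_conf

    def update(self, frame_has_gun: bool, confidence: float) -> bool:
        """Feed one frame's result; return True only once the detection is persistent."""
        self.window.append(frame_has_gun and confidence >= self.min_conf)
        return sum(self.window) >= self.min_hits

# A handful of noisy frames (a shadow, a crumpled chip bag) never trips the alert.
f = TemporalFilter()
frames = [(True, 0.9)] * 5 + [(False, 0.0)] * 25
print(any(f.update(hit, conf) for hit, conf in frames))  # False: 5 of 30 frames is not persistence
```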
- Systems like this need to report confidence in their assertions.
e.g. Not "this student has a gun" but "this model says the student has a gun with a probability of 60%".
If an AI can't quantify its degree of confidence, it shouldn't be used for this sort of thing.
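As a sketch of the suggestion, the alert could carry the raw score and tie the recommended response to it. The field names and thresholds below are invented for the example, not anything Omnilert documents:

```python
# Hypothetical alert format that keeps the model's uncertainty visible end to end.
def make_alert(camera_id: str, label: str, confidence: float) -> dict:
    if confidence >= 0.95:
        action = "notify safety team and police dispatcher"
    elif confidence >= 0.60:
        action = "notify safety team for human review only"
    else:
        action = "log only, no alert"
    return {
        "camera": camera_id,
        "claim": f"model reports a possible {label} with probability {confidence:.0%}",
        "recommended_action": action,
    }

print(make_alert("cafeteria-2", "firearm", 0.61)["claim"])
# -> model reports a possible firearm with probability 61%
```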
by neverkn0wsb357
0 subcomment
- It’s unsurprising, since this kind of classification is only as good as the training data.
And police do this kind of stuff all the time (or at the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I'm not sure what the answer is, but I definitely feel that “security” systems like this, which are purchased and rolled out, need to be highly regulated and coupled with extreme accountability and consequences for false positives.
- Everything around us: political tumult and weaponization of the justice system, ICE and other capricious projections of federal authority, the failure of drug prohibition, and on and on and on, points to a very simple solution:
Abolish SWAT teams. Do away with the idea that the state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
- How likely is it that the AI system would have classified the bag of Doritos as a weapon had the person carrying it been white instead of black?
by hsbauauvhabzb
1 subcomments
- I would be certainly curious to test ethnicity with this system. Will white students with a bag of Doritos be flagged, or only if they’re black?
- Wait… AI hallucinated and the police overreacted to a black kid who actually posed no threat?
I thought those two things were impossible?
- > “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
"Sorry, that's Nacho gun"
- "I am invoking my 4th and 5th amendment rights afforded to me by the Constitution of the United States of America. I have no further comment until I have consulted with and am in the presence of my legal council."
Then, just sit back and enjoy as the lawsuit unfolds.
- Armed and dangerous until proven chips.
- When people wonder how AI can mistake a bag of snacks for a weapon, simply answer "42".
It is all about the question: the answer becomes very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
by aussieguy1234
0 subcomment
- Inflicting trauma on a harmless human in the name of the "safety of others" is never OK. The victim here did not walk away unharmed; he is likely to end up with PTSD and all the mental health issues that come with it.
I hope they sue the police department over this.
- Sad for the student.
Imagine the head-scratching going on with execs who are surprised things don't work when probabilistic software is used for deterministic purposes, without realizing there's an inherent gap between the two.
by ratelimitsteve
0 subcomment
- The best part of the technocracy is that they're not actually all that good at anything. The second-best part is that when their mistakes end with someone dead, there will be some way in which they're not responsible.
- Wouldn't have thought an AI assessment of a security image would be enough for probable cause.
by 1970-01-01
0 subcomment
- Very ripe for a lawsuit. I would expect lawyers to be calling daily.
- At least there is a check done by humans in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?
by iamleppert
0 subcomment
- If I were that kid, I'd be suing the school, the AI company, the police, and anyone and everyone who had anything to do with the mistake.
by idontwantthis
3 subcomments
- If these AI video based gun detectors are not a massive fraud I will eat one.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like?
What does a man in a bulky sweatshirt with a pistol on his back walk like?
What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
- I can understand the outrage in this thread, but literally none of what you are all calling for will be done. No one from the justice system or law enforcement reads HN to see what should be done. I wish folks here would keep a cooler head rather than posting lengthy rants and vents that call for punishing school staff. It's really unprofessional and immature for a community that prides itself to fall constantly into a cycle of vitriol.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
- The only way we could have foreseen this was immediately.
by programjames
0 subcomment
- The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
by lawiejtrlj
0 subcomment
- This is Brazil-level insanity, but half the people on this forum hope to make money off it so it's fine
- This is only the beginning of AI-hallucinated policing. Not a good start, and I don't think it's going to end well for citizens.
- * AI mistakenly reports the student as a threat
* the student was black
Is that really a coincidence?
It's just a matter of time before this or something worse happens.
by hshdhdhehd
0 subcomment
- The solution is easy: gun control. We don't feel the need to have AI surveillance on people to detect guns in the rest of the world.
- Was this really an AI gun detection system, or just a machine that goes off randomly?
- In 1987, Paul Verhoeven predicted exactly this in the original RoboCop.
ED-209 mistakenly views a young man as armed and blows him away in the corporate boardroom.
The article even included an homage to:
“Dick, I’m very disappointed in you.”
“It’s just a small glitch.”
by SanjayMehta
0 subcomment
- Robocop.
Edit: And racism. Just watched the video.
by ninalanyon
0 subcomment
- Surely a human being should review the evidence before going off half-cocked.
- The "AI mistake" part is a red herring.
The real question is: would this have happened in an upper- or middle-class school?
The student has dark skin and is attending a school in a crime-ridden neighborhood.
Were it a white student in a low crime neighborhood, would they have approached him with guns drawn?
The AI failure is masking the real problem - bad police behavior.
by anothernewdude
0 subcomment
- America does American things.
- And so begins the ending of the "unfinished fable of the sparrows"
by sans_souse
0 subcomment
- I thought at first glance that the source was doritos.com
That would have been bold
by burnt-resistor
0 subcomment
- It wasn't sour cream and onion and didn't contain cash, so it's super sus.
But really, this is typical of cop overreaction driven by escalation and ego rather than calm, legal, and reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must exercise reasonableness and restraint to defend the vestiges of their impartiality and community confidence, by asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: the rough NYC false arrest of a father in front of his kid over retrieving his own mis-delivered package, where the egomaniacal bully cop aggressively lectures the guy to cover for his own mistake while blaming the victim: https://youtu.be/LXd-4HueHYE
by nullbyte808
0 subcomment
- I would get my GED at that point. Screw that school.
- With this level of hallucination, cops need to use tranquilizers more. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
- To be fair, most commercials for Doritos, Skittles, Mentos, etc., if they occurred in real life, would result in a strong police response just after they cut away.
- AI is a false (political) wish; it cannot and will never work. It is the desperation of an over-extended power structure trying to hold on and permanently consolidate control over the world's population, and nothing else.
The proofs are there.
Philosophers mulled this over long ago and made clear statements as to why AI can't work, though not for a second do I misunderstand that it is "all in" for AI, and we all get to go along for the 100-trillion-dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck yeah we can!
But anything that requires understanding is forever out of reach, which unfortunately is also lacking in the people pushing this thing now.
by thescriptkiddie
0 subcomment
- we need personal liability for the owners of companies that make things like this
- Can someone write the novel
“Computer says die”
by johnnyApplePRNG
0 subcomment
- Who knew eating Doritos could make you a millionaire?
I hope this kid gets what he deserves.
What a tragedy. I'm sure racial profiling on the part of the AI and the police had absolutely nothing to do with it.
by blindriver
1 subcomments
- How is this not slander? I would absolutely sue the fuck out of the people behind a system that puts people's lives in danger like this.
by twoquestions
0 subcomment
- Before I clicked the article, I said to myself "The victim's gotta be Black", and lo and behold.
AI has inherited police's (shitty, racist, and dangerous) idea that any Black person is a dangerous monster for whom anything is a weapon.
- Hallucinate much?
- This is what we get instead of reasonable gun control laws.
by nickdothutton
1 subcomments
- "Omnilert Gun Detect delivers instant gun detection, near-zero false positives".
- I was unduly surprised and disappointed when I saw the photo of the kid and he turned out to be black. I would love to believe that this had no impact on how the whole thing played out, but I don't.
by pickleglitch
0 subcomment
- Sounds like this high school is doing a great job preparing students for the real world, where they can be swarmed by jackbooted thugs at any moment for any reason.
by bobbyprograms
0 subcomment
- All right, they've gotta have a plain-clothes bro go up there and make sure the kid is chill. You know, the difference between a murder and not can be as little as somebody being nice.
- It's not gun detection.
That AI is racist, just like its white creators.
- > “false positive” but claimed the system “functioned as intended,”
Fuck you.
- >Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
by 6stringmerc
2 subcomments
- Feed the same system an image of an Asian kid and it will think the bag of chips is a calculator /s
Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...
by einrealist
0 subcomment
- Let's hope that, thanks to AI, the young man will now have a healthier diet! /s
- Poor kid, and what an incompetent police department not to use their own judgement...
But...
Doritos should definitely use this as an advertisement: Doritos - the only weapon of mass deliciousness, or something like that.
And of course pay the kid, so something positive can come out of the experience for him.