In particular, LLMs seem very good at passing the initial smell test, which I'd imagine is the first line of defense for most people when deciding whether to trust information. And unless it's something critical, most people probably wouldn't deem looking at sources worthwhile.
Lately I've been running many queries against multiple LLMs. Not as good as organic thinking, but comparing two does at least involve a bit of judgement as to which set of info is superior. Probably not the most eco-friendly solution...
Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.
In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.
> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart
I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.
Now that we have thinking models and methodology to train them, surely before long it will be possible to have a model that is very good at the kind of thinking that an expert OSINT analyst knows how to do.
There are so many low-hanging-fruit applications of existing LLM strengths that simply haven't been added to the training yet, but will be at some point.
What Dutch OSINT Guy was saying here resonates with me for sure: the act of taking a blurry image into the photo-editing software, the use of the manipulation tools... there seems to be something about those little acts that is an essential piece of thinking through a problem.
I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer but that's only helpful as reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to the block, putting the care into linking the connections and putting things in order.
Ideas and questions seem to come out of those little spaces. Maybe it's just finally giving our subconscious a chance to speak, hah.
L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/): the little moments of rote work, putting the dishes away, weeding the garden, walking the dog, are all essential parts of life. Taking care of the mundane is living, and we must attend to it with care and gratitude.
Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.
The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.
DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.
[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...
[2] https://apnews.com/article/us-intelligence-services-ai-model...
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect.
And implicitly related, philosophically:
> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”
> It spits out a convincing response: “Paris, near Place de la République.” ...
> But a trained eye would notice the signage is Belgian. The license plates are off.
> The architecture doesn’t match. You trusted the AI and missed the location by a country.
Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."
How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere else in Paris. Let's look into the license plate patterns...
At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.
EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...
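A minimal sketch of the kind of manual crosscheck described above, flagging Dutch vocabulary on signage the model placed in Paris. This is an illustrative heuristic only: the tiny word lists and the function name are my own assumptions, not a real tool; an actual workflow would use a proper translation or language-identification service.

```python
# Tiny hand-picked hint lists, purely for illustration.
FRENCH_HINTS = {"rue", "sortie", "place", "interdit"}
DUTCH_HINTS = {"straat", "uitgang", "plein", "verboden"}

def sign_language_hints(sign_text: str) -> dict[str, list[str]]:
    """Return which hint words from each language appear in the sign text."""
    words = {w.strip(".,!?").lower() for w in sign_text.split()}
    return {
        "french": sorted(words & FRENCH_HINTS),
        "dutch": sorted(words & DUTCH_HINTS),
    }

# A sign reading "Verboden toegang / Sortie" should raise a red flag:
# Dutch vocabulary in a photo the model claims is from Paris.
hints = sign_language_hints("Verboden toegang Sortie")
print(hints)  # both lists non-empty -> mixed French/Dutch signage
```

The point isn't the word lists; it's that the crosscheck is mechanical enough to script, so "trust the AI's stated clues" can be replaced with "test the AI's stated clues".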
For example, I've been learning Rust for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve working competence, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle until I don't, just like the old days.
2. Most analysts in a formal institution are professionally trained. In Europe, Canada and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills, for sure the good ones.
3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process so there are a lot of people who CAN be OSINT analysts or call themselves that and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.
4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are using expensive tools, and there will be pressure to give in to AI's suggestions.
5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before, or that took way too much time to be worth it. It's more like: here are some possible answers where there were none before. Your job is to validate them now and see what assumptions have been made.
6. These assumptions existed long before AI and OSINT. I've seen many cases where we had multiple people look at evidence to make sure no one was jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people, or fewer passes, to validate the data.
7. Feel Free to ask me more.
This seems contradictory to me. I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature. If they didn't research the tool and its limitations, that's lazy. At some point, they stopped believing in this limitation and offloaded more of their thinking to it. Why did they stop? I can't think of a single reason other than being lazy. I don't accept the premise that it's because the tool responded quickly, confidently, and clearly. It did that the first 100 times they used it when they were probably still skeptical.
Am I missing something?
It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.
To me it's like someone listing off deaths from fentanyl, how it's destroyed families and ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the way people use and are drawn into AI has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them.
A director of NSA, pre 9/11, once remarked that the entire organization produced about two pieces of actionable intelligence a day, and about one item a week that reached the President. An internal study from that era began "The U.S. Government collects too much information".
But that was from the Cold War era, when the intelligence community was struggling to find out basic things such as how many tank brigades the USSR had. After 9/11, the intel community had to try to figure out what little terrorist units with tens of people were up to. That required trolling through far too much irrelevant information.
Besides, "OSINT" has been busy posting scareware for years, even before "AI".
There's so much spam that you can't figure out what the real security issues are. Every other "security article" is about "an attacker" that "could" obtain access if you were sitting at your keyboard and they were holding a gun to your head.
Mere observation of others has shown me the decadence that results from even allowing such "tools" into my life at all.
(who or what is the tool being used?)
I have seen zero positive effects from the cynical application of such tools in any aspect of life. The narrative that we "all use them" is false.
But all the examples feel like people are being really lazy, e.g.
> Paste the image into the AI tool, read the suggested location, and move on.
> Ask Gemini, “Who runs this domain?” and accept the top-line answer.
    class ParsedAddress(BaseModel):
        street: str | None
        postcode: str | None
        city: str | None
        province: str | None
        country_iso2: str | None

Response:

    {
      "street": "Boulevard",
      "postcode": 12345,
      "city": "Cannot be accurately determined from the input",
      "province": "MY and NY are both possible in the provided address",
      "country_iso2": "US"
    }

Sure, I can spend 2 days trying out different models and tweaking the prompts and see which one gets it, but I have 33 billion other addresses and a finite amount of time.
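One cheap guard here is to validate the model's JSON against the declared types before accepting it. The sketch below uses only the standard library; the field names come from the schema above, but the function is my own illustration, not a real pipeline component:

```python
import json

# Types the ParsedAddress schema above declares (all nullable strings).
EXPECTED_TYPES = {
    "street": str,
    "postcode": str,
    "city": str,
    "province": str,
    "country_iso2": str,
}

def invalid_fields(raw: str) -> list[str]:
    """Return names of fields whose values are neither null nor the declared type."""
    data = json.loads(raw)
    return [
        name for name, expected in EXPECTED_TYPES.items()
        if data.get(name) is not None and not isinstance(data.get(name), expected)
    ]

response = (
    '{"street": "Boulevard", "postcode": 12345, '
    '"city": null, "province": null, "country_iso2": "US"}'
)
print(invalid_fields(response))  # flags "postcode": the model returned an int
```

This catches type-level garbage like the integer postcode automatically, though it does nothing about semantically wrong values such as "Cannot be accurately determined from the input" in a string field; those need their own sentinel checks.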
The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and they're falling for it yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.
Techne is the Greek word for craft: the skill of the hand.
Eventually, Brazil (1985) happens, to the detriment of Archibald [B]uttle, where everyone gives unquestioning trust to a flawed system.
I bet any OSINT person would have had my name and contact in half an hour.
I genuinely hope that if you're a professional intelligence analyst, it doesn't take a trained eye to distinguish Paris from Belgium. Every day there are articles like this: college students at elite universities who can't read, tariff policy by random number generator, programmers who struggle to solve first-semester CS problems, intelligence analysts who can't do something you can do if you play GeoGuessr as a hobby. Are we just getting dumber every year? It feels like we've been falling off a cliff over the last decade or so.
Like, the entire article boils down to "verify information and use critical thinking". You'd think someone working in intelligence or law enforcement, the people this author trains, would know this by the time they're hired?
To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code, leaving it to the type checker to catch them. If the type checker turns out to be buggy, their code may fail in production due to their sloppiness. However, we expect the code to eventually be free of type errors to a superhuman extent, because they are using a tool that is strong where they are personally weak.
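To make that analogy concrete, here's a minimal sketch; the function is my own invention, and I'm assuming mypy as the static checker:

```python
def total_price(quantity: int, unit_price: float) -> float:
    """Multiply quantity by unit price."""
    return quantity * unit_price

# A static checker such as mypy rejects the sloppy call before it
# ever reaches production:
#     total_price("3", 2.5)
#     error: Argument 1 to "total_price" has incompatible type "str"
# while the type-correct call goes through:
print(total_price(3, 2.5))  # 7.5
```

The programmer never manually audited the call sites; the tool's systematic check covers that personal weakness, which is exactly the division of labor the comment describes.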
AI isn't as provably correct as a type checker, but it's pretty good at critical thinking (superhuman compared to the average HN argument), and human analysts also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on the quality, and I don't see why the assumption is that it's negative. It might well be, but the article doesn't seem to go into that in any depth.
Maybe the article addresses that, I'm not permitted to read it, likely because I'm using IPv6.
Forensic Architecture is a decent counterexample, however. They've been using machine learning and computer synthesis techniques for years without dropping in quality.
This sort of lazy thinking doesn't miss a beat when it comes to take the opinions of an LLM at face value.
Why not? It sounds mostly the same. The motivation to believe AI is exactly the same as the motivation to believe government officials and journalists.
These tools are brand new and have proven kinks (hallucinations, for example). But instead of being, rightly, in my view, skeptical, the majority of people completely buy into the hype and already have full automation bias when it comes to these tools. They blindly trust the output, and merrily push forth AI generated, incorrect garbage that they themselves have no expertise or ability to evaluate. It's like everyone is itching to buy a bridge.
In some sense, I suppose it's only natural. Much of the modern economy sustains itself on little more than hype and snake oil anyway, so I guess it's par for the course. Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical seemingly adopt this nonsense without batting an eye. Worse, they all hype it up even further. Makes me feel like the whole LLM business is some kind of Ponzi scheme, given how willingly users will shill for these products for nothing.
I doubt anyone can do it perfectly every time; it requires a posthuman level of objectivity and a level of information quality that hardly ever exists.
It is, but it adds a disingenuous apologetic.
Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern that you can find everywhere in tech journalism:
Some really bad thing X is happening. Everyone knows X is happening.
There is evidence X is happening. But I am *not* arguing against X
because that would brand me a Luddite/outsider/naysayer.... and we
all know a LOT of money and influence (including my own salary)
rests on nobody talking about X.
Practically every article on the negative effects of smartphones or social media printed in the past 20 years starts with the same chirpy disavowal of the author's actual message. Something like: "Smartphones and social media are an essential part of modern life today... but"

That always sounds like those people who say "I'm not a racist, but..."
Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?
OSINT only exists because of internet capabilities and Google search; i.e., someone had to learn how to use those new tools just a few years ago and apply critical thinking.
AI tools and models are rapidly evolving, and more in-depth capabilities keep appearing in the models. All of this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve with the tools; the skill of humans overseeing AI is something that will develop too.