I was hit with a pretty substantial botnet "distributed scraping" attack yesterday.
- About 400,000 different IP addresses over about 3 hours
- Mostly residential IP addresses
- Valid and unique user agents and referrers
- Each IP address would make only a few requests with a long delay in between requests
It would hit the server hard until responses slowed, back off for about 30 seconds, then hit hard again. I was able to block most of the requests with a combination of user agent and referrer patterns, though some legitimate users may have been caught as well.
The attack was annoying, but the even bigger problem is that the data on this website is under license - we have to pay for it, and it's not cheap. We are able to pay for it (barely) with advertising revenue and some subscriptions.
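To give a sense of what that filtering looks like (a minimal sketch; the patterns below are placeholders for illustration, not the actual rules I deployed):

```python
import re

# Illustrative patterns only -- real rules come from staring at your own logs.
BAD_UA = re.compile(r"(headlesschrome|python-requests|go-http-client|curl)", re.I)
SUSPECT_REFERRER = re.compile(r"https?://(spam-example\.test|bad-referrer\.example)", re.I)

def should_block(user_agent: str, referrer: str) -> bool:
    """Return True if the request matches a user-agent or referrer block pattern."""
    if BAD_UA.search(user_agent or ""):
        return True
    if SUSPECT_REFERRER.search(referrer or ""):
        return True
    return False

print(should_block("python-requests/2.31", ""))  # True
print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "https://news.ycombinator.com/"))  # False
```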
If everyone is getting this data from their "agent" and scrapers, that means no advertising revenue, and soon enough no more website to scrape, jobs lost, nowhere for scrapers to scrape for the data, nowhere for legit users to get the data for free, etc.
In addition to pulling responses with huge amplification (40x, at least, for posting a single Facebook post to an empty audience), it's sending us traffic with fbclids in the mix. No idea why.
They're also sending tons of masked traffic from their ASN (and EC2), with a fully deceptive UserAgent.
The weirdest part, though, is that it's scraping the mobile-app APIs associated with the site in high volume. We see a ton of other AI-training-focused crawlers do this, but I was surprised to see the sudden change in behavior from facebookexternalhit ... it happened in the last week or so.
Everyone is nuts these days. Got DoSed by Amazonbot this month too. They refuse to tell me what happened, citing the competitive environment.
I'll implement Anubis at low difficulty for all my projects and leave a decent llms.txt referenced in my sitemap and robots.txt, so LLMs can still get relevant data from my site while keeping bad bots out. I'm getting thousands of requests from China that have really increased costs; glad the fix seems rather easy.
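For anyone setting up the same thing, the robots.txt side is roughly this (domain and paths are placeholders; there's no standard directive for llms.txt, so it's just a comment plus the sitemap entry):

```
# robots.txt -- placeholder domain and paths
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml

# No standard directive exists for this yet; LLM-oriented summary lives at:
# https://example.com/llms.txt
```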
Is the theory here that OpenAI, Anthropic, Gemini, xAI, Qwen, Z.ai etc are all either running bad scrapers via domestic proxies in Indonesia, or are buying data from companies that run those scrapers?
I want to know for sure. Who is paying for this activity? What does the marketplace for scraped data look like?
I love experimental data like this. So much better than the gut reactions that were spammed when Anubis was first introduced.
On Safari or Orion it is merely extremely slow to load.
I definitely wouldn't use any of this on a site that you don't want delisted for cryptojacking.
Very annoying. And you can't filter them because they look like legitimate traffic.
On a page with different options (such as color, size, etc.) they'll try all the combinations, eating all the resources.
If you have a logging stack, you can easily find crawler/bot patterns, then flag candidate IP subnets for blocking.
It's definitely whackamole though. We are experimenting with blocking based on risk databases, which run between $2k and $10k a year depending on provider. These map IP ranges to booleans like is_vpn, is_tor, etc, and also contain ASN information. Slightly suspicious crawling behavior or keyword flagging combined with a hit in that DB, and you have a high confidence block.
All this stuff is now easy to home-roll with Claude. Before, it would have been a major PITA.
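A rough sketch of that combination logic (the field names follow what these feeds typically return -- is_vpn, is_tor, asn -- but the prefixes and thresholds here are invented for the example):

```python
import ipaddress

# Invented example data: a risk feed typically maps prefixes to flags plus ASN.
RISK_DB = {
    ipaddress.ip_network("203.0.113.0/24"): {"is_vpn": True, "is_tor": False, "asn": 64500},
}

def lookup_risk(ip: str) -> dict:
    """Return the risk-DB record covering this IP, or {} if none matches."""
    addr = ipaddress.ip_address(ip)
    for net, info in RISK_DB.items():
        if addr in net:
            return info
    return {}

def high_confidence_block(ip: str, reqs_per_min: float, hit_honeypot: bool) -> bool:
    """Block only when suspicious behavior AND a risk-DB hit coincide."""
    risk = lookup_risk(ip)
    suspicious_behavior = reqs_per_min > 30 or hit_honeypot       # invented thresholds
    risky_network = bool(risk.get("is_vpn") or risk.get("is_tor"))
    return suspicious_behavior and risky_network

print(high_confidence_block("203.0.113.7", reqs_per_min=45, hit_honeypot=False))   # True
print(high_confidence_block("198.51.100.9", reqs_per_min=45, hit_honeypot=False))  # False
```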
What is the point of these anti bot measures if organic HN traffic can nuke your site regardless? If this is about protecting information from being acquired by undesirable parties, then this site is currently operating in the most ideal way possible.
The information will eventually be ripped out. You cannot defeat an army with direct access to TSMC's wafer start budget and Microsoft's cloud infrastructure. I would find a different hill to die on. This is exactly like the cookie banners. No one is winning anything here. Publishing information to the public internet is a binary decision. If you need to control access, you do what Netflix and countless others have done. You can't have it both ways.
It looks like it's computing sha256 hashes. Such an ASIC-friendly PoW has the downside that someone with ASICs would be able to either overwhelm the site or drive the difficulty so high that CPUs can never get through.
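For anyone unfamiliar with the scheme: the general idea (a sketch of hash-based PoW as such, not necessarily Anubis's exact protocol) is to find a nonce so that sha256(challenge + nonce) has N leading zero bits:

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) has difficulty_bits leading zero bits."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

# Each extra bit doubles the expected work; sha256 ASICs do this many orders
# of magnitude faster than a browser, which is the asymmetry described above.
print(solve("example-challenge", 16))
```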
https://web.archive.org/web/20260329052632/https://gladeart....
Not even a 404, just not available at all.
> Here is a massive log file for some activity in the Data Export tar pit:
A bit of a privacy faux pas, no? Some visitors may be legitimate.
So I started searching for what these residential proxy networks actually are.
https://datadome.co/bot-management-protection/how-proxy-prov...
One mistake people make is assuming that AI capability implies humanness. If you know exactly where to look, you can start to identify differences between improving frontier models and human cognition.
One concrete example from a forthcoming blog post of mine:
[begin]
In fact, CAPTCHAs can still be effective if you know where to look.
We ran 75 trials -- 388 total attempts -- benchmarking three frontier AI agents against reCAPTCHA v2 image challenges. We looked across two categories: static, where each image grid is an individual target, and cross-tile challenges, where an object spans multiple tiles.
On static challenges, the agents performed respectably. Claude Sonnet 4.5 solved 47%. Gemini 2.5 Pro: 56%. GPT-5: 23%.
On cross-tile challenges: Claude scored 0%. Gemini: 2%. GPT-5: 1%.
In contrast, humans find cross-tile challenges easier than static ones. If you spot one tile that matches the target, your visual system follows the object into adjacent tiles automatically.
Agents find them nearly impossible. They evaluate each tile independently, produce perfectly rectangular selections, and fail on partial occlusion and boundary-spanning objects. They process the grid as nine separate classification problems. Humans process it as one scene.
The challenges hardest for humans -- ambiguous static grids where the target is small or unclear -- are easiest for agents. The challenges easiest for humans -- follow the object across tiles -- are hardest for agents. The difficulty curves are inverted. Not because agents are dumb, but because the two systems solve the problem with fundamentally different architectures.
Faking an output means producing the right answer. Faking a process means reverse-engineering the computational dynamics of a biological brain and reproducing them in real time. The first problem can be reduced to a machine learning classifier. The second is an unsolved scientific problem.
The standard objection is that any test can be defeated with sufficient incentive. But fraudsters weren't the ones who built the visual neural networks that defeated text CAPTCHAs -- researchers were. And they aren't solving quantum computing to undermine cryptography. The cost of spoofing an iris scan is an engineering problem. The cost of reproducing human cognition is a scientific one. These are not the same category of difficulty.
[end]
Is this actually a problem? Most of my requests claim to be Amazonbot but someone showed me they weren’t and I’ve forgotten how.
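The usual check is forward-confirmed reverse DNS: resolve the IP to a hostname, make sure it sits under the crawler's published domain (for Amazonbot I believe that's crawl.amazonbot.amazon, but verify against Amazon's docs), then resolve that hostname forward and confirm it maps back to the same IP. A sketch:

```python
import socket

def is_verified_crawler(ip: str, expected_suffix: str) -> bool:
    """Forward-confirmed reverse DNS: IP -> hostname -> back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)            # reverse lookup
        if not host.endswith(expected_suffix):
            return False
        forward_ips = socket.gethostbyname_ex(host)[2]   # forward lookup
        return ip in forward_ips
    except OSError:
        return False

# Suffix as I recall it from Amazon's published guidance -- double-check their docs.
print(is_verified_crawler("1.2.3.4", ".crawl.amazonbot.amazon"))
```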
We need a better solution.
Maybe I’m a bot, I gave up waiting before the progress bar was even 1% done.
JA4 fingerprinting works decently for the residential proxies.
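In case it's useful: JA4 is a TLS ClientHello fingerprint, so it gets computed at the TLS terminator and passed down to the app, which can then block or challenge per fingerprint. A minimal sketch, assuming the proxy injects it for you (the fingerprint values and thresholds here are made up):

```python
from collections import defaultdict

# Hypothetical blocklist of JA4 fingerprints seen abusing the site.
BLOCKED_JA4 = {"t13d1516h2_8daaf6152771_e5627efa2ab1"}

requests_per_ja4 = defaultdict(int)    # per-window counter; reset it on a timer

def check_request(ja4: str) -> str:
    """Decide what to do with a request based on its JA4 fingerprint."""
    if ja4 in BLOCKED_JA4:
        return "block"
    requests_per_ja4[ja4] += 1
    if requests_per_ja4[ja4] > 1000:   # invented per-window threshold
        return "challenge"             # e.g. hand off to Anubis or a CAPTCHA
    return "allow"

print(check_request("t13d1516h2_8daaf6152771_e5627efa2ab1"))  # block
```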
> "The idea is that at individual scales the additional load is ignorable, ..."
Three minutes, one pixel of progress bar, 2 CPUs at 100%, load average 4.3 ...
The site is not protected by Anubis, it's blocked by it.
Closed.
That’s how fast the landscape is changing.
And remember: while the report might have been released in 2024, it takes time to conduct research and publish. A good chunk of its data was likely from 2023 and earlier.
Good luck banning yourself from the future.
If a website takes that long to verify me, I'll bounce. That's it.
I know / we know lol
I may be missing something of course