fwiw Anubis is working on a more respectful update; this was their first-pass solution for what was basically a break-glass emergency. I understand the FSF’s concern, but Anubis is the only thing making a free and open internet remotely possible right now, and far better it than nightmare fuel like Cloudflare.
How does it factor into “free” and “open”?
It seems to be more about IP protection than anything else.
A web server that can’t discriminate between a request made by a human and one made by a machine has to handle all requests. That may not be an issue for large companies like Amazon or Microsoft, but small websites will suffer timeouts and outages.
Without a locally hosted solution like Anubis, small websites would have to move behind a large centralized service like Cloudflare. Otherwise they might not be able to keep operating, and only large corporate-backed services like Twitter and Reddit would survive.
The alternative is having to choose between Reddit and Cloudflare. Does that look “free” and “open” to you?
That whole argument rests on two wrong suppositions.
It assumes that sites are under constant DDoS and cannot exist without DDoS protection.
This is false.
It assumes that Anubis is effective against DDoS attacks. It is not. It’s a mitigation, but any DDoS attack worth its name would have no trouble bringing down a site behind Anubis, since the server still has to handle the requests even if they are smaller requests.
Anubis’ only use case is making AI scrappers consume more energy while scrapping, while also making many legitimate users use more energy. It’s just being promoted on the anti-AI wave, but I don’t really see much usefulness in it.
It is literally happening. https://www.youtube.com/watch?v=cQk2mPcAAWo https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
It’s being used by some little-known entities like the LKML, FreeBSD, SourceHut, UNESCO, and the fucking UN, so I’m assuming it probably works well enough. https://policytoolbox.iiep.unesco.org/ https://xeiaso.net/notes/2025/anubis-works/
Oh, you’re one of those people. Enough said. (edit) By the way, Anubis’ author seems to be a big fan of machine learning and AI.
(edit 2 just because I’m extra cross that you don’t seem to understand this part)
Do you know what a web crawler does when a process finishes grabbing the response from the web server? Do you think it takes a little break to conserve energy and let all the other remaining processes do their thing? No, it spawns another bloody process to scrape the next hyperlink.
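To make that concrete, a naive crawler loop really is just “fetch, pull out the links, fan out”. A rough sketch (hypothetical code, not any particular company’s crawler) of why the load on the target only snowballs:

```python
# Toy illustration of a naive crawler: every fetched page immediately queues
# a fetch for every link it contains, with no politeness delay in between.
import re
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

seen = set()
pool = ThreadPoolExecutor(max_workers=64)  # as many workers as the scraper can afford

def crawl(url: str) -> None:
    if url in seen:
        return
    seen.add(url)
    html = urlopen(url, timeout=10).read().decode(errors="replace")
    # No break to "conserve energy": each link found spawns another task right away.
    for link in re.findall(r'href="(https?://[^"]+)"', html):
        pool.submit(crawl, link)

pool.submit(crawl, "https://example.org/")
```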
Some websites being under DDoS attack =/= all sites being under constant DDoS attack, nor sites being unable to exist without protection.
First, there’s a logical fallacy in there. Being used does not mean it’s useful. Many companies use AI for some task; does that make AI useful? No.
The logic still stands: all Anubis can do against DDoS is raise the barrier a little before the site goes down. That’s called mitigation, not protection. If you are targeted by a DDoS, that mitigation is not going to do much, and your site is going down regardless.
If a request is taking a full minute of user CPU time, it’s one hell of a mitigation, and anybody who’s not a major corporation or government isn’t going to shrug it off.
That’s precisely my point. It fits a very small risk profile: people who are going to be DDoSed, but not by a big actor.
That’s not the most common risk profile. Usually DDoS attacks are either very heavy or don’t happen at all. These “half gas” DDoS attacks are not really common.
I think that’s why, when I read about Anubis, it’s never in the context of DDoS protection. It’s always in the context of “let’s fuck AI”, like this precise comment thread.
There’s heavy, and then there’s heavy. I don’t have any experience dealing with threats like this myself, so I can’t comment on what’s most common, but we’re talking about potentially millions of times more resources for the attacker than the defender here.
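That asymmetry is the whole point of a proof-of-work challenge: verifying an answer costs the server one hash, while finding it costs the client a tunable amount of brute force. A minimal hashcash-style sketch (illustrative only, not Anubis’s exact scheme):

```python
# Hashcash-style proof of work: the client must find a nonce whose SHA-256
# hash starts with `difficulty` zero hex digits; each extra digit multiplies
# the client's average work by 16, while the server's check stays one hash.
import hashlib
import secrets

def solve(challenge: str, difficulty: int) -> int:
    """Client side: brute force, ~16**difficulty hashes on average."""
    nonce = 0
    while not verify(challenge, nonce, difficulty):
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash, microseconds regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = secrets.token_hex(16)        # issued per visitor/session
nonce = solve(challenge, difficulty=5)   # ~16**5, roughly a million hashes on average
assert verify(challenge, nonce, difficulty=5)
```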
There is a lot of AI hype and AI anti-hype right now, that’s true.
Websites were under a constant noise of malicious requests even before AI, but now AI scraping of Lemmy instances usually triples traffic. While some sites can cope with this, this means a three-fold increase in hosting costs in order to essentially fuel investment portfolios.
AI scrapers will already use as much energy as is available, so making them use more per site means fewer sites being scraped, not more total energy used.
And this is not DDoS: the objective of scrapers is to get the data, not to bring the site down, so while the server must reply to all requests, the clients can’t get the data out without doing more work than the server.
AI does not triple traffic. It’s a completely irrational statement to make.
There’s a very limited number of companies training big LLM models, and these companies train a model a few times per year. I would bet that the number of requests per year for a given resource by an AI scrapper is in the dozens at most.
Using as much energy as available per scrapping doesn’t even make physical sense. What does that sentence even mean?
You’re right. AI didn’t just triple the traffic to my tiny archive’s site. It way more than tripled it. After implementing Anubis, we went from 3000 ‘unique’ visitors down to 20 in a half-day. Twenty is a much more expected number for a small college archive in the summer. That’s before I did any fine-tuning to Anubis, just the default settings.
I was getting constant outage reports. Now I’m not.
For us, it’s not about protecting our IP. We want folks to be able to find our information. That’s why we write finding aids, scan it, accession it. But allowing bots to siphon it all up inefficiently was denying everyone access to it.
And if you think bots aren’t inefficient, explain why Facebook requests my robots.txt 10 times a second.
How do you know the requests that went away were from AI companies and not for some other purpose?
Timing and request patterns. The increase in traffic coincided with the increase in AI in the marketplace. Before, we’d get hit by bots in waves and we’d just suck it up for a day. Now it’s constant. The request patterns are deep, deep Solr requests, with far more filters than any human would ever use. These are expensive requests, and the results aren’t any more informative than just scooping up the nicely formatted EAD/XML finding aids we provide.
And, TBH, I don’t care if it’s AI. I care that it’s rude. If the bots respected robots.txt then I’d be fine with them. They don’t and they break stuff for actual researchers.
Does it matter what the purpose was? It was still causing them issues hosting their site.
Multiple testimonials from people who host sites say that AI scraping does triple their traffic. Multiple Lemmy instances also support this claim.
You obviously don’t know much about hosting a public server. Try dozens of requests per second.
There is a booming startup industry all over the world training AI, and scraping data to sell to companies training AI. It’s not just Microsoft, Facebook, and Twitter doing it, but also Chinese companies trying to compete, as well as companies not developing public models but models for internal use. They all use public cloud IPs, so the traffic is coming from all over, incessantly.
It means that when Microsoft buys a server for scraping, they are going to run it 24/7, with the CPU and network maxed out at maximum power use, to get as much data as they can. If the server can scrape 100 sites per minute, it will scrape 100 sites; if it can scrape 1000, it will scrape 1000; and if it can only do 10, it will do 10.
It will never stop scraping, as that would be the equivalent of shutting down a production line. Everyone always runs their scrapers as much as they can. Ironically, increasing the cost of scraping would result in less energy consumed in total, since it would force companies to work more “smart” and less “hard” at scraping and training AI.
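Rough numbers for the sake of that argument (all figures invented): if the fleet is maxed out either way, total energy is just power × time, so a PoW tax changes how many pages that energy buys, not how much energy gets burned.

```python
# Back-of-the-envelope comparison (made-up figures) for a scraping fleet that
# runs 24/7 at a fixed power draw, with and without a PoW challenge in the way.
FLEET_POWER_KW = 50.0   # assumed constant draw: the fleet is always maxed out
HOURS = 24.0

scenarios = {"no PoW": 1_000_000, "with PoW": 100_000}  # hypothetical pages per hour
energy_kwh = FLEET_POWER_KW * HOURS  # identical either way: 1200 kWh per day

for label, pages_per_hour in scenarios.items():
    pages = pages_per_hour * HOURS
    print(f"{label}: {energy_kwh:.0f} kWh total, {energy_kwh / pages * 1000:.2f} Wh per page")
# Same total energy; PoW just means far fewer pages scraped for it.
```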
Oh, and it’s S-C-R-A-P-I-N-G, not scrapping. It comes from the word “scrape”, meaning to remove the surface from an object using a sharp instrument, not “scrap”, which means to take something apart for its components.
I’m not a native English speaker, so I apologize if there’s bad English in my responses, and I’d be thankful for any corrections.
That being said, I do host public services, from before and after AI was a thing. And I have asked many of these people who claim “we are under AI bot attack” how they are able to tell whether a request is from an AI scrapper or just any other scrapper, and there was no satisfying answer.
Yeah, but it doesn’t matter what the objective of the scraper is; the only thing that matters is that it’s an automated client that is going to send mass requests your way. If it weren’t, Anubis would not be a problem for it.
The effect is the same: increased hosting costs and less access for legitimate clients. And sites want to defend against that.
That said, it is not mandatory; as a host, you can simply not use Anubis. Nobody is forcing you to. And as someone who regularly gets locked out of services because I use a VPN, Anubis is one of the least intrusive protection methods out there.
Free software
https://www.gnu.org/philosophy/free-sw.en.html
Open source
https://en.wikipedia.org/wiki/The_Open_Source_Definition
No discrimination against fields of endeavor, like commercial use.
You are removing the terms “software” and “source” from that. The code is freely available, and to be open source it has to be usable for whatever purpose.
As an aside, it’s frequently used by smaller sites to prevent overwhelming scraping that could take the site down, which has become far more rampant recently due to AI bots.
I’m not saying it’s not open source or free. I’m saying that it does not contribute to making the web free and open. It really only contributes to making everyone waste more energy surfing the web.
The web is already too heavy; we do NOT need PoW added on top of that.
I don’t think even a Raspberry Pi 2 would go down over a web scrap. And Anubis cannot protect from a proper DDoS, so…
That absolutely depends on what software the server is running and whether there’s proper caching involved. If some PoW has to be run to scrape one page, it shouldn’t be too much of an issue, as opposed to bots just blindly following and ingesting every link.
Additionally, you can choose to allow “good bots” like the Internet Archive, and they’re currently working on a list of “good bots”:
https://github.com/TecharoHQ/anubis/blob/main/docs/docs/admin/policies.mdx
AI companies ingesting data nonstop to train their models doesn’t make for an open and free internet, and will likely lead to the opposite, where users no longer even browse the web but trust AI responses that may be hallucinated.
There are a small number of AI companies training full LLM models, and they usually do a few training runs per year. What most people see as “AI bots” is not actually that.
The influence of AI over the net is another topic. But Anubis isn’t doing anything about that either, as it just makes the AI bots waste more energy getting the data, or at most keeps data under “Anubis protection” out of the training dataset. The AI will still be there.
Am I on the list of “good bots”? Sometimes I scrap websites for price tracking or change tracking. If I see a website running malware on my end, I would most likely just block that site: one legitimate user less.
That’s outdated info. Yes, not a lot of scraping is really necessary for training. But LLMs are currently often coupled with web search to improve results.
So, for example, if you ask ChatGPT to find a specific product for you, the result doesn’t come from the model. Instead it does a web search, loads the results, summarizes them, and returns the summary plus the links. This is a time-critical operation, since the user is waiting for the results. It’s also a bad deal for the site being scraped in many situations (mostly when looking for info, not for products), since the user might be satisfied with the summary and never click through to the source.
So if you can delay scraping like that by a few seconds, that’s quite significant.
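A sketch of that flow (my own stand-in names, not OpenAI’s actual pipeline) shows why a few seconds per page is enough: anything that misses the per-page deadline simply never makes it into the summary the user sees.

```python
# Hypothetical "answer with web search" flow: fetch the search results in
# parallel, drop anything that is too slow, summarize whatever survived.
import concurrent.futures as cf
from urllib.request import urlopen

PER_PAGE_DEADLINE_S = 3.0  # assumed budget; the user is waiting on the answer

def fetch(url: str) -> str:
    # Stand-in for the scraper's page fetch; a PoW challenge adds its cost right here.
    return urlopen(url, timeout=PER_PAGE_DEADLINE_S).read().decode(errors="replace")

def answer(query: str, urls: list[str]) -> str:
    pages = []
    with cf.ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut, url in futures.items():
            try:
                # Slow or challenge-gated pages are dropped from the results, not waited for.
                pages.append((url, fut.result(timeout=PER_PAGE_DEADLINE_S)))
            except Exception:
                continue
    # Stand-in for the model call that writes the summary from what survived.
    return f"{query}: summarized {len(pages)} of {len(urls)} results"

print(answer("example query", ["https://example.org/", "https://example.com/"]))
```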
I (and A LOT of lemmings) have already had enough of AI. We DON’T need AI-everything. So we block it, or make it harder for AI to be trained on our stuff. We didn’t say “hey, please train your LLM on our data” anyway.
That’s legitimate.
But it’s not “open”, nor “free”.
Also, it’s a bit of a placebo. For instance, Lemmy is not an Anubis use case, as Lemmy can be legitimately scrapped by any agent through the federation system. And I don’t really know how Anubis would even work with the openness of the Lemmy API.