Oh, you’re one of those people. Enough said. (edit) By the way, Anubis’ author seems to be a big fan of machine learning and AI.
(edit 2 just because I’m extra cross that you don’t seem to understand this part)
Do you know what a web crawler does when a process finishes grabbing the response from the web server? Do you think it takes a little break to conserve energy and let all the other remaining processes do their thing? No, it spawns another bloody process to scrape the next hyperlink.
Some websites being under DDoS attack ≠ all sites being under constant DDoS attack, nor does it mean they can't exist without this kind of protection.
First, there's a logical fallacy in there. Being widely used does not mean it's useful. Many companies use AI for some task; does that make AI useful? No.
The logic still stands: all Anubis can do against a DDoS is raise the barrier a little before the site goes down. That's called mitigation, not protection. If you are the target of a real DDoS, that mitigation is not going to do much, and your site is going down regardless.
If a request is taking a full minute of user CPU time, it’s one hell of a mitigation, and anybody who’s not a major corporation or government isn’t going to shrug it off.
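For reference, Anubis-style proof-of-work works along these lines (a toy sketch, not Anubis's actual protocol; the `solve` function and its parameters are invented for illustration). Each extra bit of difficulty doubles the expected client CPU time, which is the knob you'd turn to get from milliseconds toward a full minute:

```python
import hashlib
import itertools

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) starts
    with `difficulty_bits` zero bits. Expected tries: 2**difficulty_bits,
    so each extra bit doubles the client's CPU cost."""
    shift = 256 - difficulty_bits
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") >> shift == 0:
            return nonce

# A low difficulty finishes almost instantly; a real deployment would
# calibrate the difficulty to the CPU time it wants the client to burn.
nonce = solve(b"session-token", 12)
```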
That's precisely my point. It fits a very narrow risk profile: people who are going to be DDoSed, but not by a big actor.
It's not the most common risk profile. Usually DDoS attacks are either very heavy or don't happen at all. These "half-throttle" DDoS attacks are not really common.
I think that's why, when I read about Anubis, it's never in the context of DDoS protection. It's always in the context of "let's fuck AI", like this very comment thread.
There’s heavy, and then there’s heavy. I don’t have any experience dealing with threats like this myself, so I can’t comment on what’s most common, but we’re talking about potentially millions of times more resources for the attacker than the defender here.
There is a lot of AI hype and AI anti-hype right now, that’s true.
I don't think it's millions. Take into account that a DDoS attacker is not going to execute JavaScript code, at least not a competent one, so they are not going to run the PoW.
In fact, the unsolicited and unannounced PoW does not provide more protection against DDoS than a captcha would.
The mitigation comes from the server answering with a smaller, cheaper response, so the number of requests needed to saturate the service must increase. How much? That depends on how demanding the "real" website is in comparison. I doubt the answer is millions. And they would achieve the exact same result with a captcha, without running literal malware on the clients.
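Back-of-envelope version of that argument (the per-request costs here are invented numbers, purely to show the shape of the calculation):

```python
# Hypothetical server-side cost per request, in milliseconds of CPU time.
dynamic_page_ms = 50.0   # assumed cost of rendering the real, expensive page
challenge_ms = 0.5       # assumed cost of serving the small static challenge

# A flooder that never executes JavaScript only ever receives the cheap
# challenge page, so saturating the server now takes this many times
# more requests than hitting the real page directly:
amplification = dynamic_page_ms / challenge_ms
print(amplification)  # 100.0 with these assumptions: a big factor, but not millions
```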
The one service I regularly see using something like this is Invidious. I can totally get how even a bit of bot traffic would make the host’s life really hard.
It’s true a captcha would achieve something similar, if we assume a captcha-solving AI has a certain minimum cost. That means typical users will have to do a lot more work, though, which is why creepy things like Cloudflare have become popular, and I’m not sure what the advantages are.
Cloudflare has a clear advantage in that it can move the front door away from the host and redistribute attacks across thousands of servers. It's also able to analyze attacks from its position of seeing half the internet, so it can develop and deploy very effective blocklists.
I'm the first to say I'm not a fan of Cloudflare, though. So I use CrowdSec, which builds community blocklists based on user statistics.
PoW as bot detection is not new. It has been around for ages, but it has never been popular, because there have always been better ways to achieve the same or better results. A captcha may be more intrusive for the user, but it can actually deflect bots completely (even the best AI could be unable to solve a well-made captcha), while PoW only introduces an energy penalty that is expected to act as a deterrent.
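The asymmetry that makes PoW attractive is that checking an answer costs the server one hash, no matter how many the client had to try; but by the same token it can't deflect anyone, it can only charge admission. Roughly (a toy scheme, not Anubis's real wire format):

```python
import hashlib

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """One SHA-256 call: cheap for the server regardless of how much CPU
    the client burned finding the nonce. Any client willing to pay that
    cost gets through -- PoW meters access, it doesn't deny it."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0
```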
My bet is that Invidious is under constant attack from Google, for obvious reasons. It's a hard situation to be in overall. It's true that it's a very particular use case: a lot of both users and bots interested in its content, very resource-heavy content, and it's the target of one of the biggest corporations in the world. I suppose Anubis could act as mitigation there, at the cost of being less user-friendly. And if YouTube goes and does the same, it would really make for a shitty experience.
It is literally happening. https://www.youtube.com/watch?v=cQk2mPcAAWo https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
It’s being used by some little-known entities like the LKML, FreeBSD, SourceHut, UNESCO, and the fucking UN, so I’m assuming it probably works well enough. https://policytoolbox.iiep.unesco.org/ https://xeiaso.net/notes/2025/anubis-works/
I do have firsthand experience with this. I have a client with a limited budget whose websites I'm considering putting behind Anubis, because they're getting hammered by AI scrapers. It comes in waves, too, so a site may randomly go down or slow to a crawl, which is really annoying because it's unpredictable.