If your response to discussion is curses and insults, please also consider shutting down your instance while you’re at it.
This is exactly my point, though. Yes, all of these changes are easy and possible. But you have to know about them first. This is not a drop-in “protect everything without side effects” tool, as your initial post seems to suggest. For every app you put behind it, you need to take time to think over exactly what access is required by whom, when, and how:

- Does it use OAuth, RSS, .well-known, XML-RPC/pingbacks, RDF/SPARQL endpoints, etc.?
- Do some robots need to be allowed (for federation, discoverability, automated health checks, etc.)?
- Are there consumers of APIs provided by the app?
- Will file downloads come from downloaders that resume, or chunk across multiple connections at once?
- What is the profile of the humans you expect to be accessing the service? Are they using terminal browsers like lynx, disabling JavaScript and/or cookies for privacy, on a VPN, or on low-powered devices like Raspberry Pis or low-end Android tablets?
- What bots are you intending to block, and how do they behave? They may just be running headless Chrome and pass all your checks, or they may be zombie consumer machines that are part of a botnet.

As with anything in life, there are no magical shortcuts, and no way to say “block all the bad people I don’t like and allow the good people in” without first defining who the good people are and what you don’t like.
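The audit above can at least be started mechanically: before putting a challenge in front of an app, probe the endpoints that software (not humans) fetches and see which ones still answer. A minimal sketch — the path list and the `audit_urls` helper are my own illustration, not an exhaustive inventory, and `example.social` is a placeholder:

```python
from urllib.parse import urljoin

# Paths commonly fetched by automated clients; putting a JavaScript
# challenge in front of any of these can silently break federation,
# feeds, discovery, or logins.
MACHINE_PATHS = [
    "/robots.txt",
    "/favicon.ico",
    "/.well-known/webfinger",             # federation / identity discovery
    "/.well-known/openid-configuration",  # OAuth/OIDC clients
    "/feed",                              # common feed locations
    "/rss",
    "/atom.xml",
    "/xmlrpc.php",                        # pingbacks on WordPress-style apps
]

def audit_urls(base: str) -> list[str]:
    """Return the full URLs worth testing before enabling a challenge."""
    return [urljoin(base, p) for p in MACHINE_PATHS]
```

Feeding each of these URLs to curl with and without the challenge enabled quickly shows what broke.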
In your case, all you’ve effectively done is say “good people run JavaScript and allow cookies, bad people do not”, without really thinking through the implications of that. I suspect what you really mean is “I don’t need or want anyone but me accessing my personal lemmy instance”. So why not block lemmy-ui from every country but your own, or even restrict it to subnets belonging to the ISPs you use? That would be a lot easier for a personal instance. For a public instance like mine, though, the problem is much harder.
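The subnet restriction can live at the reverse proxy, but it is also simple at the application layer. A sketch using Python’s standard `ipaddress` module — the two networks here are documentation placeholders; you would substitute whatever ranges your own ISP actually allocates:

```python
import ipaddress

# Placeholder subnets (RFC 5737 / RFC 3849 documentation ranges);
# replace with the v4 and v6 ranges your ISP assigns you.
ALLOWED = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "2001:db8::/32")]

def is_allowed(client_ip: str) -> bool:
    """True if the client address falls inside any allowed subnet."""
    addr = ipaddress.ip_address(client_ip)
    # Membership tests across IP versions simply return False,
    # so mixed v4/v6 allowlists are safe here.
    return any(addr in net for net in ALLOWED)
```

Wire `is_allowed` into whatever middleware fronts lemmy-ui and everything outside your ISP’s ranges gets a 403 with no JavaScript challenge involved.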
Good to know. But most RSS readers already pretend to be browsers, because otherwise many publications with misconfigured reverse proxies will block them from accessing the RSS feed. cbc.ca is a good example of this. Because deploying a web firewall is neither easy nor trivial unless you know exactly who needs to access what, when, and why. Most people, in my experience, do not.
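That workaround is easy to reproduce yourself: a feed fetch that gets blocked with a default client User-Agent will often succeed once the request carries browser-like headers. A sketch with `urllib` — the header values are illustrative, and every real reader ships its own:

```python
from urllib.request import Request, urlopen

# Illustrative browser-like headers; any mainstream UA string behaves the same.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) "
        "Gecko/20100101 Firefox/128.0"
    ),
    "Accept": "application/rss+xml, application/atom+xml, text/xml;q=0.9, */*;q=0.8",
}

def fetch_feed(url: str) -> bytes:
    """Fetch a feed while presenting browser-like headers to the server."""
    return urlopen(Request(url, headers=BROWSER_HEADERS), timeout=10).read()
```

Which is exactly why “block anything that doesn’t look like a browser” filters out so few of the clients you actually wanted to stop.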
In brief testing, I get challenges before trying to load robots.txt on hosts running Anubis. I also see reports of it blocking OAuth flows and access to things like .well-known.
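One quick way to spot this from the outside: robots.txt should come back as plain text with a 200 status. If a host instead answers with a 4xx or an HTML interstitial, crawlers never see the actual file. A heuristic sketch — the status/content-type rule here is my own rule of thumb for detecting an interposed challenge, not anything Anubis documents:

```python
def robots_txt_intercepted(status: int, content_type: str) -> bool:
    """Heuristic: a robots.txt response that is not a 200, or that is not
    plain text (e.g. an HTML challenge page), was probably intercepted
    before reaching the real file."""
    if status != 200:
        return True
    return not content_type.lower().startswith("text/plain")
```

Run against a response from `curl -i https://host/robots.txt`, this flags both outright blocks and challenge pages served in place of the file.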
And what about RSS? Favicons? OAuth? robots.txt? There are lots and lots of things that need to be accessed by automated programs without user intervention. It is not trivial to determine what these things might be. For your personal instance, go nuts. But no public instance should be doing this.
What about folks on low spec Android phones? Or folks who browse with JavaScript off? Every solution to block AI will block some percentage of humans.


It never went down here in Ottawa for me. Based on the tiny sample size of my circle of friends, everyone using the gigahub for PPPoE login went down, and everyone using PPPoE passthrough did not. So I think it’s something to do with the ISP-provided router. Maybe they pushed bad firmware to everyone?
How is this different from cosmos-cloud.io? The feature list looks identical.