• rtxn@lemmy.world
    2 days ago

    Correct. Anubis’ goal is to decrease the web traffic that hits the server, not to prevent scraping altogether. I should also clarify that this works because it costs the scrapers time with each request, not because it bogs down the CPU.

    • Xylight@lemdro.id
      1 day ago

      Why not just use a setTimeout or something instead, so that it doesn’t nuke the CPU of old devices?

      • rtxn@lemmy.world
        1 day ago

        Crawlers don’t have to follow conventions or specifications. If one ships a setTimeout implementation that doesn’t actually wait the specified amount of time and simply executes the callback immediately, it defeats the system. Proof-of-work ensures the time cost can’t be skipped: the delay comes from computation the client must genuinely perform, and there is no shortcut around doing the work.

        Anubis is an emergency solution against the flood of scrapers deployed by massive AI companies. Everybody wishes it wasn’t necessary.