• grysbok@lemmy.sdf.org · +14 · 1 day ago

    Timing and request patterns. The increase in traffic coincided with the rise of AI in the marketplace. Before, we’d get hit by bots in waves and we’d just suck it up for a day. Now it’s constant. The request patterns are deep, deep Solr queries, with far more filters than any human would ever use. These are expensive requests, and the results aren’t any more informative than just scooping up the nicely formatted EAD/XML finding aids we provide.
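
    For a sense of the shape, here’s a made-up example of the kind of query we see (hypothetical fields, wrapped for readability, not copied from our logs): a pile of fq filters plus deep paging, which Solr has to evaluate and sort on every single request.

    ```
    GET /solr/select?q=*:*
        &fq=repository:"..."&fq=collection:"..."&fq=date_range:[1890 TO 1910]
        &fq=subject:"..."&fq=format:"..."&fq=language:"..."
        &rows=100&start=98400&sort=date asc
    ```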

    And, TBH, I don’t care if it’s AI. I care that it’s rude. If the bots respected robots.txt then I’d be fine with them. They don’t and they break stuff for actual researchers.

    • daniskarma@lemmy.dbzer0.com · +1 / -10 · 1 day ago

      I mean, the number of pirates correlates with global temperature. Correlation doesn’t mean causation.

      The rest of the indicators would also match any archiving bot, or any bot in search of big data. We must remember that big data is used for much more than AI. At the end of the day scraping is cheap, but very few companies in the world have access to the processing power to train on that amount of data. That’s why it seems so illogical to me.

      How many LLMs that are the result of a full training run do we see per year? Ten? Twenty? Even if they update and retrain often, that’s not compatible with the volume of requests people are attributing to AI scraping, the kind that would put services at risk of DoS. Especially since I would think that no AI company would try to scrape the same data twice.

      I have also experienced an increase in bot requests on my host, but I think that’s just a result of the internet getting bigger: more people using it, with more diverse intentions, some ill, some not. I’ve also seen a big increase in probing and attack attempts in general, and I don’t think it’s OpenAI trying some outdated Apache vulnerability on my server. The internet is just a bigger sea with more fish in it.

      • grysbok@lemmy.sdf.org · +8 · 1 day ago

        I just looked at my log for this morning. 23% of my total requests were from the user agent GoogleOther. Other visitors include GPTBot, SemanticScholarBot, and Turnitin. Those are the crawlers that are still trying after I’ve had Anubis on the site for over a month. It was much, much worse before, when they could actually crawl the site instead of being blocked.

        That doesn’t include the bots that lie about being bots. Looking back at an older screenshot of our monitoring (I don’t have the logs themselves anymore), I seriously doubt I had 43,000 unique visitors using Windows per day in March.
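
        If anyone wants to do the same kind of breakdown on their own logs, here’s a rough sketch. It assumes a combined-format access log with the user agent as the last quoted field; the filename and the list of crawler names are placeholders to adapt to your own setup.

        ```python
        import re
        from collections import Counter

        # Combined log format ends with: "referer" "user-agent"
        UA_AT_END = re.compile(r'"([^"]*)"\s*$')

        counts = Counter()
        total = 0
        with open("access.log") as log:  # placeholder path
            for line in log:
                match = UA_AT_END.search(line)
                if not match:
                    continue
                total += 1
                ua = match.group(1)
                # Collapse full UA strings down to the crawler names of interest
                for bot in ("GoogleOther", "GPTBot", "SemanticScholarBot", "Turnitin"):
                    if bot in ua:
                        counts[bot] += 1
                        break
                else:
                    counts["everything else"] += 1

        for name, n in counts.most_common():
            print(f"{name}: {n} requests ({100 * n / max(total, 1):.1f}%)")
        ```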

        • daniskarma@lemmy.dbzer0.com · +1 / -3 · 1 day ago

          Why would they request the same data so many times a day if the objective was AI model training? It makes zero sense.

          Also, Google’s bots obey robots.txt, so they are easy to manage.
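
          To be clear about what “manage” means there: robots.txt is just a plain-text file of directives the crawler is trusted to honour, something like the sketch below (paths made up; note that Crawl-delay is a non-standard directive Google itself ignores).

          ```
          # robots.txt - illustrative only
          User-agent: GoogleOther
          Disallow: /search/

          User-agent: GPTBot
          Disallow: /

          User-agent: *
          Crawl-delay: 10
          Disallow: /search/
          ```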

          There may be tons of reasons Google is crawling your website, from ad research to any other kind of research. The only AI-related use I can think of is RAG, but that would take some user requests away, because if the user got the info from Google’s AI answer they would not visit the website at all. I suppose that would suck for the website owner, but it wouldn’t drastically increase the number of requests.

          But for training I don’t see it; there’s no need at all to keep constantly scraping the same site for model training.

          • grysbok@lemmy.sdf.org · +7 · 1 day ago

            Like I said, [edit: at one point] Facebook requested my robots.txt multiple times a second. You’ve not convinced me that bot writers care about efficiency.

            [edit: they’ve since stopped, possibly because now I give a 404 to anything claiming to be from facebook]
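
            (Roughly along these lines if you’re on nginx; the exact user-agent strings vary and this is a sketch rather than my literal config:)

            ```
            # Anything that identifies itself as Facebook's crawler gets a 404.
            if ($http_user_agent ~* "facebookexternalhit|facebot|meta-externalagent") {
                return 404;
            }
            ```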

            • The Quuuuuill@slrpnk.net · +3 · 1 day ago

              > You’ve not convinced me that bot writers care about efficiency.

              And why should bot writers care about efficiency when what they really care about is time? They’ll burn all your resources without regard, simply because they’re not the ones paying.

              • grysbok@lemmy.sdf.org · +4 · 1 day ago

                Yep, they’ll just burn taxpayer resources (me and my poor servers) because it’s not like they pay taxes anyway (assuming they are either a corporation or not based in the same locality as I am).

                There’s only one of me, and if I’m working on keeping the servers bare-minimum functional today, I’m not working on making something more awesome for tomorrow. “Linux sysadmin” is only supposed to be up to 30% of my job.

                • grysbok@lemmy.sdf.org · +3 · 1 day ago

                  I mean, I enjoy Linux sysadmining, but fighting bots takes time, experimentation, and research, and there’s other stuff I should be doing. For example, accessibility updates to our websites. But accessibility doesn’t matter a lick if you can’t access the website at all due to timeouts.