

This is extremely interesting. How many magazines and newspapers have been digitized in a way that lets you analyze them like that? This is a simple word-based analysis, but those texts could also be enriched with metadata, e.g. mentions of people could be marked with their Wikidata identifiers.
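Roughly what I mean, as a minimal Python sketch (the span offsets are real for this toy sentence, but the QIDs are quoted from memory and worth double-checking):

```python
from dataclasses import dataclass

@dataclass
class Mention:
    start: int        # character offset where the mention begins
    end: int          # character offset just past its end
    surface: str      # text as it appears in the source
    wikidata_id: str  # Wikidata QID of the entity

text = "Tolstoy finished Anna Karenina in 1877."
annotations = [
    Mention(0, 7, "Tolstoy", "Q7243"),            # Leo Tolstoy, the person
    Mention(17, 30, "Anna Karenina", "Q147787"),  # the novel
]

for m in annotations:
    assert text[m.start:m.end] == m.surface
    print(f"{m.surface} -> https://www.wikidata.org/wiki/{m.wikidata_id}")
```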
Some software solutions exist. For example, War and Peace by Tolstoy can be downloaded with metadata: IDs are assigned to all the characters, and when one character says something to another, it is marked as “x speaks to y”, so you can run community detection algorithms on that data. I think the paper mentioned some proprietary software. I suspect detecting who speaks to whom is even harder.
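As a toy sketch of that last step, assuming the “x speaks to y” records have already been extracted somehow, here is what community detection could look like with networkx (the character pairs below are made up, not taken from any real annotated edition):

```python
import networkx as nx

# Hypothetical "x speaks to y" records extracted from a novel.
interactions = [
    ("Pierre", "Natasha"), ("Pierre", "Andrei"),
    ("Natasha", "Andrei"), ("Anatole", "Dolokhov"),
    ("Dolokhov", "Pierre"), ("Anatole", "Natasha"),
]

# Build a weighted undirected graph: edge weight = number of exchanges.
G = nx.Graph()
for speaker, listener in interactions:
    if G.has_edge(speaker, listener):
        G[speaker][listener]["weight"] += 1
    else:
        G.add_edge(speaker, listener, weight=1)

# Greedy modularity maximization is one standard community detection method.
communities = nx.community.greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(communities):
    print(f"community {i}: {sorted(group)}")
```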
Also, some form of crowdsourcing should probably be possible. At the very least, collecting scans is already possible on Wikisource and Wikimedia Commons.
AI language models should probably be pretty good at resolving linguistic ambiguities.
I dream of a time when a report like the one in the OP is an hour or two of work, because the data will already have been collected and cleaned.