The following is an automatically-generated compilation of all talk pages for the Signpost issue dated 2024-10-19. For general Signpost discussion, see Wikipedia talk:Signpost.
It seems like The Signpost has swapped the "Book review" and "Humour" sections in this issue. Please clean up the mess. After the witch-hunt of another author and novel, this coverage falls into the other ditch, apologetically promoting a more wanted narrative. Andrez1 (talk) 10:59, 20 October 2024 (UTC)
I'm always surprised when somebody suggests that book reviewers should not have an opinion. Every review of a novel that I've ever seen has the reviewer's opinion in there somewhere (with the possible exception of some one- or two-paragraph "reviews" that I vaguely remember). FWIW my opinion is clearly stated as opinion. A major overlap between my opinion here and in my review of the short story "Hoffman Wobble" is that I think both authors seem to know what they are writing about. I do like The Editors better, however, since I didn't have to read it five times to figure out what he is saying.
Of course, for all the differences in the three reviews here, they all give a generally positive view of the novel. Smallbones(smalltalk) 14:30, 20 October 2024 (UTC)
You are free to have your opinion. I have not denied you that. My opinion on yours can be seen above, and in the "Humour" section of this issue of this publication. As in "launch outrageous promotional campaigns." Andrez1 (talk) 15:06, 20 October 2024 (UTC)
Congratulations to the participants and coordinators of the project! It was great seeing these under-represented lists come through FLC, and I'm sure coordinators of the other review processes felt the same. Hope to see it again next time! --PresN 13:56, 19 October 2024 (UTC)
The DCWC coordinators will be looking at this talk page, so please feel free to offer comments or feedback to bring to our attention, or add on at the contest talk page! —TechnoSquirrel69 (sigh) 14:20, 19 October 2024 (UTC)
Man, how could I miss this contest entirely? Many thanks to sawyer777, Ixtal and TechnoSquirrel69 for bringing this idea to life: I hope I'll be able to join in next year... because there will be a new edition, right? : D Oltrepier (talk) 17:35, 21 October 2024 (UTC)
There will indeed be — sometime next year, though we haven't decided on a date. Feel free to join the mailing list if you'd like a notification for it! —TechnoSquirrel69 (sigh) 17:39, 21 October 2024 (UTC)
Underdeveloped world? Really? “Underdeveloped” in what?
Using this term is derogatory and perpetuates negative stereotypes. It implies a hierarchy where certain countries are inherently superior to others, which is simply not accurate. The fact that the IMF uses these backward, colonial legacy terminologies doesn’t mean we should replicate them. So-called “underdeveloped” countries have rich cultures, strong communities, and innovative solutions to local challenges -- lessons that “developed” countries could learn from if they were humble enough. What happened to using acceptable terms like “Emerging Economies” and “Majority World”? --Masssly (talk) 15:32, 19 October 2024 (UTC)
i'm using the term in the same way the author of How Europe Underdeveloped Africa does. i do not appreciate the insinuation that i believe that "certain countries are inherently superior to others" or that i don't believe that exploited countries have rich cultures - that is precisely the opposite of the entire point of this contest which i co-coordinated. ... sawyer * he/they * talk 15:34, 19 October 2024 (UTC)
I don't believe that there was any bad faith, nor an insinuation that certain countries are inherently superior to others; but for now, I've changed the wording to 'emerging economies' and 'Global South'. I understand this is discussed in the body, but basing the use of 'underdeveloped' on a book written in 1970s does not reflect the language that should be used in the 2020s. The change I did is intended to be implemented as a temporary fix, and anyone is free to further change the wording. Svampesky (talk) 16:35, 19 October 2024 (UTC)
"global south" is a perfectly fine alternative for the title - indeed, it's what i initially suggested for the contest's name. however, How Europe Underdeveloped Africa is not merely a book from the 1970s, it's a foundational text still referenced today by scholars of colonialism and read in college classrooms. i used my language deliberately, to convey a specific meaning: that most of the countries that this contest focused on (& indeed the vast majority of articles improved during the contest were relating to Africa or Asia) have been underdeveloped by external colonial and capitalist forces, not through their own fault. however, apparently my meaning is not obvious to everyone. i have slightly tweaked your rewording to re-add "postcolonial" as it is, in my opinion, an essential aspect of the division which forms the basis of this contest. ... sawyer * he/they * talk 16:51, 19 October 2024 (UTC)
I agree that "Global South" is okay to use here. Just be aware that (like any such broad geographic term) it is also not immune to Wikimedians endlessly discussing its connotations and precise definition. In particular, there is a rather hare-brained notion (seen just a few days ago on Foundation-l) that the term "Global South" was imposed by "Westerners" or is somehow super racist and colonialist, whereas in reality it had been promoted to its current wider usage by activists from, well, the Global South who were motivated by pretty much the kind of concerns Masssly outlines above about other terms that refer to development status (or imply a ranking, like Third World).
This Wikimedia affiliate - founded and led by non-white people from the Global South - continues to use it prominently: https://whoseknowledge.org/about-us/
i'm aware of the discourse and agree with your disagreement. personally i prefer the term "global south" but as i say in the article, there are practical reasons we ended up going with the IMF's obviously problematic "development" rankings - rather ironic that it's the IMF, isn't it? i also disagree with the suggestions for alternative names Masssly has made - "emerging" is, in my opinion, euphemistic and inaccurate, and "majority world" is incredibly vague. anyways, i'm fine with the changes that have been made and acknowledge that i could have used clearer terminology, but i absolutely stand by my above statements and reasoning. in the same vein, i would have a lot more patience for this discussion if it didn't start out with the insinuations made in the initial comment. ... sawyer * he/they * talk 18:44, 19 October 2024 (UTC)
On the ridiculous side, we might want to change the image I suggested for the landing (or table of contents) page. It is a painting of an old sailing ship grounded or iced in in Greenland, with the title including "global south" superimposed. Or just leave it - it's bound to catch the eye of the most discerning readers. It should be a warning to inexperienced Signposters not to go making major edits outside of the usual copyediting times. This should have been discussed before the deadline so that the article submitters could help choose the proper title, and I think the title of the contest would pretty well have to suffice here - we can't change that title.
The choice of name for the general category has long been awkward. Emerging economies, popular in the 1990s, was awkward even then. How did anyone know which countries' economies would actually emerge? And what to do with those countries that obviously weren't emerging? Meanwhile China and India have emerged by many measures but were still included in this contest.
1st, 2nd, and 3rd worlds was more of a political grouping.
Under-developed made me think, in a climate change frame of mind, that the other group was "over-developed", i.e. burning too much fossil fuel. Maybe "lesser developed" is a middle-of-the-superhighway term. In any case, we shouldn't be making any major changes now. Check with the editor-in-chief if you're tempted. Smallbones(smalltalk) 22:17, 19 October 2024 (UTC)
i like the image. i think it's a bit subtly funny, and a nod towards one of the FAs (Qalaherriaq) which was submitted for the contest. i also agree that it should've been discussed before publication, but so it goes. regarding the rest of your comment, i agree. i think one of the benefits of the term "underdeveloped" is that it prompts questions: underdeveloped why, by whom, and compared to what? climate change is a good point of comparison here. i think we lose that with euphemistic terms like "emerging" which are very inaccurate to the material realities of many of the countries they're applied to—i'd hardly call the economies of e.g. Lebanon, Palestine, Myanmar, DR Congo, Sudan, or Niger "emerging". ... sawyer * he/they * talk 22:31, 19 October 2024 (UTC)
Whoops, looks like we've got a dark-mode issue here as well — the box numbers are rendering in white-on-white, since they were never given an explicit foreground color. I'll look at fixing up the TemplateStyles. FeRDNYC (talk) 15:25, 26 October 2024 (UTC)
Fixed, here. Not on signpost.news, as I haven't copied the updated CSS to the external.css file... but signpost.news doesn't have a dark-mode theme anyway, so it's not really needed there. FeRDNYC (talk) 15:38, 26 October 2024 (UTC)
Ah, The Signpost — where neutrality goes to die, and product placement thrives. Impressive consistency, if nothing else. Ktkvtsh (talk) 16:58, 19 October 2024 (UTC)
>>Note: This column contains European humour.[citation needed]Carrite (talk) 12:09, 20 October 2024 (UTC)
The Jewish Journal article on the WP page on Israeli apartheid is quite ridiculous, to be honest. Chill, not everyone who portrays Israel harshly is some sort of Hamas supporter. --Firestar464 (talk) 18:30, 19 October 2024 (UTC)
The WMF has dedicated a sizeable portion of its donations to activist programs unrelated to Wikipedia and its sister projects, and it isn't helpful to lump it in with the "Wokepedia" nonsense. See meta:Knowledge Equity Fund and the 2023 Signpost article on the issue. Thebiguglyalien (talk) 03:55, 22 October 2024 (UTC)
Since publication there was one nomination withdrawn, and the discussion period is proceeding with 34 candidates. ☆ Bri (talk) 16:29, 22 October 2024 (UTC)
Technically we began the discussion period with 35 candidates, then 1 withdrew. There was also a withdrawal or two during the SecurePoll setup period. –Novem Linguae (talk) 19:32, 22 October 2024 (UTC)
Another nomination withdrawn, now 33 candidates. ☆ Bri (talk) 22:00, 23 October 2024 (UTC)
And another, now 32 candidates just a few minutes before voting may begin. ☆ Bri (talk) 23:47, 23 October 2024 (UTC)
There's another 24 hours of discussion phase, I think. –Novem Linguae (talk) 00:34, 24 October 2024 (UTC)
Oof, you are right. Voting starts 25 October. ☆ Bri (talk) 02:46, 24 October 2024 (UTC)
Knowledge Equity Fund
I oppose the Knowledge Equity Fund with considerable passion. While these projects are admirable, they should not be getting money from the WMF. The Foundation's job is to keep the servers running, hire lawyers, and fix big bugs, not to be a philanthropist, especially given that the community wasn't given input. I would point the WMF to the second resolution of Wikipedia:Village_pump (WMF)/Archive 6#Grants to organizations unrelated to supporting Wikimedia Projects. Is there somewhere central I can complain to the WMF about this? Cremastra (u — c) 01:43, 20 October 2024 (UTC)
Thanks, that seems like the central spot for whingeing. Cremastra (u — c) 13:50, 20 October 2024 (UTC)
@Cremastra and Novem Linguae: The best place to share your views is in The Signpost. The Wikimedia community has many questions about the program. I am aware of these because we studied the program for previous Signpost coverage, but did not manage to produce the story we wanted. It would be helpful to have a full Signpost article on program outcomes to date. I can support a writer or journalists, but cannot take the lead on the whole story.
If you want to do a story, then here are some suggestions: Submit a question to community calls on Wednesday, October 23 (ESEAP friendly time: 10am UTC, Pacific friendly time: 4pm UTC); bring the answer back to The Signpost, and draft a basic outline of the program which contains the news of the answer to the question you asked in the context of third-round funding.
"We have therefore been encouraged to omit any identifying information in the specific pages we discuss". A commendable approach to ethics (even if, as noted, not perfect). Unlike some other cases I can think of... --Piotr Konieczny aka Prokonsul Piotrus| reply here 12:21, 19 October 2024 (UTC)
I would encourage you though to look beyond your personal experience (with a controversial paper whose central subject was the longtime impact of your and several other editors' activity on a particular historical topic area) and also consider the wider impact on open science practices here.
To be clear, my main problem with the statement quoted in the review is not that they e.g. leave out the specific user name of that editor who created five articles on English Wikipedia, detected by both tools as AI-generated, on contentious moments in Albanian history (btw, the paper goes further into the administrative actions taken against that user). I might have done the same. Rather, it is that they take this as an excuse not to adhere to the good practice (which has become more prevalent in much of quantitative Wikipedia research over the years) to publicly release the data that their paper's central conclusion is based on, which would include the output of the detectors for particular articles (without user names).
This not only prevents Wikipedians from using that data to improve Wikipedia (by reviewing and possibly deleting AI-generated Wikipedia content that the authors spent quite a bit of money on detecting - in the "Limitations" section, they describe their experiments as "costly"). It also makes it impossible for the community to discuss the performance of the AI detection method used by the paper in concrete examples (apart from those very few that were cherry-picked to be presented in the paper). After all, going back to the example of that paper from last year that (understandably) still seems very much on your mind, the fact that it had provided extensive concrete evidence for its claims across many specifically named articles and hundreds of footnotes was also what enabled you to dispute that evidence in lengthy rebuttals.
Regards, HaeB (talk) 17:39, 19 October 2024 (UTC) (Tilman)
While I concur that releasing data is a good practice we should encourage, I also believe we need to encourage the good practice of protecting the subjects studied. Here, in all honesty, I think the authors should have replaced stuff like "Albanian" with "Fooian" and obscured other content. That said, I understand that we have to weigh the good of the project and research against the good of a small number of people, and also, most likely most editors identifiable here would not have their real names connected to their accounts; but still, protecting research subjects is an important ethical consideration, and compromising it leads to a slippery slope. Ethical guidelines exist for good reasons, after all (and the fact that they are often ignored is not something that we should be proud of, as a society, IMHO). All I am saying is that the authors tried to do it at least a bit more than in the case we are both familiar with, and that's a plus. Piotr Konieczny aka Prokonsul Piotrus| reply here 02:50, 20 October 2024 (UTC)
Again, I can sympathize with concerns about naming specific editors in a paper. (My general view is that like in the news media and in Wikipedia articles, decisions about highlighting such already public information should be justifiable based on the relevance of that information for the reader.)
Where we really seem to disagree is the claim that it is unethical to publish information like "Algorithm X outputs the score Y for article Z" (especially when, as in this case, tool X is already publicly available in some form, or when, as in this case, this information is clearly relevant and useful for the work that Wikipedians do). How do you feel about WMF-hosted tools like ORES offering public APIs for that purpose? Should those be taken down?
@HaeB I don't think this is where we disagree strongly. I think such tools are mostly ok. Naming editors or making them identifiable on wiki by their username, while it has some ethical issues, is not in the same league (i.e. not that worrisome, or even particularly problematic) when compared to outing (or disclosing the names of editors who outed themselves), particularly when said outing has malicious intent (malicious to some, at least, since for others, "the end justifies the means"). My point is that the authors of this paper tried to abide by the ethical standards, and did so in a way I consider passable, if not award-winning. Piotr Konieczny aka Prokonsul Piotrus| reply here 04:34, 29 October 2024 (UTC)
Regarding "The Rise of AI-Generated Content in Wikipedia" link, which randomly sampled "2,909 English Wikipedia articles created in August 2024", I am puzzled about several things:
Why aren't the data pools (table, top of page 2) exactly the same size - say, 2,500 each, since these data pools are samples of larger data sets?
The authors say that their August 2024 sample came from Special:NewPages, which - of course - doesn't include deleted pages. But it makes a big difference if the authors did real-time collection of data during August, or took a snapshot in (say) early September, and this isn't specified. [Footnote 1, the link to "data collection and evaluation code", might provide the answer, but it returns a 404 error message.]
Footnote 2 provides the source of the article's set of Wikipedia pages collected before March 2022, which are (from that source) datasets of "cleaned articles", stripping out "markdown and unwanted sections (references, etc.)". But the table at the top of page 3 includes "Footnotes per sentence" and "Outgoing links per word" - where did that information come from?
And speaking of that table, perhaps it's just me, but I find it extraordinarily hard to believe that new articles in August 2024 (with the sample limited to those over 100 words) contained, on average, 1.77 outgoing links per word. -- John Broughton(♫♫) 18:14, 19 October 2024 (UTC)
on average, 1.77 outgoing links per word - indeed. This nonsensical claim is one of the things that makes one wonder about the peer review process used by the "NLP for Wikipedia Workshop". (It also doesn't seem to be a mere typo, as the "per word" is reiterated in that table's caption and in different phrasing in footnote 4: We normalize by [...] word count.) Fortunately, for this secondary result the authors have actually released some partial data, providing the raw number of links per word calculated for each article (although again while withholding the information on how each article was classified by the two detectors, see also discussion above). It looks like at least for English, the numbers there are all below 1, as they should be. So the error must have happened later in the process. Again this also illustrates the value of adhering to open science practices by publishing replication data.
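The normalization point above can be illustrated with a toy calculation. This is only a sketch: the `links_per_word` helper, its regexes, and the sample string are made up for illustration and are not the paper's actual pipeline.

```python
import re

def links_per_word(wikitext: str) -> float:
    """Rough ratio of outgoing wikilinks to words in a page's wikitext."""
    links = re.findall(r"\[\[[^\]]+\]\]", wikitext)
    # Replace link markup with its display text before counting words,
    # so link targets aren't counted twice.
    plain = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", wikitext)
    words = plain.split()
    return len(links) / max(len(words), 1)

sample = "The [[Nile]] is a river in [[Africa]] flowing north."
ratio = links_per_word(sample)  # 2 links over 9 words, roughly 0.22
```

Under any per-word definition like this, the ratio cannot meaningfully exceed 1 (every word would have to be a link), so a reported average of 1.77 links per word points to a unit or aggregation error rather than a real property of the articles.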
Another problem about this particular table (which I had left out of the review as too detailed, but which doesn't inspire confidence either): In the text they claim that Table 2 shows how, compared to all articles created in August 2024, AI-generated ones use fewer references. But in the table itself, that is not true for one of the four listed languages: In Italian, that number was actually higher for "AI-Detected Articles". Now, perhaps one could still support the overall claim using something like a multilevel regression analysis on the underlying data. But the authors don't do that, similar to how they hand-wave their way through various other issues in the paper.
where did that information come from? - Note that Table 2 appears to refer only to articles created in August 2024, so the absence of links in the 2022 dataset would not be a problem here. But yes, one could ask why they didn't vet their conclusion that AI-generated [Wikipedia articles] use fewer references and are less integrated into the Wikipedia nexus by calculating the same metrics for their March 2022 comparison articles.
Why aren't the data pools (table, top of page 2) exactly the same size - I mean, they didn't specify what sampling method they used, so one can't expect the resulting samples to have exactly the same size. But yes, it seems one of many unexamined researcher degrees of freedom in this paper. E.g. why did English Wikipedia end up with the smallest sample in the August 2024 dataset and the second-smallest for the pre-March 2022 dataset? Did German, Italian and French Wikipedia have a higher number of new articles (of >=100 words) in August 2024 than English Wikipedia?
Footnote 1, the link to "data collection and evaluation code", might provide the answere, but it returns a 404 error message. Does it? The link [1] works for me right now. In an earlier draft of this review as posted here I had linked to [2], a link that afterwards turned 404 because one of the authors renamed the file from "recent_wiki_scraper.py" to "run_wiki_scrape.py" two days ago. The published version of the review uses a permalink (search for "scraping") which still works for me.
Regards, HaeB (talk) 01:51, 20 October 2024 (UTC) (Tilman)
There needs to be an RFC on the use of Artificial Intelligence formally consigning it to the dustbin and banning off those who use it. Even as we speak there are some in the Foundation who think it's a great idea to facilitate the use of AI so that drivebys find it easier to "contribute." Carrite (talk) 19:48, 19 October 2024 (UTC)
As already briefly mentioned in the review, such an RfC already happened, see our coverage in the Signpost: "Community rejects proposal to create policy about large language models". It's also worth noting that the use of Artificial Intelligence is a very broad term which includes things that have been widely accepted for many years, like ORES (which many editors including myself have used to revert thousands of vandalism edits), see e.g. Wikipedia:Artificial intelligence. Lastly, we need to keep in mind that AI-generated articles (as well as AI capabilities in general) are a moving target, with recent systems getting more reliable at generating Wikipedia-type articles than a simplistic ChatGPT prompt would achieve, see e.g. the previous "Recent research" issue: "Article-writing AI is less 'prone to reasoning errors (or hallucinations)' than human Wikipedia editors". Regards, HaeB (talk) 00:26, 20 October 2024 (UTC) (Tilman)
Only speaking for myself, but I would like to see more AI in terms of tools, both in terms of helping augment the power and reach and scope of existing admins to make up for their steep decline, and for use by content editors to help them check, verify, and prepare articles for creation and reviewing. This does not mean that I support AI tools that would write the articles, but could help editors check for errors and look for plagiarism. One thing I've been thinking about for a very long time is how most of our articles stand alone within their separate topics and disciplines, without showing how the subjects cross fields, and interact with other similar and not so similar ideas. One potential use of a future AI tool would be to help editors unify the collection of all knowledge and show how it all links together. Currently, our primitive category system attempts to do this, but on an almost imperceptible level that isn't expressed as content or as a visualization. How does all of this content link together? That's what I would like to see it used for, and then, if at all possible, create new knowledge from the unification of all the information. Right now, I can ask various different systems questions, but they don't seem to be able to give me an accurate or insightful answer into anything. As everyone already knows, the weakest link here is our search interface, which doesn't provide 1% of the potential answers that it could. Viriditas (talk) 00:55, 20 October 2024 (UTC)
At WikiConference North America, several senior WMF staff had a panel discussion with an invited outside expert about the use of AI on Wikipedia. The outside expert was quite concerned about the problems AI has with inventing seemingly-plausible-but-untrue facts and how this would impact article quality; the WMF staff, not so much... —Compassionate727(T·C) 00:01, 29 October 2024 (UTC)
Isn't GPTZero debunked as too inaccurate to use? –Novem Linguae (talk) 22:36, 19 October 2024 (UTC)
I was wondering the same thing. Viriditas (talk) 23:07, 19 October 2024 (UTC)
I think "debunked" is a bit too strong. But yes, there have long been concerns about its accuracy and false positive rates. I have myself advocated early on (February 2023) against relying on GPTZero and related tools for e.g. reviewing new pages, although WikiProject AI Cleanup just today weakened their previous "Automatic AI detectors like GPTZero are unreliable and should not be used" recommendation a little. It's also interesting that GPTZero themselves recently announced their goal [...] to move away from an all-or-nothing paradigm around AI writing towards a more nuanced one. An overall problem is that GenAI has only been getting better and (presumably) harder to detect, and will quite likely continue to do so for a while.
As mentioned in the review, the authors of the paper seem broadly aware of these problems, but insist that they can work around them for their purposes. And to be fair, for a statistical analysis about the overall frequency of AI-generated articles the concerns are a bit different than when (e.g.) deciding about whether to delete an individual article or sanction an individual editor. Still, my overall impression is that they are way too cavalier with dismissing such concerns in the paper (they are not even mentioned in its "Limitations" section, and their sole attempt to validate their approach against a ground truth has too many limitations, some of which I tried to indicate in this review).
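For what it's worth, the statistical point here (that detector error rates affect an aggregate frequency estimate differently than an individual deletion decision) can be sketched with the standard Rogan-Gladen prevalence correction. The numbers below are purely illustrative and are not taken from the paper.

```python
def corrected_prevalence(observed_rate: float, fpr: float, tpr: float) -> float:
    """Recover true prevalence from a classifier's observed positive rate,
    given its false-positive rate (fpr) and true-positive rate (tpr)."""
    # observed = prevalence * tpr + (1 - prevalence) * fpr, solved for prevalence
    return (observed_rate - fpr) / (tpr - fpr)

# If a detector flags 10% of articles while having a 5% false-positive rate
# and a 90% true-positive rate, the implied true share is only about 5.9%.
estimate = corrected_prevalence(0.10, 0.05, 0.90)
```

The correction only works if the error rates are themselves well estimated on a representative ground truth, which is exactly the validation step the review finds too limited in this paper.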
Regards, HaeB (talk) 00:47, 20 October 2024 (UTC) (Tilman)
I question the Wikipedia competency of anyone who refers to administrators as "moderators" as is done in this research paper. It's a fundamental misunderstanding of their role. Trainsandotherthings (talk) 16:27, 20 October 2024 (UTC)
@HaeB: I just wanted to thank you for the excellent job you've done on that lead story: it's a genuinely engaging analysis! Oltrepier (talk) 10:32, 21 October 2024 (UTC)