


Wikimedia Research Newsletter

Vol: 14 • Issue: 09 • September 2024 [contribute] [archives]

Article-writing AI is less "prone to reasoning errors (or hallucinations)" than human Wikipedia editors

By: Tilman Bayer

"Wikicrow" AI less "prone to reasoning errors (or hallucinations)" than human Wikipedia editors when writing gene articles


A preprint titled "Language Agents Achieve Superhuman Synthesis of Scientific Knowledge"[1] introduces

"PaperQA2, a frontier language model agent optimized for improved factuality, [which] matches or exceeds subject matter expert performance on three realistic [research] literature research tasks. PaperQA2 writes cited, Wikipedia-style summaries of scientific topics that are significantly more accurate than existing, human-written Wikipedia articles."

It was published by "FutureHouse", a San Francisco-based nonprofit working on "Automating scientific discovery" (with a focus on biology). FutureHouse was launched last year with funding from former Google CEO Eric Schmidt (at the time, it was expected to spend about $20 million by the end of 2024). Generating Wikipedia-like articles about science topics is only one application of "PaperQA2, FutureHouse's scientific RAG [retrieval-augmented generation] system", which is designed to aid researchers. (For example, FutureHouse also recently launched a website called "Has Anyone", described as a "minimalist AI tool to search if anyone has ever researched a given topic.")

In more detail, the researchers "engineered a system called WikiCrow, which generates cited Wikipedia-style articles about human protein-coding genes by combining several PaperQA2 calls on topics such as the structure, function, interactions, and clinical significance of the gene." Each call contributes a section of the resulting article (somewhat similar to another recent system, see our review: "STORM: AI agents role-play as 'Wikipedia editors' and 'experts' to create Wikipedia-like articles"). The prompts include the instruction to "Write in the style of a Wikipedia article, with concise sentences and coherent paragraphs".
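
To make the assembly step concrete, here is a minimal sketch of that section-by-section pattern in Python. The helper `answer_with_paperqa2` is a hypothetical placeholder for a single PaperQA2 retrieval-augmented call and does not reproduce the released paper-qa API; only the section topics and the style instruction are taken from the paper.

```python
# Minimal sketch of the section-by-section assembly pattern described above.
# `answer_with_paperqa2` is a hypothetical placeholder for one PaperQA2
# retrieval-augmented call; the released paper-qa package exposes its own API,
# which is not reproduced here.

SECTION_TOPICS = ["structure", "function", "interactions", "clinical significance"]

STYLE_INSTRUCTION = (
    "Write in the style of a Wikipedia article, "
    "with concise sentences and coherent paragraphs."
)


def answer_with_paperqa2(question: str) -> str:
    """Placeholder: return a cited answer to `question` from the literature."""
    raise NotImplementedError


def write_gene_article(gene: str) -> str:
    sections = []
    for topic in SECTION_TOPICS:
        question = f"Describe the {topic} of the human gene {gene}. {STYLE_INSTRUCTION}"
        # Each call contributes one cited section of the final article.
        sections.append(f"== {topic.title()} ==\n" + answer_with_paperqa2(question))
    return f"= {gene} =\n\n" + "\n\n".join(sections)
```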

At an average cost of about $4.50 per article, the generated articles tended to be longer than their Wikipedia counterparts and of higher quality, at least according to the paper's evaluation method:

We used WikiCrow to generate 240 articles on genes that already have non-stub Wikipedia articles to have matched comparisons. WikiCrow articles averaged 1219.0 ± 275.0 words (mean ± SD, N = 240), longer than the corresponding Wikipedia articles (889.6 ± 715.3 words). The average article was generated in 491.5 ± 324.0 seconds, and had an average cost of $4.48 ± $1.02 per article (including costs for search and LLM APIs). We compared WikiCrow and Wikipedia on 375 statements sampled from the 240 paired articles. [...] The initial article sampling excluded any Wikipedia articles that were "stubs" or incomplete articles. Statements were then shuffled and given, blinded, to human experts, who graded statements according to whether they were (1) cited and supported; (2) missing a citation; or (3) cited and unsupported. We found that WikiCrow had significantly fewer "cited and unsupported" statements than the paired Wikipedia articles (13.5% vs. 24.9%) (p = 0.0075, χ2 (1), N = 375 for all tests in this section). WikiCrow failed to cite sources at a 3.9x lower rate than human written articles, as only 3.5% of WikiCrow statements were uncited, vs. 13.6% for Wikipedia (p < 0.001). In addition, defining precision for WikiCrow as the ratio of cited and supported statements over all cited statements, we found that WikiCrow displayed significantly higher precision than human-written articles (86.1% vs. 71.2%, p = 0.0013).
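
To illustrate how the three grading categories translate into the reported metrics, here is a small sketch. The counts are invented, chosen only to land near the quoted percentages; the paper's released grading data would supply the real numbers, and `scipy.stats.chi2_contingency` merely stands in for whatever test implementation the authors used.

```python
# Illustrative recomputation of the metrics defined above. The counts are
# invented for demonstration, chosen only to land near the quoted percentages;
# the paper's released grading data would supply the real numbers.
from scipy.stats import chi2_contingency


def metrics(cited_supported: int, uncited: int, cited_unsupported: int) -> dict:
    total = cited_supported + uncited + cited_unsupported
    cited = cited_supported + cited_unsupported
    return {
        "uncited_rate": uncited / total,
        "cited_unsupported_rate": cited_unsupported / total,
        # Precision as defined in the paper: supported / all cited statements.
        "precision": cited_supported / cited,
    }


wikicrow = metrics(cited_supported=155, uncited=7, cited_unsupported=25)
wikipedia = metrics(cited_supported=115, uncited=26, cited_unsupported=47)

# 2x2 chi-squared test: "cited and unsupported" vs. everything else.
table = [[25, 155 + 7], [47, 115 + 26]]
chi2, p, dof, expected = chi2_contingency(table)
print(wikicrow, wikipedia, round(p, 4))
```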

For judging whether a particular statement was "supported" by the cited references, the concrete question put to the graders (described as "expert researchers" in the paper) was:

"Is the information correct, as cited? In other words, is the information stated in the sentence correct according to the literature that it cites?"

In addition, among other more detailed instructions, the graders were advised to mark a statement correct as cited even if it was not directly supported by the source, as long as the statement consisted of "broad context" judged to be "undergraduate biology student common knowledge" (akin to an extreme interpretation of WP:BLUE).

The fact that these rating criteria appear to be more liberal than Wikipedia's own, combined with the well-known general reputation of LLMs for generating hallucinations, makes the "WikiCrow displayed significantly higher precision" result rather remarkable. The authors double-checked it by examining the data more closely:

The "cited and unsupported" evaluation category includes both inaccurate statements (e.g. true hallucinations or reasoning errors) and statements that are accurate with inappropriate citations. To investigate the nature of the errors in Wikipedia and WikiCrow further, we manually inspected all reported errors and attempted to classify the issues as follows: reasoning issues, i.e. the written information contradicts, over-extrapolates, or is unsupported by any included citations; attribution issues, i.e. the information is likely supported by another included source, but either the statement does not include the correct citation locally or the source is too broad (e.g. a database portal link); or trivial statements, which are true passages, but overly pedantic or unnecessary [...]. Surprisingly, we found that compared to Wikipedia, WikiCrow had significantly fewer reasoning errors (12 vs. 26, p = 0.0144, χ2 (1), N = 375) but a similar number of attribution errors (10 vs. 16, p = 0.21), suggesting that the improved factuality of WikiCrow over Wikipedia was largely due to improvements in reasoning.

Illustration (DALL-E, not from the paper): a human writer and a creature with the head and wings of a crow, each typing on their own laptop while experiencing mild hallucinations. Very scientifically accurate depictions of the hallucinations experienced by human editors (left) and WikiCrow (right).

The authors caution that this result about Wikipedians "hallucinating" more frequently than AI is specific to their "WikiCrow" system (and to the task of writing articles about genes), and should not be generalized to LLMs used on their own:

Although language models are clearly prone to reasoning errors (or hallucinations), in our task at least they appear to be less prone to such errors than Wikipedia authors or editors. This statement is specific to the agentic RAG setting presented here: language models like GPT-4 on their own, if asked to generate Wikipedia articles, would still be expected to hallucinate at high rates.

A previous, less capable version of the WikiCrow system had already been described in a December 2023 blog post, which discussed the motivation for focusing on the task of writing Wikipedia-like articles about genes in more detail. Rather than seeing it as an arbitrary benchmark demo for their LLM agent system (back then in its earlier version, PaperQA), the authors described it as being motivated by longstanding shortcomings of Wikipedia's gene coverage that are seriously hampering the work of researchers who have come to rely on Wikipedia:

If you've spent time in molecular biology, you have probably encountered the "alphabet soup" problem of genomics. Experiments in genomics uncover lists of genes implicated in a biological process, like MGAT5B and ADGRA3. Researchers turn to tools like Google, Uniprot or Wikipedia to learn more, as the knowledge of 20,000 human genes is too broad for any single human to understand. However, according to our count, only 3,639 of the 19,255 human protein-coding genes recognized by the HGNC have high-quality (non-stub) summaries on [English] Wikipedia; the other 15,616 lack pages or are incomplete stubs. Often, plenty is known about the gene, but no one has taken the time to write up a summary. This is part of a much broader problem today: scientific knowledge is hard to access, and often locked up in impenetrable technical reports. To find out about genes like MGAT5B and ADGRA3, you'd end up sinking hours into reading the primary literature.

[The 2023 version of] WikiCrow is a first step towards automated synthesis of human scientific knowledge. As a first demo, we used WikiCrow to generate drafts of Wikipedia-style articles for all 15,616 of the Human protein-coding genes that currently lack articles or have stubs, using information from full-text articles that we have access to through our academic affiliations. We estimate that this task would have taken an expert human ~60,000 hours total (6.8 working years). By contrast, WikiCrow wrote all 15,616 articles in a few days (about 8 minutes per article, with 50 instances running in parallel), drawing on 14,819,358 pages from 871,000 scientific papers that it identified as relevant in the literature.

These challenges of covering the large number of relevant genes are not news to Wikipedians working in this area. Back in 2011, several papers in a special issue of Nucleic Acids Research on databases had explored Wikipedia as a database for structured biological data, e.g. asking "how to get scientists en masse to edit articles" in this area, and presenting English Wikipedia's "Gene Wiki" taskforce (which is currently inactive). In a 2020 article in eLife, a group of 30 researchers and Wikidata contributors similarly "describe[d] the breadth and depth of the biomedical knowledge contained within Wikidata," including its coverage of genes in general ("Wikidata contains items for over 1.1 million genes and 940 thousand proteins from 201 unique taxa") and human genetic variants ("Wikidata currently contains 1502 items corresponding to human genetic variants, focused on those with a clear clinical or therapeutic relevance").[2] But it seems that at least from the point of view of the FutureHouse researchers, Wikidata's gene coverage is not a substitute for Wikipedia's, perhaps because it does not offer the same kind of factual coverage (see also the review of a related dissertation below).
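
As a rough illustration of the kind of coverage check such comparisons rest on, the sketch below counts human gene items on Wikidata via the public SPARQL endpoint. The item and property IDs (P31 "instance of", P703 "found in taxon", Q7187 "gene", Q15978631 "Homo sapiens") are given from memory and should be verified; the query is an example, not one used by either paper.

```python
# Example coverage check against Wikidata's public SPARQL endpoint.
# Item/property IDs are assumptions to verify: P31 = instance of,
# P703 = found in taxon, Q7187 = gene, Q15978631 = Homo sapiens.
import requests

QUERY = """
SELECT (COUNT(DISTINCT ?gene) AS ?genes) WHERE {
  ?gene wdt:P31 wd:Q7187 ;       # instance of: gene
        wdt:P703 wd:Q15978631 .  # found in taxon: Homo sapiens
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "gene-coverage-check/0.1 (example)"},
)
resp.raise_for_status()
count = resp.json()["results"]["bindings"][0]["genes"]["value"]
print(f"Human gene items on Wikidata: {count}")
```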

The current paper is not peer-reviewed, but conveys credibility by disclosing ample detail about the methodology for building and evaluating the PaperQA2 and WikiCrow systems (also in an accompanying technical blog post), and by releasing the underlying source code and data. The PaperQA2 system is available as an open-source software package. (This includes a "Setting to emulate the Wikipedia article writing used in our WikiCrow publication". However, the paper cautions that the released version does not include some additional tools that were used, and in particular does not provide "access to non-local full-text literature searches", which are "often bound by licensing agreements".) The generated articles are available online in rendered form and as Markdown source (see full list below, with links to their Wikipedia counterparts for comparison). The annotated expert ratings have been published as well.

The authors acknowledge "previous work on unconstrained document summarization, where the document must be found and then summarized, and even writing Wikipedia-style articles with RAG" (i.e. the aforementioned STORM project). But they highlight that

"These studies have not compared directly against Wikipedia with human evaluation. Instead, they used either LLMs to judge or [like STORM] compared ROGUE (text overlap) against ground-truth summaries. Here, we measure directly against human-generated Wikipedia with subject [matter] expert grading."

The "crow" moniker (already used in a predecessor project called "ChemCrow",[supp 1] an LLM agent working on chemistry tasks) is inspired by the fact that "Crows can talk – like a parrot – but their intelligence lies in tool use."

List of gene articles generated by WikiCrow
Gene name (wikilinked) WikiCrow article (rendered) WikiCrow article (source)
w:ABCC1 [1] [2]
w:ACKR1 [3] [4]
w:ADCYAP1 [5] [6]
w:ADGRG4 [7] [8]
w:AGK [9] [10]
w:ALOX5 [11] [12]
w:ANGPT1 [13] [14]
w:ANLN [15] [16]
w:ANXA6 [17] [18]
w:AP1G1 [19] [20]
w:APOC3 [21] [22]
w:APRT [23] [24]
w:ATF1 [25] [26]
w:ATF2 [27] [28]
w:ATG16L1 [29] [30]
w:ATOX1 [31] [32]
w:ATPAF2 [33] [34]
w:AURKA [35] [36]
w:B3GAT1 [37] [38]
w:BAIAP2 [39] [40]
w:BMPR2 [41] [42]
w:BPGM [43] [44]
w:BPIFA2 [45] [46]
w:BPIFB4 [47] [48]
w:BRAF [49] [50]
w:BRIP1 [51] [52]
w:BSG [53] [54]
w:C5AR1 [55] [56]
w:CAD [57] [58]
w:CASP8 [59] [60]
w:CCDC188 [61] [62]
w:CCDC74A [63] [64]
w:CCDC78 [65] [66]
w:CCDC82 [67] [68]
w:CCL20 [69] [70]
w:CCNH [71] [72]
w:CD1D [73] [74]
w:CD36 [75] [76]
w:CD3E [77] [78]
w:CD4 [79] [80]
w:CD52 [81] [82]
w:CD80 [83] [84]
w:CDK7 [85] [86]
w:CDKN2A [87] [88]
w:CEL [89] [90]
w:CENPJ [91] [92]
w:CEP290 [93] [94]
w:CFAP299 [95] [96]
w:CHD2 [97] [98]
w:CHRNA7 [99] [100]
w:CKAP4 [101] [102]
w:CKM [103] [104]
w:CLPP [105] [106]
w:CPE [107] [108]
w:CREM [109] [110]
w:CRH [111] [112]
w:CSF2RB [113] [114]
w:CTSB [115] [116]
w:CXCR4 [117] [118]
w:CYP2B6 [119] [120]
w:DCLRE1C [121] [122]
w:DFFB [123] [124]
w:DIRAS3 [125] [126]
w:DPPA3 [127] [128]
w:DYSF [129] [130]
w:EEF1A1 [131] [132]
w:EGLN1 [133] [134]
w:ELK1 [135] [136]
w:ELN [137] [138]
w:EPO [139] [140]
w:ETS1 [141] [142]
w:ETV6 [143] [144]
w:EWSR1 [145] [146]
w:FABP7 [147] [148]
w:FAM120AOS [149] [150]
w:FAM193A [151] [152]
w:FAM83H [153] [154]
w:FAM98A [155] [156]
w:FBL [157] [158]
w:FBXO2 [159] [160]
w:FBXW10 [161] [162]
w:FGF9 [163] [164]
w:FMR1 [165] [166]
w:GAK [167] [168]
w:GFM1 [169] [170]
w:GLO1 [171] [172]
w:GNMT [173] [174]
w:GOLGA8H [175] [176]
w:GPER1 [177] [178]
w:GRIA2 [179] [180]
w:GRM2 [181] [182]
w:GSC [183] [184]
w:HBA1 [185] [186]
w:HBA2 [187] [188]
w:HCK [189] [190]
w:HDAC4 [191] [192]
w:HKDC1 [193] [194]
w:HNF1A [195] [196]
w:HP1BP3 [197] [198]
w:HTR5A [199] [200]
w:IDO1 [201] [202]
w:IFITM1 [203] [204]
w:IKZF2 [205] [206]
w:IL27 [207] [208]
w:IL4R [209] [210]
w:JAK2 [211] [212]
w:JAML [213] [214]
w:KCNE1 [215] [216]
w:KCNJ5 [217] [218]
w:KCNK2 [219] [220]
w:KHDRBS1 [221] [222]
w:KLKB1 [223] [224]
w:KLRK1 [225] [226]
w:KNG1 [227] [228]
w:LEF1 [229] [230]
w:LHX1 [231] [232]
w:MAP3K7 [233] [234]
w:MAP6 [235] [236]
w:MARCHF5 [237] [238]
w:MCL1 [239] [240]
w:MCM4 [241] [242]
w:MEF2C [243] [244]
w:MICOS13 [245] [246]
w:MINDY4 [247] [248]
w:MITF [249] [250]
w:MROH9 [251] [252]
w:MSH2 [253] [254]
w:MT-ATP6 [255] [256]
w:MT-CYB [257] [258]
w:MT-ND3 [259] [260]
w:MTA1 [261] [262]
w:MYL3 [263] [264]
w:MYO10 [265] [266]
w:MYOM1 [267] [268]
w:MYOM2 [269] [270]
w:NDUFA1 [271] [272]
w:NDUFAF3 [273] [274]
w:NEDD8 [275] [276]
w:NEFL [277] [278]
w:NFKBIA [279] [280]
w:NFKBID [281] [282]
w:NGLY1 [283] [284]
w:NPM1 [285] [286]
w:NPPB [287] [288]
w:NPY [289] [290]
w:NR1H4 [291] [292]
w:NSFL1C [293] [294]
w:NTRK3 [295] [296]
w:NUP214 [297] [298]
w:OGA [299] [300]
w:OPN4 [301] [302]
w:OPTN [303] [304]
w:ORC2 [305] [306]
w:OSBP [307] [308]
w:OXGR1 [309] [310]
w:P2RY12 [311] [312]
w:PCK2 [313] [314]
w:PCNT [315] [316]
w:PER2 [317] [318]
w:PIP [319] [320]
w:PITX2 [321] [322]
w:PLA2G4F [323] [324]
w:PLAU [325] [326]
w:PLIN1 [327] [328]
w:PLIN2 [329] [330]
w:PLIN5 [331] [332]
w:POLH [333] [334]
w:PPIB [335] [336]
w:PPOX [337] [338]
w:PPP2R1A [339] [340]
w:PPP2R2A [341] [342]
w:PRC1 [343] [344]
w:PRDM12 [345] [346]
w:PRKACA [347] [348]
w:PRR30 [349] [350]
w:PSMB1 [351] [352]
w:PSMB6 [353] [354]
w:PSMB9 [355] [356]
w:PSMD2 [357] [358]
w:PTGER4 [359] [360]
w:PVALB [361] [362]
w:RAD50 [363] [364]
w:RAD54L [365] [366]
w:RALB [367] [368]
w:RASEF [369] [370]
w:RHD [371] [372]
w:RPAP2 [373] [374]
w:RPGR [375] [376]
w:RRM2B [377] [378]
w:S100A2 [379] [380]
w:S100A9 [381] [382]
w:SALL4 [383] [384]
w:SATB1 [385] [386]
w:SBK3 [387] [388]
w:SEPTIN4 [389] [390]
w:SERPINA12 [391] [392]
w:SERPINC1 [393] [394]
w:SH2B1 [395] [396]
w:SIAH2 [397] [398]
w:SIGLEC8 [399] [400]
w:SLC22A6 [401] [402]
w:SLC6A4 [403] [404]
w:SOCS3 [405] [406]
w:SPG7 [407] [408]
w:SPTBN1 [409] [410]
w:STMN1 [411] [412]
w:STS [413] [414]
w:TAS1R1 [415] [416]
w:TBC1D30 [417] [418]
w:TCAP [419] [420]
w:TEDC2 [421] [422]
w:TERF2 [423] [424]
w:TFAP2A [425] [426]
w:THBS1 [427] [428]
w:TICAM1 [429] [430]
w:TLE1 [431] [432]
w:TMEM222 [433] [434]
w:TMEM239 [435] [436]
w:TMEM249 [437] [438]
w:TMEM50A [439] [440]
w:TMEM69 [441] [442]
w:TNFRSF11A [443] [444]
w:TP53BP1 [445] [446]
w:TRH [447] [448]
w:TSEN34 [449] [450]
w:TWIST1 [451] [452]
w:U2AF1 [453] [454]
w:UBC [455] [456]
w:UQCRC2 [457] [458]
w:VEZT [459] [460]
w:VIPR1 [461] [462]
w:WAS [463] [464]
w:XIAP [465] [466]
w:XRCC6 [467] [468]
w:ZNF804A [469] [470]

Notes:

  • The second column (the list of rendered articles) was obtained from the search box dropdown list at https://wikicrow.ai/ . The other two columns were derived from it.
  • Despite the paper's statement that these are "240 articles on genes that already have non-stub Wikipedia articles", the dropdown list appears to contain only 235, some of which don't seem to have an equivalent English Wikipedia article. (See also List of human protein-coding genes 1 etc.)

Using Wikipedia's categories and list pages to build a knowledge graph separate from Wikidata


From the abstract of a dissertation titled "Exploiting semi-structured information in Wikipedia for knowledge graph construction":[3]

"[...] we address three main challenges in the field of automated knowledge graph construction using semi-structured data in Wikipedia as a data source. To create an ontology with expressive and fine-grained types, we present an approach that extracts a large-scale general-purpose taxonomy from categories and list pages in Wikipedia. We enhance the taxonomy's classes with axioms explicating their semantics. To increase the coverage of long-tail entities in knowledge graphs, we describe a pipeline of approaches that identify entity mentions in Wikipedia listings, integrate them into an existing knowledge graph, and enrich them with additional facts derived from the extraction context. As a result of applying the above approaches to semi-structured data in Wikipedia, we present the knowledge graph CaLiGraph. The graph describes more than 13 million entities with an ontology containing almost 1.3 million classes. To judge the value of CaLiGraph for practical tasks, we introduce a framework that compares knowledge graphs based on their performance on downstream tasks. We find CaLiGraph to be a valuable addition to the field of publicly available general-purpose knowledge graphs."

Why would one want to use Wikipedia as a source of structured data and build a new knowledge graph when Wikidata already exists? First, the thesis argues that Wikidata — even though it has surpassed other public knowledge graphs in the number of entities — is still very incomplete, especially when it comes to information about long-tail topics:

"The trend of entities added to publicly available KGs in recent years indicates they are far from complete. The number of entities in Wikidata [195], for example, grew by 26% in the time from October 2020 (85M) to October 2023 (107M) [206]. Wikidata describes the largest number of entities and comprises – in terms of entities – other public KGs to a large extent [66]. Consequently, this challenge of incompleteness applies to all public KGs, particularly when it comes to less popular entities [44]. [...]

On the other hand, an automated process for extracting structured information from Wikipedia may not yet be reliable enough to import the result directly without manual review:

While the performance of Open Information Extraction (OIE) systems (i.e., systems that extract information from general web text) has improved in recent years [159, 97, 112], the quality of extracted information has not yet reached a level where integration into public KGs like Wikidata or DBpedia [104] should be done without further filtering. [...]
[...] first "picking low-hanging fruit" by focusing on premium sources like Wikipedia to build a high-quality KG is crucial as it can serve as a solid foundation for approaches that target more challenging data sources. The extracted information may then be used as an additional anchor to make sense of less structured data.

Chapter 3 ("Knowledge Graphs on the Web") contains detailed comparisons of Wikidata with other public knowledge graphs, with observations including the following:

The main focus of DBpedia is on persons (and their careers), as well as places, works, and species. Wikidata also strongly focuses on works (mainly due to the import of entire bibliographic datasets), while Cyc, BabelNet and NELL show a more diverse distribution. [...]

[...] Wikidata has the largest number of instances and the largest detail level in most classes. However, there are differences from class to class. While Wikidata contains a large number of works, YAGO is a good source of events. NELL often has fewer instances, but a larger level of detail, which can be explained by its focus on more prominent instances.
Wikidata contains about twice as many persons as DBpedia and YAGO [..., which] contain almost no persons which are not contained in Wikidata. In conclusion, combining Wikidata with DBpedia or YAGO for better coverage of the Person class would not be beneficial.

(See also an earlier paper co-authored by the dissertation's author, "Knowledge Graphs on the Web – an Overview".)

Briefly


Other recent publications


Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.

"Refining Wikidata Taxonomy using Large Language Models"

A Wikidata taxonomy (from "city or town" to "entity") before and after refinement

From the abstract:[4]

"Wikidata is known to have a complex taxonomy, with recurrent issues like the ambiguity between instances and classes, the inaccuracy of some taxonomic paths, the presence of cycles, and the high level of redundancy across classes. Manual efforts to clean up this taxonomy are time-consuming and prone to errors or subjective decisions. We present WiKC, a new version of Wikidata taxonomy cleaned automatically using a combination of Large Language Models (LLMs) and graph mining techniques."

From the "Evaluation" section:

"As expected, WiKC is much simpler and much more concise than Wikidata taxonomy. Compared to WiKC, Wikidata taxonomy has a factor higher than 200 in the number of classes, and a factor higher than 10 in the average number of paths from an instance to the root class entity (Q35120)."
"WiKC consistently outperforms Wikidata across all depth ranges. WiKC shows significant accuracy gains at deeper levels (depth 10 or more), suggesting that WiKC has resolved many inconsistency issues in the lower levels of the Wikidata taxonomy."

"Psychiq and Wwwyzzerdd: Wikidata completion using Wikipedia"

Video demonstrating the Wwwyzzerdd browser extension

From the abstract:[5]

"Hundreds of thousands of articles on English Wikipedia have zero or limited meaningful structure on Wikidata. Much work has been done in the literature to partially or fully automate the process of completing knowledge graphs, but little of it has been practically applied to Wikidata. This paper presents two interconnected practical approaches to speeding up the Wikidata completion task. The first is Wwwyzzerdd, a browser extension that allows users to quickly import statements from Wikipedia to Wikidata. Wwwyzzerdd has been used to make over 100 thousand edits to Wikidata. The second is Psychiq, a new model for predicting instance and subclass statements based on English Wikipedia articles. [...] One initial use is integrating the Psychiq model into the Wwwyzzerdd browser extension."

"Bridging Background Knowledge Gaps in Translation with Automatic Explicitation"


From the paper:[6]

"Translations help people understand content written in another language. However, even correct literal translations do not fulfill that goal when people lack the necessary background to understand them. Professional translators incorporate explicitations to explain the missing context by considering cultural differences between source and target audiences. [...] For example, the name “Dominique de Villepin” may be well known in French community while totally unknown to English speakers in which case the translator may detect this gap of background knowledge between two sides and translate it as “the former French Prime Minister Dominique de Villepin” instead of just “Dominique de Villepin”. [...]
This work introduces techniques for automatically generating explicitations, motivated by WIKIEXPL, a dataset that we collect from Wikipedia and annotate with human translators. [...]

Our generation is grounded in Wikidata and Wikipedia—rather than free-form text generation—to prevent hallucinations and to control length or the type of explanation. For SHORT explicitations, we fetch a word from instance of or country of from Wikidata [...]. For MID, we fetch a description of the entity from Wikidata [...]. For LONG type, we fetch three sentences from the first paragraph of Wikipedia."
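
The sketch below shows how the MID and LONG lookups described in the quote could be approximated with the public Wikidata and Wikipedia APIs. The endpoints are the standard public ones, but the code is an illustration under those assumptions, not the authors' implementation.

```python
# Approximation of the MID and LONG lookups described in the quote, using the
# public Wikidata and Wikipedia APIs; not the authors' implementation. (A SHORT
# explicitation would instead read the P31 "instance of" claim, or a
# country-related property, from the same Wikidata payload.)
import requests


def mid_explicitation(qid: str, lang: str = "en") -> str:
    """Entity description from Wikidata for the given item ID."""
    data = requests.get(
        f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    ).json()
    return data["entities"][qid]["descriptions"][lang]["value"]


def long_explicitation(title: str, n_sentences: int = 3) -> str:
    """First few sentences of the English Wikipedia lead, via the REST summary API."""
    summary = requests.get(
        f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    ).json()
    # Crude sentence split; a real pipeline would use a proper segmenter.
    return ". ".join(summary["extract"].split(". ")[:n_sentences])


print(long_explicitation("Dominique_de_Villepin"))
```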


"Relevant Entity Selection: Knowledge Graph Bootstrapping [from Wikidata] via Zero-Shot Analogical Pruning"


From the abstract:[7]

"Knowledge Graph Construction (KGC) can be seen as an iterative process starting from a high quality nucleus that is refined by knowledge extraction approaches in a virtuous loop. Such a nucleus can be obtained from knowledge existing in an open KG like Wikidata. However, due to the size of such generic KGs, integrating them as a whole may entail irrelevant content and scalability issues. We propose an analogy-based approach that starts from seed entities of interest in a generic KG, and keeps or prunes their neighboring entities. We evaluate our approach on Wikidata through two manually labeled datasets that contain either domain-homogeneous or -heterogeneous seed entities."


"Assembling Hyperpop: Genre Formation on Wikipedia"


From the abstract:[8]

"By analyzing the edit history of Wikipedia’s ‘hyperpop’ page, we locate ongoing debates, controversies, and contestations that point to shaping forces around online genre formation. These potentially have a huge impact on how hyperpop is understood both inside and outside of the music community. In locating the most active editors of the hyperpop Wikipedia page and scrutinizing their edit histories as well as the discussions on the hyperpop page itself, we uncovered debates about artistic notability, biases toward specific sources, and attempts at associating or dissociating musical genre from non-musical identities (such as race, gender, and nationality)."

"After all, who invented the airplane? Multilingualism and grassroots knowledge production on Wikipedia"


From the abstract:[9]

"Paradoxically, in each language [English/French/Portuguese Wikipedia], the airplane has a different inventor. Through online ethnography, this article explores the multilingual landscape of Wikipedia, looking not only at languages, but also at language varieties, and unpacking the intricate connections between language, country, and nationality in grassroots knowledge production online."

"Excerpt on first powered flights in the (Portuguese Wikipedia's) Avião article" (figure from the paper)

References

  1. Skarlinski, Michael D.; Cox, Sam; Laurent, Jon M.; Braza, James D.; Hinks, Michaela; Hammerling, Michael J.; Ponnapati, Manvitha; Rodriques, Samuel G.; White, Andrew D. (2024). Language Agents Achieve Superhuman Synthesis of Scientific Knowledge. San Francisco, CA: FutureHouse.  / Code / Data (generated articles)
  2. Waagmeester, Andra; Stupp, Gregory; Burgstaller-Muehlbacher, Sebastian; Good, Benjamin M; Griffith, Malachi; Griffith, Obi L; Hanspers, Kristina; Hermjakob, Henning; Hudson, Toby S; Hybiske, Kevin; Keating, Sarah M; Manske, Magnus; Mayers, Michael; Mietchen, Daniel; Mitraka, Elvira; Pico, Alexander R; Putman, Timothy; Riutta, Anders; Queralt-Rosinach, Nuria; Schriml, Lynn M; Shafee, Thomas; Slenter, Denise; Stephan, Ralf; Thornton, Katherine; Tsueng, Ginger; Tu, Roger; Ul-Hasan, Sabah; Willighagen, Egon; Wu, Chunlei; Su, Andrew I (2020-03-17). Peter Rodgers, Chris Mungall (eds.). "Wikidata as a knowledge graph for the life sciences". eLife 9: –52614. ISSN 2050-084X. PMC 7077981. PMID 32180547. doi:10.7554/eLife.52614. 
  3. Heist, Nicolas (2024). Exploiting semi-structured information in Wikipedia for knowledge graph construction (Thesis). Universität Mannheim.  (dissertation)
  4. Peng, Yiwen; Bonald, Thomas; Alam, Mehwish (October 2024). "Refining Wikidata Taxonomy using Large Language Models". ACM International Conference on Information and Knowledge Management. Boise, Idaho, United States. doi:10.1145/3627673.3679156 (inactive 2024-09-29).  Code/data
  5. Erenrich, Daniel (2023-01-01). "Psychiq and Wwwyzzerdd: Wikidata completion using Wikipedia". Semantic Web. Preprint (Preprint): 1–14. ISSN 1570-0844. doi:10.3233/SW-233450. 
  6. Han, HyoJung; Boyd-Graber, Jordan Lee; Carpuat, Marine (2023-12-03), Bridging Background Knowledge Gaps in Translation with Automatic Explicitation, arXiv:2312.01308  / dataset
  7. Jarnac, Lucas; Couceiro, Miguel; Monnin, Pierre (2023-10-21). "Relevant Entity Selection: Knowledge Graph Bootstrapping via Zero-Shot Analogical Pruning". Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. CIKM '23. New York, NY, USA: Association for Computing Machinery. pp. 934–944. ISBN 9798400701245. doi:10.1145/3583780.3615030.  Closed access, preprint version: Jarnac, Lucas; Couceiro, Miguel; Monnin, Pierre (2023-10-21). "Relevant Entity Selection: Knowledge Graph Bootstrapping via Zero-Shot Analogical Pruning". Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. pp. 934–944. arXiv:2306.16296. doi:10.1145/3583780.3615030. 
  8. Bates, Eliot; Delphis, Sophie; Moraes, Romulo; Santoli, Julia (2024-09-08). "Assembling Hyperpop: Genre Formation on Wikipedia". Cultural Sociology: 17499755241264905. ISSN 1749-9755. doi:10.1177/17499755241264905.  Closed access
  9. Fians, Guilherme (2024-11-01). "After all, who invented the airplane? Multilingualism and grassroots knowledge production on Wikipedia". Language & Communication 99: 39–51. ISSN 0271-5309. doi:10.1016/j.langcom.2024.08.001. 
Supplementary references:
  1. M. Bran, Andres; Cox, Sam; Schilter, Oliver; Baldassari, Carlo; White, Andrew D.; Schwaller, Philippe (May 2024). "Augmenting large language models with chemistry tools". Nature Machine Intelligence 6 (5): 525–535. ISSN 2522-5839. doi:10.1038/s42256-024-00832-8. 

