Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections

dc.contributor.author: Jiang, Ming
dc.contributor.author: D'Souza, Jennifer
dc.contributor.author: Auer, Sören
dc.contributor.author: Downie, J. Stephen
dc.contributor.department: Human-Centered Computing, School of Informatics and Computing
dc.date.accessioned: 2024-01-05T21:44:29Z
dc.date.available: 2024-01-05T21:44:29Z
dc.date.issued: 2021-11-02
dc.description.abstract: The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications in terms of machine scanning and optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performances. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and, (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time outperforms the one simultaneously identifying multiple relations in general. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
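The abstract describes constructing OCR-noisy parallel corpora from clean texts. A minimal sketch of that idea is character-level perturbation with an OCR confusion table; the confusion pairs and error rate below are hypothetical illustrations, not the paper's actual noise-simulation pipeline.

```python
import random

# Hypothetical character-confusion table mimicking common OCR misreads
# (e.g. 'l' <-> '1', 'o' <-> '0'); the paper's actual noise model may differ.
OCR_CONFUSIONS = {"l": "1", "o": "0", "e": "c", "i": "l", "m": "rn", "s": "5"}

def add_ocr_noise(text: str, error_rate: float = 0.1, seed: int = 0) -> str:
    """Replace a fraction of confusable characters with plausible OCR misreads."""
    rng = random.Random(seed)  # seeded for reproducible parallel corpora
    out = []
    for ch in text:
        low = ch.lower()
        if low in OCR_CONFUSIONS and rng.random() < error_rate:
            sub = OCR_CONFUSIONS[low]
            out.append(sub.upper() if ch.isupper() else sub)
        else:
            out.append(ch)
    return "".join(out)

# Example: derive a noisy variant of a clean sentence
clean = "semantic relation classification"
noisy = add_ocr_noise(clean, error_rate=0.3, seed=42)
```

Applying such a function at varying error rates to each clean corpus yields the clean/noisy parallel pairs on which classifier robustness can then be compared.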
dc.eprint.version: Final published version
dc.identifier.citation: Jiang, M., D’Souza, J., Auer, S., & Downie, J. S. (2022). Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. International Journal on Digital Libraries, 23(2), 197–215. https://doi.org/10.1007/s00799-021-00313-y
dc.identifier.uri: https://hdl.handle.net/1805/37678
dc.language.iso: en_US
dc.publisher: Springer
dc.relation.isversionof: 10.1007/s00799-021-00313-y
dc.relation.journal: International Journal on Digital Libraries
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.source: Publisher
dc.subject: Digital library
dc.subject: Information extraction
dc.subject: Scholarly text mining
dc.subject: Semantic relation classification
dc.subject: Knowledge graphs
dc.subject: Neural machine learning
dc.title: Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections
dc.type: Article
Files
Original bundle
Name: Jian2022Evaluating-CCBY.pdf
Size: 1.38 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.99 KB
Description: Item-specific license agreed upon to submission