This paper proposes a sustainable methodology for documenting the small (and underfunded) but often important university heritage collections. The sequence proposed by the DBLC (Database Life Cycle) (Coronel and Morris, Database Systems: Design, Implementation, & Management. Cengage Learning, Boston, 2018; Oppel, Databases: A Beginner's Guide. McGraw-Hill, New York, 2009) is followed, focusing on the database design phase. The resulting proposals aim to harmonise the different documentation tools developed by GLAM institutions (an acronym highlighting the common aspects of Galleries, Libraries, Archives and Museums), all of which are present in the university environment. The work phases are based mainly on the work of Valle, Fernández Cacho, and Arenillas (Muñoz Cruz et al., Introducción a la documentación del patrimonio cultural. Consejería de Cultura de la Junta de Andalucía, Seville, 2017), combined with the experience acquired in creating the virtual museum at our institution. We recommend creating a working team that includes university staff members because we believe universities have sufficient capacity to manage their own heritage. For documentation, we recommend application profiles that take into account current trends in the semantic web and LOD (Linked Open Data) and that are created using structural interchange standards such as Dublin Core, LIDO, or Darwin Core, combined with content and value standards adapted from the GLAM area. Applying the methodology described above makes it possible to obtain quality metadata in a sustainable way given the limited resources of university collections. A proposed metadata schema is provided as an annex.
The two volumes of this Special Issue explore the intersections of digital libraries, epigraphy and paleography. Digital libraries research, practices and infrastructures have transformed the study of ancient inscriptions by providing organizing principles for collections building, defining interoperability requirements and developing innovative user tools and services. Yet linking collections and their contents to support advanced scholarly work in epigraphy and paleography tests the limits of current digital libraries applications. This is due, in part, to the magnitude and heterogeneity of works created over a time period of more than five millennia. The remarkable diversity ranges from the types of artifacts to the methods used in their production to the singularity of individual marks contained within them. Conversion of analogue collections to digital repositories is well underway—but most often not in a way that meets the basic requirements needed to support scholarly workflows. This is beginning to change as collections and content are being described more fully with rich annotations and metadata conforming to established standards. New uses of imaging technologies and computational approaches are remediating damaged works and revealing text that has, over time, become illegible or hidden. Transcription of handwritten text to machine-readable form is still primarily a manual process, but research into automated transcription is moving forward. Progress in digital libraries research and practices coupled with collections development of ancient written works suggests that epigraphy and paleography will gain new prominence in the Academy.
We describe ongoing development for The Homer Multitext focusing on the interlocking challenges of automated analysis of diplomatic manuscript transcriptions. With the goal of lexical and morphological analysis of prose and poetry texts, and metrical analysis of poetic texts (and quotations thereof), we face the challenge of working generically across languages and across multiple possible orthographies in each language. In the case of Greek, our working dataset includes Greek following the conventions of Attica before 404 BCE, the conventions of “standard” literary polytonic Greek, and the particular conventions found in Byzantine codex manuscripts of Greek epic poetry with accompanying commentary. The latest work involves re-implementing existing CITE Architecture libraries in the Julia language, with documentation in the form of runnable code notebooks using the Pluto.jl framework. The Homer Multitext has been a work in progress for two decades. Because of the project’s emphasis on simple data formats (plain text, very simple XML, tabular lists), our data remain valid even as we gain understanding of the challenges posed by our source material, particularly the 10th- and 11th-century manuscripts of Greek epic poetry with accompanying ancient commentary that, within themselves, represent over a thousand years of linguistic evolution. The work outlined here represents the latest shift in our development tools, a flexibility likewise made possible by the separation of concerns that has been a central value in the project.
Historians and researchers rely on web archives to preserve social media content that no longer exists on the live web. However, what we see on the live web and how it is replayed in the archive are not always the same. In this study, we document and analyze the problems in archiving Twitter after Twitter switched to a new user interface (UI) in June 2020. Most web archives could not archive the new UI, resulting in archived Twitter pages displaying Twitter’s “Something went wrong” error. The challenges in archiving the new UI forced web archives to continue using the old UI. However, features such as Twitter labels were part of the new UI; hence, web archives capturing Twitter’s old UI would miss these labels. To analyze the potential loss of information in web archival data due to this change, we used the personal Twitter account of the 45th President of the USA, @realDonaldTrump, which was suspended by Twitter on January 8, 2021. Trump’s account was heavily labeled by Twitter for spreading misinformation; however, we discovered that there is no evidence in web archives to prove that some of his tweets ever had a label assigned to them. We also studied the possibility of temporal violations in archived versions of the new UI, which may result in the replay of pages that never existed on the live web. We also discovered that when some tweets with embedded media are replayed, portions of the rewritten t.co URL, meant to be hidden from the end-user, are partially exposed in the replayed page. Our goal is to educate researchers who may use web archives and caution them when drawing conclusions based on archived Twitter pages.
When searching within an academic digital library, a variety of information seeking strategies may be employed. The purpose of this study is to determine whether graduate students choose information seeking strategies appropriate to the complexity of a given search scenario and to explore other factors that could influence their decisions. We used a survey method in which participants (\(n=176\)) were asked to recall their most recent academic digital library search session matching two given scenarios (randomly chosen from four alternatives) and, for each scenario, to identify whether they employed search strategies associated with four different information seeking models. Among the search strategies, only lookup search was used in a manner consistent with the complexity of the search scenario. Other factors that influenced the choice of strategy were the discipline of study and the type of academic search training received. Patterns of search tool use with respect to the complexity of the search scenarios were also identified. These findings highlight that it is not only important to train graduate students in conducting academic digital library searches; more work is also needed to train them to match information seeking strategies to the complexity of their search tasks and to develop interfaces that guide their search process.
The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Recent work has tried to address this problem by developing methods for automated summarization in the scholarly domain, but has concentrated so far only on monolingual settings, primarily English. In this paper, we consequently explore how state-of-the-art neural abstractive summarization models based on a multilingual encoder–decoder architecture can be used to enable cross-lingual extreme summaries of scholarly texts. To this end, we compile a new abstractive cross-lingual summarization dataset for the scholarly domain in four different languages, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage pipeline approach that independently summarizes and translates, as well as a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios. Finally, we investigate how to make our approach more efficient on the basis of knowledge distillation methods, which make it possible to shrink the size of our models, so as to reduce the computational cost of summarization inference.
Digital Library Systems are widely used in the Higher Education sector, through the use of Institutional Repositories (IRs), to collect, store, manage and make available scholarly research output produced by Higher Education Institutions (HEIs). This wide application of IRs is a direct response to the increase in scholarly research output produced. In order to facilitate discoverability of digital content in IRs, accurate, consistent and comprehensive association of descriptive metadata with digital objects during ingestion into IRs is crucial. However, due to human errors resulting from complex IR ingestion workflows, most digital content in IRs has incorrect and inconsistent descriptive metadata. While there exists a broad spectrum of descriptive metadata elements, subject headings present a classic example of a crucial metadata element that adversely affects discoverability of digital content when incorrectly and inconsistently specified. This paper outlines a case study conducted at an HEI—The University of Zambia—in order to demonstrate the effectiveness of integrating controlled subject vocabularies during the ingestion of digital objects into IRs. A situational analysis was conducted to understand how subject headings are associated with digital objects and to analyse subject headings associated with already ingested digital objects. In addition, an exploratory study was conducted to determine domain-specific subject headings to be integrated with the IR. Furthermore, a usability study was conducted in order to comparatively determine the usefulness of using controlled vocabularies during the ingestion of digital objects into IRs. Finally, multi-label classification experiments were carried out in which digital objects were assigned more than one class.
The results of the study revealed that a noticeable proportion of digital content is associated with incorrect subject categories and, additionally, with few subject headings: 71.2% of digital objects have two or fewer subject headings, and a significant number of subject headings (92.1%) are associated with only a single publication. A comparative study suggests that IRs integrated with controlled vocabularies are perceived as more usable (SUS score = 68.9) than IRs without controlled vocabularies (SUS score = 66.2). Furthermore, the effectiveness of the multi-label arXiv subject classifier demonstrates the viability of integrating automated techniques for subject classification.
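The usability figures above come from the System Usability Scale (SUS). As a minimal sketch of how such scores are derived (standard SUS scoring, not the study's instrument or data), each respondent answers ten 1–5 Likert items, with odd items positively worded and even items negatively worded:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution: response - 1);
    even-numbered items are negatively worded (contribution: 5 - response).
    The summed contributions (0-40) are scaled to the 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5


def mean_sus(all_responses):
    """Average SUS score over a group of respondents."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)
```

A reported score such as 68.9 is the mean over all respondents; a respondent who answers 5 to every positive item and 1 to every negative item scores 100.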
In the past two decades, digital libraries (DL) have increasingly supported computational studies of digitized books (Jett et al., The HathiTrust Research Center Extracted Features Dataset (2.0), 2020; Underwood, Distant Horizons: Digital Evidence and Literary Change, University of Chicago Press, Chicago, 2019; Organisciak et al. J Assoc Inf Sci Technol 73:317–332, 2022; Michel et al. Science 331:176–182, 2011). Nonetheless, there remains a dearth of DL data provisions or infrastructures for research on book reception, and user-generated book reviews have opened up unprecedented research opportunities in this area. However, insufficient attention has been paid to real-world complexities and limitations of using these datasets in scholarly research, which may cause analytical oversights (Crawford and Finn, Geo J 80:491–502, 2015), methodological pitfalls (Olteanu et al. Front Big Data 2:13, 2019), and ethical concerns (Hu et al., Research with User-Generated Book Review Data: Legal and Ethical Pitfalls and Contextualized Mitigations, Springer, Berlin, 2023; Diesner and Chin, Gratis, Libre, or Something Else? Regulations and Misassumptions Related to Working with Publicly Available Text Data, 2016). In this paper, we present three case studies that contextually and empirically investigate book reviews for their temporal, cultural, and socio-participatory complexities: (1) a longitudinal analysis of a ranked book list across ten years and over one month; (2) a text classification of 20,000 sponsored and 20,000 non-sponsored book reviews; and (3) a comparative analysis of 537 book ratings from Anglophone and non-Anglophone readerships. Our work reflects on both (1) data curation challenges that researchers may encounter (e.g., platform providers’ lack of bibliographic control) when studying book reviews and (2) mitigations that researchers might adopt to address these challenges (e.g., how to align data from various platforms).
Taken together, our findings illustrate some of the sociotechnical complexities of working with user-generated book reviews by revealing the transiency, power dynamics, and cultural dependency in these datasets. This paper explores some of the limitations and challenges of using user-generated book reviews for scholarship and calls for critical and contextualized usage of user-generated book reviews in future scholarly research.
For populating Scientific Knowledge Graphs (SciKGs), research publications are a central information source. However, typical forms of research publications like traditional papers do not provide means of integrating contributions into SciKGs. Furthermore, they do not support making direct use of the rich information SciKGs provide. To tackle this, the present paper proposes RDFtex, a framework enabling (1) the import of contributions represented in SciKGs to facilitate the preparation of LaTeX-based research publications and (2) the export of original contributions from papers to facilitate their integration into SciKGs. The framework’s functionality is demonstrated using the present paper itself since it was prepared with our proof-of-concept implementation of RDFtex. The runtime of the implementation’s preprocessor was evaluated based on three projects with different numbers of imports and exports. A small user study (\(N=10\)) was conducted to obtain initial user feedback. The concept and the process of preparing a LaTeX-based research publication using RDFtex are discussed thoroughly. RDFtex’s import functionality takes considerably more time than its export functionality. Nevertheless, the entire preprocessing takes only a fraction of the time required to compile the PDF. The users were able to solve all predefined tasks but preferred the import functionality over the export functionality because of its general simplicity. RDFtex is a promising approach to facilitate the move toward knowledge graph augmented research since it only introduces minor differences compared to the preparation of traditional LaTeX-based publications while narrowing the gap between papers and SciKGs.
Our civilization creates enormous volumes of digital data, a substantial fraction of which is preserved and made publicly available for present and future usage. Additionally, historical born-analog records are progressively being digitized and incorporated into digital document repositories. While professionals often have a clear idea of what they are looking for in document archives, average users are likely to have no precise search needs when accessing available archives (e.g., through their online interfaces). Thus, if the results are to be relevant and appealing to average users, they should include engaging and recognizable material. However, state-of-the-art document archival retrieval systems essentially use the same approaches as search engines for synchronic document collections. In this article, we develop unique ranking criteria for assessing the usefulness of archived contents based on their estimated relationship with current times, which we call contemporary relevance. Contemporary relevance may be utilized to enhance access to archival document collections, increasing the likelihood that users will discover interesting or valuable material. We next present an effective strategy for estimating the contemporary relevance degrees of news articles by utilizing a learning-to-rank approach based on a variety of diverse features, and we then successfully test it on the New York Times news collection. The incorporation of the contemporary relevance computation into archival retrieval systems should enable a new search style in which search results are meant to relate to the context of searchers’ times, and thereby have the potential to engage archive users. As a proof of concept, we develop and demonstrate a working prototype of a simplified ranking model that operates on top of the Portuguese Web Archive portal (arquivo.pt).
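For intuition, the learning-to-rank idea can be reduced to a pairwise sketch: given preference pairs (an article judged more contemporarily relevant, one judged less so), a perceptron-style update learns feature weights. The two features in the usage below (say, overlap with present-day vocabulary and presence of still-active entities) are hypothetical stand-ins, not the article's feature set:

```python
def rank_score(weights, features):
    """Linear relevance score: dot product of weights and feature values."""
    return sum(w * f for w, f in zip(weights, features))


def train_pairwise(pairs, n_features, lr=0.1, epochs=50):
    """Pairwise perceptron: for each (preferred, other) feature-vector pair,
    nudge the weights whenever the preferred item does not outscore the other."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            if rank_score(w, better) <= rank_score(w, worse):
                w = [wi + lr * (b - o) for wi, b, o in zip(w, better, worse)]
    return w
```

After training on a few preference pairs, articles with stronger present-day signals score higher than those without, which is all a pointwise archival re-ranker needs.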
Comprehending communication is dependent on analyzing the different modalities of conversation, including audio, visual, and others. This is a natural process for humans, but in digital libraries, where preservation and dissemination of digital information are crucial, it is a complex task. A rich conversational model, encompassing all modalities and their co-occurrences, is required to effectively analyze and interact with digital information. Currently, the analysis of co-speech gestures in videos is done through manual annotation by linguistic experts based on textual searches. However, this approach is limited and does not fully utilize the visual modality of gestures. This paper proposes a visual gesture retrieval method using a deep learning architecture to extend current research in this area. The method is based on body keypoints and uses an attention mechanism to focus on specific groups. Experiments were conducted on a subset of the NewsScape dataset, which presents challenges such as multiple people, camera perspective changes, and occlusions. A user study was conducted to assess the usability of the results, establishing a baseline for future gesture retrieval methods in real-world video collections. The results of the experiment demonstrate the high potential of the proposed method in multimodal communication research and highlight the significance of visual gesture retrieval in enhancing interaction with video content. The integration of visual similarity search for gestures in the open-source multimedia retrieval stack, vitrivr, can greatly contribute to the field of computational linguistics. This research advances the understanding of the role of the visual modality in co-speech gestures and highlights the need for further development in this area.
Finding a suitable open access journal to publish academic work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders’ conditions and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. A systematic requirements analysis was conducted in the form of a survey. The developed tool suggests open access journals based on the title, abstract and references provided by the user. The recommendations are built on open data, are publisher-independent and work across domains and languages. Transparency is provided by the tool’s open source nature, an open application programming interface (API) and by indicating the matches on which the shown recommendations are based. The recommendation quality has been evaluated using two different evaluation techniques, including several new recommendation methods. We were able to improve the results from our previous paper with a pre-trained transformer model. The beta version of the tool received positive feedback from the community and in several test sessions. We developed a recommendation system for open access journals to help researchers find a suitable journal. The open tool has been extensively tested, and we found possible improvements for our current recommendation technique. Development by two German academic libraries ensures the longevity and sustainability of the system.
One key frontier of artificial intelligence (AI) is the ability to comprehend research articles and validate their findings, a formidable challenge for AI systems competing with human intelligence and intuition. As a benchmark of research validation, the existing peer-review system still stands strong despite being criticized at times by many. However, the paper vetting system has been severely strained by an influx of research paper submissions and a growing number of conferences and journals. As a result, problems including insufficient reviewers, finding the right experts, and maintaining review quality are steadily and strongly surfacing. To ease the workload of the stakeholders associated with the peer-review process, we probed into what an AI-powered review system would look like. In this work, we leverage the interaction between a paper’s full text and the corresponding peer-review text to predict the overall recommendation score and final decision. We do not envisage AI reviewing papers in the near future. Still, we intend to explore the possibility of a human–AI collaboration in the decision-making process to make the current system FAIR. The idea is to have an assistive decision-making tool for chairs/editors to provide them with an additional layer of confidence, especially with borderline and contrastive reviews. We use a deep attention network between the review text and the paper to learn the interactions and predict the overall recommendation score and final decision. We also use sentiment information encoded within peer-review texts to further guide the outcome. Our proposed model outperforms recent state-of-the-art competitive baselines. We release the code of our implementation here: https://github.com/PrabhatkrBharti/PEERRec.git.
This special issue brings together three areas of research and scholarly work that would have demonstrated few obvious relationships three decades ago. Digital libraries research, practices and infrastructures have transformed the study of ancient inscriptions by providing organizing principles for collections building, defining interoperability requirements and developing innovative user tools and services. Yet linking collections and their contents to support advanced scholarly work in epigraphy and paleography tests the limits of current digital libraries applications. This is due, in part, to the magnitude and heterogeneity of works created over a time period of more than five millennia. The remarkable diversity ranges from the number of types of artifacts to the methods used in their production to the singularity of individual marks contained within them. Conversion of analog collections to digital repositories is well underway—but most often not in a way that meets the basic requirements needed to support scholarly workflows. This is beginning to change. In addition to efforts to develop complex data objects, linking strategies and repository aggregation, there is new use of imaging technologies and computational approaches to recognize, enhance, recover and restore writings. Most recently, leading-edge artificial intelligence methods are being applied for the automated transcription of handwritten text into machine-readable forms. The articles in this special issue give examples of each.
Decisions in agriculture are increasingly data-driven. However, valuable agricultural knowledge is often locked away in free-text reports, manuals and journal articles. Specialised search systems are needed that can mine agricultural information to provide relevant answers to users’ questions. This paper presents AgAsk—an agent able to answer natural language agriculture questions by mining scientific documents. We carefully survey and analyse farmers’ information needs. On the basis of these needs, we release an information retrieval test collection comprising real questions, a large collection of scientific documents split into passages, and ground truth relevance assessments indicating which passages are relevant to each question. We implement and evaluate a number of information retrieval models to answer farmers’ questions, including two state-of-the-art neural ranking models. We show that neural rankers are highly effective at matching passages to questions in this context. Finally, we propose a deployment architecture for AgAsk that includes a client based on the Telegram messaging platform and a retrieval model deployed on commodity hardware. The test collection we provide is intended to stimulate more research into methods for matching natural language questions to answers in scientific documents. While the retrieval models were evaluated in the agriculture domain, they are generalisable and of interest to others working on similar problems. The test collection is available at: https://github.com/ielab/agvaluate.
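As a point of reference for the retrieval models evaluated, a classical bag-of-words ranker such as BM25 is a common first-stage baseline for matching passages to questions. The sketch below is a generic BM25 implementation, not AgAsk's code:

```python
import math
from collections import Counter


def bm25_rank(query, passages, k1=1.5, b=0.75):
    """Score tokenized passages against a tokenized query with BM25
    and return passage indices ordered best-first."""
    n = len(passages)
    avgdl = sum(len(p) for p in passages) / n
    df = Counter()  # document frequency of each term
    for p in passages:
        df.update(set(p))

    def score(p):
        tf = Counter(p)
        s = 0.0
        for t in set(query):
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(p) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        return s

    return sorted(range(n), key=lambda i: score(passages[i]), reverse=True)
```

Given a question like "wheat fungicide timing", the passage sharing the most (and rarest) terms with the question is ranked first; neural rankers then re-score such candidates.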
The purpose of this work is to describe the orkg-Leaderboard software, designed to automatically extract leaderboards, defined as task–dataset–metric tuples, from large collections of empirical research papers in artificial intelligence (AI). The software supports both main workflows of scholarly publishing, viz. LaTeX files and PDF files. Furthermore, the system is integrated with the open research knowledge graph (ORKG) platform, which fosters the machine-actionable publishing of scholarly findings. Thus, the system’s output, when integrated within the ORKG’s supported Semantic Web infrastructure for representing machine-actionable ‘resources’ on the Web, enables: (1) broadly, the integration of empirical results of researchers across the world, enabling transparency in empirical research with the potential to be complete, contingent on the underlying data source(s) of publications; and (2) specifically, the ability for researchers to track progress in AI with an overview of the state of the art across the most common AI tasks and their corresponding datasets, via dynamic ORKG frontend views leveraging tables and visualization charts over the machine-actionable data. Our best model achieves performance above 90% F1 on the leaderboard extraction task, proving orkg-Leaderboard to be a practically viable tool for real-world usage. Going forward, orkg-Leaderboard transforms the leaderboard extraction task into an automated digitalization task, which has, for a long time in the community, been a crowdsourced endeavor.
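To make the task concrete: a leaderboard tuple pairs a task, dataset, and metric with a reported score. The toy sketch below pulls such a tuple out of one stylized sentence with a regular expression; the real orkg-Leaderboard system uses a trained extraction model, and the pattern here is purely illustrative:

```python
import re

# A naive, purely illustrative pattern -- the actual system uses a learned
# extraction model, not regular expressions.
PATTERN = re.compile(
    r"(?P<score>\d+(?:\.\d+)?)\s*%?\s*(?P<metric>F1|accuracy|BLEU|ROUGE)\s+"
    r"on\s+(?P<task>[\w\s-]+?)\s+on\s+the\s+(?P<dataset>[\w-]+)\s+dataset",
    re.IGNORECASE,
)


def extract_tuple(sentence):
    """Return a (task, dataset, metric, score) tuple, or None if no match."""
    m = PATTERN.search(sentence)
    if not m:
        return None
    return (m.group("task").strip(), m.group("dataset"),
            m.group("metric"), float(m.group("score")))
```

The extracted tuples are what the ORKG frontend aggregates into per-task, per-dataset progress views.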
Information extraction can support novel and effective access paths for digital libraries. Nevertheless, designing reliable extraction workflows can be cost-intensive in practice. On the one hand, suitable extraction methods rely on domain-specific training data. On the other hand, unsupervised and open extraction methods usually produce non-canonicalized extraction results. This paper is an extension of our original work and tackles the question of how digital libraries can handle such extractions and whether their quality is sufficient in practice. We focus on unsupervised extraction workflows by analyzing them in case studies in the domains of encyclopedias (Wikipedia), Pharmacy, and Political Sciences. As an extension, we analyze the extractions in more detail, verify our findings on a second extraction method, discuss another canonicalization method, and give an outlook on how non-English texts can be handled. We report on the resulting opportunities and limitations. Finally, we discuss best practices for unsupervised extraction workflows.
Through the annals of time, writing has slowly scrawled its way from the painted surfaces of stone walls to the grooves of inscriptions to the strokes of quill, pen, and ink. While we still inscribe stone (tombstones, monuments) and we continue to write on skin (tattoos abound), our quotidian method of writing on paper is increasingly abandoned in favor of the quick-to-generate digital text. And even though the stone-inscribed text of epigraphy offers demonstrably better permanence than that of writing on skin and paper—even better than that of the memory system of the modern computer (Bollacker in Am Sci 98:106, 2010)—this field of study has also made the digital leap. Today’s scholarly analyses of epigraphic content increasingly rely on high-tech approaches involving data science and computer models. This essay discusses how advances in a number of exciting technologies are enabling the digital analysis of epigraphic texts and accelerating the ability of scholars to preserve, renew, and reinvigorate the study of the inscriptions that remain from throughout history.
Systematic literature reviews in educational research have become a popular research method. A key point hereby is the choice of bibliographic databases to reach a maximum probability of finding all potentially relevant literature that deals with the research question analyzed in a systematic literature review. Guidelines and handbooks on systematic reviews recommend appropriate databases and information sources for education, along with specific search strategies. However, in many disciplines, among them educational research, there is a lack of evidence on the relevance of the databases that need to be considered to find relevant literature and lessen the risk of missing relevant publications. Educational research is an interdisciplinary field and has no core database. Instead, the field is covered by multiple disciplinary and multidisciplinary information sources that have either a national or international focus. In this article, we discuss the relevance of seven databases in systematic literature reviews in education, based on the results of an empirical data analysis of three recently published reviews. To evaluate the relevance of a database, the relevant literature of those reviews served as the gold standard. Results indicate that discipline-specific databases outperform international multidisciplinary sources, and a combination of discipline-specific international and national sources is most efficient in finding a high proportion of relevant literature. The article discusses the relevance of the databases in relation to their coverage of relevant literature, while considering practical implications for researchers performing a systematic literature search. We thus present evidence for appropriate database choices for educational and discipline-related systematic literature reviews.
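The underlying evaluation reduces to simple set arithmetic: a database's relevance is its recall of the gold-standard relevant records, and efficient combinations can be chosen greedily by marginal coverage. The database names and record sets below are invented for illustration, not the article's data:

```python
def recall(found, gold):
    """Fraction of gold-standard relevant records a database retrieves."""
    return len(found & gold) / len(gold)


def greedy_combination(databases, gold, k=2):
    """Greedily pick k databases that together cover the most gold records.

    databases: mapping of database name -> set of retrieved record ids.
    Returns the chosen names and the combined recall.
    """
    chosen, covered = [], set()
    pool = dict(databases)
    for _ in range(k):
        name = max(pool, key=lambda n: len((covered | pool[n]) & gold))
        covered |= pool.pop(name)
        chosen.append(name)
    return chosen, recall(covered, gold)
```

On such toy data, a strong discipline-specific source plus a complementary national source can reach full recall even when each alone misses records.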
Self-training is an effective solution for semi-supervised learning, in which both labeled and unlabeled data are leveraged for training. However, the application scenarios of existing self-training frameworks are mostly confined to single-label classification. Applying self-training in the multi-label scenario is difficult because, unlike in single-label classification, there is no constraint of mutual exclusion over categories, and the vast number of possible label vectors makes the discovery of credible predictions harder. To realize effective self-training in the multi-label scenario, we propose ML-DST and ML-DST+, which utilize contextualized document representations from pretrained language models. A BERT-based multi-label classifier and newly designed weighted loss functions for finetuning are proposed. Two label propagation-based algorithms, SemLPA and SemLPA+, are also proposed to enhance multi-label prediction; their similarity measure is iteratively improved through semantic-space finetuning, in which the semantic space of document representations is tuned to better reflect learnt label correlations. High-confidence label predictions are recognized by examining the prediction score on each category separately and are in turn used for both classifier finetuning and semantic-space finetuning. According to our experimental results, the performance of our approach steadily exceeds that of representative baselines under different label rates, proving the superiority of our proposed approach.
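For intuition, a generic label-propagation step over a document-similarity graph (a simplified sketch, not the proposed SemLPA/SemLPA+ algorithms) can be written as follows: per-label scores spread from labeled to unlabeled documents along cosine-similarity edges, with labeled documents clamped to their seed labels.

```python
import math


def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def propagate(vectors, seed_labels, n_labels, alpha=0.5, iters=10):
    """Spread per-label scores from labeled to unlabeled documents.

    vectors: document representations; seed_labels: {doc_index: set(label_ids)}.
    Each iteration mixes a document's scores with the similarity-weighted
    average of its neighbours' scores; seed documents are clamped.
    """
    n = len(vectors)
    sim = [[cosine(vectors[i], vectors[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [[1.0 if i in seed_labels and l in seed_labels[i] else 0.0
               for l in range(n_labels)] for i in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            total = sum(sim[i]) or 1.0
            neigh = [sum(sim[i][j] * scores[j][l] for j in range(n)) / total
                     for l in range(n_labels)]
            if i in seed_labels:  # clamp labeled documents to their seeds
                new.append(scores[i])
            else:
                new.append([alpha * s + (1 - alpha) * nb
                            for s, nb in zip(scores[i], neigh)])
        scores = new
    return scores
```

Documents close to a labeled neighbour in the semantic space accumulate that neighbour's label scores; improving the representation space (as the semantic-space finetuning does) directly improves which neighbours dominate.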
Metadata enrichment through text mining techniques is becoming one of the most significant tasks in digital libraries. Owing to the exponential increase of open-access publications, several new challenges have emerged: raw data are usually big, unstructured, and drawn from heterogeneous sources. In this paper, we introduce a text analysis framework implemented in extended SQL that exploits the scalability of modern database management systems. The framework makes it possible to build performant end-to-end text mining pipelines that combine data harvesting, cleaning, processing, and text analysis in a single environment. SQL is chosen for its declarative nature, which enables fast experimentation and the construction of APIs through which domain experts can edit text mining workflows via easy-to-use graphical interfaces. Our experimental analysis demonstrates that the proposed framework is effective and achieves significant speedups, up to three times faster in common use cases, compared to other popular approaches.
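The core idea of pushing text processing into declarative SQL can be sketched with Python's stdlib sqlite3 module, which is emphatically not the paper's extended-SQL engine but supports the same pattern: register a text-processing step as a user-defined SQL function, then invoke cleaning and analysis inside a single query. The table, data, and `clean` function are invented for illustration.

```python
# Minimal sketch (stdlib sqlite3, not the paper's framework) of a
# text-cleaning step executed inside the database via a UDF.
import re
import sqlite3

def clean(text):
    """Lowercase and strip characters outside [a-z0-9 ]."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower())

con = sqlite3.connect(":memory:")
con.create_function("clean", 1, clean)   # expose clean() to SQL
con.execute("CREATE TABLE docs (id INTEGER, body TEXT)")
con.executemany("INSERT INTO docs VALUES (?, ?)",
                [(1, "Open-Access PUBLICATIONS!"), (2, "Raw data...")])

# Cleaning happens declaratively, inside the query itself.
rows = con.execute("SELECT id, clean(body) FROM docs ORDER BY id").fetchall()
assert rows == [(1, "openaccess publications"), (2, "raw data")]
```

Because the transformation lives in the query rather than in client-side loops, the same declarative statement can be generated or edited by a graphical workflow tool, which is the affordance the abstract highlights.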
Question-answering (QA) platforms such as Stack Overflow, Quora, and Stack Exchange have become favourite places to exchange knowledge with community users, and finding answers to simple or complex questions is now easier than ever. However, the sheer volume of responses from users all around the world poses substantial challenges for these community QA (CQA) systems. Stack Overflow allows users to ask questions and to answer or comment on others' posts, and it rewards users whose posts are appreciated by the community with reputation points; the accepted answer earns the answerer the maximum reputation points, and more reputation points unlock more website privileges. Every answerer therefore wants their answer to be accepted, yet little research has examined whether a user's answer will be accepted or not. This paper proposes a model that predicts answer acceptability and the reason for it. The model's output helps answerers anticipate acceptance: if the predicted probability of acceptance is low, the answerer can revise the answer immediately. A comparison with the state of the art confirms that the proposed model achieves better performance.
In the academic world, the number of scientists grows every year, and so does the number of authors sharing the same name. Consequently, it is challenging to assign newly published papers to their respective authors, and author name ambiguity is considered a critical open problem in digital libraries. This paper proposes an author name disambiguation approach that links author names to their real-world entities by leveraging their co-authors and domain of research. We use data collected from the DBLP repository, which contains more than 5 million bibliographic records authored by around 2.6 million co-authors. Our approach first groups authors who share the same last name and the same first-name initial. Each author within a group is then identified by capturing the relation with his or her co-authors and area of research, the latter represented by the titles of the author's validated publications. For this purpose, we train a neural network model that learns from representations of the co-authors and titles. We validated the effectiveness of our approach through extensive experiments on a large dataset.
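The grouping step described above is a standard blocking strategy: candidate duplicates are bucketed by last name plus first-name initial so the neural model only has to disambiguate within each small block. A minimal sketch, with invented author names (the real pipeline operates on DBLP records):

```python
# Illustrative sketch of blocking by last name + first-name initial,
# the grouping step that precedes per-block disambiguation.
from collections import defaultdict

def block_key(name: str) -> str:
    """'J. Smith' and 'John Smith' share the key 'smith_j'."""
    parts = name.replace(".", "").split()
    first, last = parts[0], parts[-1]
    return f"{last.lower()}_{first[0].lower()}"

authors = ["John Smith", "J. Smith", "Jane Smith", "Li Wei"]
blocks = defaultdict(list)
for a in authors:
    blocks[block_key(a)].append(a)

assert blocks["smith_j"] == ["John Smith", "J. Smith", "Jane Smith"]
assert blocks["wei_l"] == ["Li Wei"]
```

Note that "Jane Smith" lands in the same block as "John Smith": blocking deliberately over-groups, and it is the downstream model over co-authors and titles that separates the distinct identities within a block.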
Retrievability measures the influence a retrieval system has on access to the information in a given collection of items. It supports evaluation of a search system and can reveal biases in its behaviour. In this paper, we investigate retrievability in an integrated search system containing items from various categories, focusing in particular on the datasets, publications, and variables of a real-life digital library. The traditional instruments, namely the Lorenz curve and the Gini coefficient, are employed to visualise the inequality in retrievability scores across the three retrievable document types (datasets, publications, and variables). Our results show a significant popularity bias, with certain items retrieved far more often than others. In particular, certain datasets are much more likely to be retrieved than other datasets in the same category, whereas the retrievability scores of items in the publication and variable categories are more evenly distributed. The distribution of document retrievability is thus more skewed for datasets than for publications and variables.
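The Gini coefficient mentioned above summarizes the Lorenz curve in a single number: 0 means every item is equally retrievable, while values approaching 1 indicate that retrieval is concentrated on a few items. A minimal sketch over invented retrievability scores (the study's scores come from running queries against the live system):

```python
# Sketch (not the authors' code) of the Gini coefficient over
# non-negative retrievability scores.

def gini(scores):
    """Gini coefficient: 0 = perfect equality, near 1 = strong bias."""
    xs = sorted(scores)           # Lorenz ordering: poorest first
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formula over the sorted scores.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

assert gini([1, 1, 1, 1]) == 0.0       # uniform retrievability
assert gini([0, 0, 0, 10]) == 0.75     # one item dominates
```

Under this measure, the paper's finding reads as: the dataset category yields a higher Gini coefficient (a more bowed Lorenz curve) than the publication and variable categories.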