News in eLiteracias


Information science’s contributions towards emerging open evaluation practices

Performance Measurement and Metrics, Volume 20, Issue 1, Page 2-16, February 2019.
Purpose The purpose of this paper is to discuss emerging practices in open evaluation, namely, the concept of co-evaluation and how research on evaluation developed within information science can contribute to enhancing stakeholders' and citizens' involvement in open science. Design/methodology/approach A meta-evaluative and transdisciplinary approach – directed toward the intersection between information science, evaluation, competences management, sustainability transitions management and participatory methodologies – provided the basis for the identification of and subsequent reflection on the levels of stakeholder participation embedded into ISO 16439's (2014) methods for assessing the impact of libraries and on the domains and competences to be mobilized for (co)evaluation. The contributions of the Engaged 2020 Action Catalogue, as well as several taxonomies of evaluator competences and the Council of Europe's (2016) conceptual model of competences for a democratic culture, were particularly relevant for this (re)construction process. Findings Two results of the line of research carried out since 2012 at the Faculty of Social Sciences and Humanities of the Universidade NOVA de Lisboa (Portugal) can significantly contribute to improving stakeholders' participation in Open Science: ISO 16439's systematization of methods and procedures for assessing the impact of libraries and the (co-)evaluation competency framework. Originality/value This paper presents the transdisciplinary concept of co-evaluation and examines the current epistemological challenges to science by analyzing the general tendency toward openness through the lens of research on evaluation and participatory methods developed within information science.
  • 30 November 2018, 02:39

Bibliometric assessment of the global research output in Jatropha curcas Linn as reflected by papers indexed in Science Citation Index-Expanded

Performance Measurement and Metrics, Volume 20, Issue 1, Page 17-26, February 2019.
Purpose The purpose of this paper is to examine the quantum of research papers and the citations these papers received for the plant Jatropha curcas Linn. Design/methodology/approach Articles published on Jatropha curcas Linn during 1987–2016 were downloaded from Science Citation Index-Expanded (SCIE) by using the keyword Jatropha* on October 18, 2017. The search resulted in 4,276 records in all. The authors analyzed only 4,111 documents which were published as review articles, research articles and proceeding papers using the complete count methodology. The data were analyzed to examine the pattern of growth of output, most prolific countries, institutions and authors. It also identified highly cited authors and journals used for communicating research results. Findings The study indicates that India, China and Brazil are the main contributors to the field and the pattern of growth indicates a steep rise in publication output especially in the last block of 2015–2016. Most of the prolific institutions and authors were also located in these countries. However, the impact of output was different from the pattern of output. The publication output is scattered in more than 1,000 journals published from different parts of the globe. Originality/value The plant of Jatropha curcas Linn is a highly useful plant as a source of biofuel energy. This is the second study in English language on this plant and has used a large set of publication data as compared to the first. The findings of the study may be useful for policy makers as well as for researchers working in the field of biofuel energy.
  • 29 November 2018, 11:57

Factors influencing customers’ willingness to participate in virtual brand community’s value co-creation

Online Information Review, Volume 43, Issue 3, Page 440-461, June 2019.
Purpose The purpose of this paper is to explore the factors influencing customers’ willingness to participate in virtual brand community’s value co-creation and help companies better operate the virtual brand community. Design/methodology/approach Based on social cognitive theory and the features of the virtual brand community, this paper constructed a model of factors influencing customers’ willingness to participate in virtual brand community’s value co-creation. Then this paper quantitatively analyzed the mediating effect and the moderating effect. Findings The empirical analysis came to the following conclusions: first, in virtual brand communities, customers’ willingness to participate in value co-creation would be influenced by subject factors, environment factors, brand factors and perceived value factor. Second, customer involvement is an important moderator. The more involved the customer is, the more he/she will rely on the virtual brand community. Particularly, customer involvement has a positive moderating effect on the influence of subject factors, while it has a negative moderating effect on the influence of community experience and community trust. Third, perceived value plays a significant mediating role between subject factors and customers’ willingness to participate in value co-creation. Practical implications The results of this study can help companies better understand the influence of external factors like environment and brand so that they can better operate the virtual brand community and encourage customers to contribute to the development of the community and the brand. Originality/value Most of the existing studies focused on the formation of virtual brand communities and customers’ participation behaviors, but there is limited research focusing on what contributes to customers’ participation in value co-creation of virtual brand communities. This study, therefore, attempts to bridge the research gap.
  • 22 November 2018, 02:54

What difference do data make? Data management and social change

Online Information Review, Ahead of Print.
Purpose The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists. Design/methodology/approach The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces their own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain. Findings In analysing the three case studies, the authors surface how temporal dimensions, geographic scale and sociotechnical politics influence their differing data management strategies. Originality/value The authors build upon extant literature on data management infrastructure, which primarily discusses how these practices manifest in scientific and institutional research settings, to analyse how data management infrastructure is often crucial to social movements that rely on data to surface political issues.
  • 22 November 2018, 02:50

Investigating the brand evangelism effect of community fans on social networking sites

Online Information Review, Ahead of Print.
Purpose Many enterprises recognize that social media is a valuable source of information propagation for brands. Using the self-congruity and social identity theories as theoretical bases, the purpose of this paper is to develop an integrated conceptual model and explore the effects of brand-evangelism-related behavioral decisions of enterprises on virtual community members. Design/methodology/approach This study targeted community members who had purchased a specific cosmetic brand’s products and had been members of an official brand fan page for at least one year. A survey yielded 488 valid samples, and structural equation modeling was used to conduct path analyses. Findings The results indicated that seven hypothesized paths were supported and exhibited desirable goodness of fit. Value congruity can be used to explain effects of dual identification on various relationships. Relationships among variables of brand evangelism are not independent. Specifically, the effect of brand purchase intentions on positive brand referrals is higher than that on oppositional brand referrals. Practical implications The findings can help brand community managers to adopt innovative and effective strategies to gain community members’ identification and maintain a desirable relationship between business and community members. In addition, this study should help marketers to increase the opportunity of maximizing the brand evangelism effect. Originality/value This study contributes to the understanding of multiple perspectives of value congruity and adopts an extended viewpoint, recognizing that community members not only face brand value and self-congruity issues but also have community membership goals and values related to fit.
  • 22 November 2018, 02:41

Are computers better than smartphones for web survey responses?

Online Information Review, Volume 43, Issue 3, Page 350-368, June 2019.
Purpose The purpose of this paper is to examine the effect of smartphones and computers as web survey entry response devices on the quality of responses in different question formats and across different survey invitation delivery modes. Respondents’ device preference and response immediacy were also compared. Design/methodology/approach Two field experiments were conducted with cluster sampling and a census of all students in a public university in the USA. Findings A device effect on response quality was found only in computer-aided self-interviews, not in e-mail delivered web surveys. Although the computer was the preferred device, the smartphone’s immediate response rate was significantly higher than the computer’s. Research limitations/implications The sample was restricted to college students who are more proficient users of smartphones and have high access to computers. However, the direct comparison in the two studies using the same population increases the internal validity of the study comparing different web survey delivery modes. Practical implications Because device has only minor effects on response quality, researchers can consider using more smartphones for fieldwork such as computer-aided self-interviews to complement e-mail delivered surveys. Originality/value This is the first study that compares the response device effects of computer-aided self-interviews and e-mail delivered web surveys. Because web surveys are increasingly used and various devices are being used to collect data, understanding how respondents behave on different devices, along with the strengths and weaknesses of different survey delivery methods, helps researchers to improve data quality and develop effective web survey delivery and participant recruitment.
  • 16 November 2018, 02:33

Publishing speed and acceptance rates of open access megajournals

Online Information Review, Ahead of Print.
Purpose The purpose of this paper is to look at two particular aspects of open access megajournals, a new type of scholarly journal. Such journals only review for scientific soundness and leave the judgment of scientific impact to the readers. The two leading journals currently each publish more than 20,000 articles per year. The publishing speed and acceptance rates of such journals are the topics of the study. Design/methodology/approach Submission, acceptance and publication dates for a sample of articles in 12 megajournals were manually extracted from the articles. Information about acceptance rates was obtained using web searches of journal home pages, editorials, blogs, etc. Findings The time from submission to publication varies widely, with engineering megajournals publishing much more rapidly; on average, however, it takes almost half a year to get published, particularly in the high-volume biomedical journals. As some of the journals have grown in publication volume, the average review time has increased by almost two months. Acceptance rates have slightly decreased over the past five years, and are now in the range of 50–55 percent. Originality/value This is the first empirical study of how long it takes to get published in megajournals and it highlights a clear increase of around two months in publishing time. Currently, the review process in the biomedical megajournals takes as long as in regular, more selective journals in the same fields. Possible explanations include increasing difficulty in finding willing and motivated reviewers and a higher share of submissions from developing countries.
  • 12 November 2018, 03:07
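A minimal sketch (not the authors' code) of the kind of delay calculation the abstract describes: computing submission-to-publication times from manually extracted article dates. The column names and sample records below are hypothetical.

```python
# Hypothetical sample of manually extracted article dates; not the paper's data.
import pandas as pd

articles = pd.DataFrame({
    "journal": ["PLOS ONE", "Scientific Reports", "IEEE Access"],
    "submitted": ["2017-01-10", "2017-02-01", "2017-03-05"],
    "accepted": ["2017-05-02", "2017-06-15", "2017-04-01"],
    "published": ["2017-05-20", "2017-07-01", "2017-04-10"],
})

for col in ("submitted", "accepted", "published"):
    articles[col] = pd.to_datetime(articles[col])

# Days from submission to publication and to acceptance.
articles["sub_to_pub_days"] = (articles["published"] - articles["submitted"]).dt.days
articles["sub_to_acc_days"] = (articles["accepted"] - articles["submitted"]).dt.days

# Average delay per journal, the core quantity discussed in the study.
print(articles.groupby("journal")["sub_to_pub_days"].mean())
```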

A corpus of debunked and verified user-generated videos

Online Information Review, Volume 43, Issue 1, Page 72-88, February 2019.
Purpose As user-generated content (UGC) is entering the news cycle alongside content captured by news professionals, it is important to detect misleading content as early as possible and avoid disseminating it. The purpose of this paper is to present an annotated dataset of 380 user-generated videos (UGVs), 200 debunked and 180 verified, along with 5,195 near-duplicate reposted versions of them, and a set of automatic verification experiments aimed to serve as a baseline for future comparisons. Design/methodology/approach The dataset was formed using a systematic process combining text search and near-duplicate video retrieval, followed by manual annotation using a set of journalism-inspired guidelines. Following the formation of the dataset, the automatic verification step was carried out using machine learning over a set of well-established features. Findings Analysis of the dataset shows distinctive patterns in the spread of verified vs debunked videos, and the application of state-of-the-art machine learning models shows that the dataset poses a particularly challenging problem to automatic methods. Research limitations/implications Practical limitations constrained the current collection to three platforms: YouTube, Facebook and Twitter. Furthermore, there exists a wealth of information that can be drawn from the dataset analysis, which goes beyond the constraints of a single paper. Extension to other platforms and further analysis will be the object of subsequent research. Practical implications The dataset analysis indicates directions for future automatic video verification algorithms, and the dataset itself provides a challenging benchmark. Social implications Having a carefully collected and labelled dataset of debunked and verified videos is an important resource both for developing effective disinformation-countering tools and for supporting media literacy activities. Originality/value Besides its importance as a unique benchmark for research in automatic verification, the analysis also allows a glimpse into the dissemination patterns of UGC, and possible telltale differences between fake and real content.
  • 12 November 2018, 03:04
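A hedged sketch of the kind of automatic verification baseline the abstract mentions: a supervised classifier trained over tabular video features to separate debunked from verified items. The features and data below are invented for illustration and are not the paper's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-video features: e.g. share count, account age,
# number of near-duplicate reposts, comment sentiment score (synthetic here).
X = rng.normal(size=(380, 4))
# Labels: 1 = debunked, 0 = verified (class sizes mirror the abstract).
y = np.array([1] * 200 + [0] * 180)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("5-fold F1:", scores.mean())
```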

Context-aware restricted Boltzmann machine meets collaborative filtering

Online Information Review, Ahead of Print.
Purpose The purpose of this paper is to propose an approach to incorporate contextual information into collaborative filtering (CF) based on the restricted Boltzmann machine (RBM) and deep belief networks (DBNs). Traditionally, neither the RBM nor its derivative model has been applied to modeling contextual information. In this work, the authors analyze the RBM and explore how to utilize a user’s occupation information to enhance recommendation accuracy. Design/methodology/approach The proposed approach is based on the RBM. The authors employ user occupation information as a context to design a context-aware RBM and stack the context-aware RBM to construct DBNs for recommendations. Findings The experiments on the MovieLens data sets show that the user occupation-aware RBM outperforms other CF models, and combining different context-aware models by mutual information can achieve better accuracy. Moreover, the context-aware DBNs model is superior to baseline methods, indicating that deep networks are better suited to extracting preference features. Originality/value To improve recommendation accuracy through modeling contextual information, the authors propose context-aware CF approaches based on the RBM. Additionally, the authors attempt to introduce hybrid weights based on information entropy to combine context-aware models. Furthermore, the authors stack the RBM to construct a context-aware multilayer network model. The results of the experiments not only convey that the context-aware RBM has potential in terms of contextual information but also demonstrate that the combination method, the hybrid recommendation and the multilayer neural network extension have significant benefits for recommendation quality.
  • 12 November 2018, 03:01
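A simplified numpy sketch of the general idea of a context-aware RBM for CF: a one-hot occupation context shifts the hidden-unit biases of a binary RBM trained with one step of contrastive divergence. This is illustrative only and is not the authors' model, training regime or MovieLens preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_items, n_hidden, n_context = 50, 20, 5
W = 0.01 * rng.normal(size=(n_items, n_hidden))    # item-hidden weights
U = 0.01 * rng.normal(size=(n_context, n_hidden))  # context-hidden weights
b_v = np.zeros(n_items)                            # visible biases
b_h = np.zeros(n_hidden)                           # hidden biases

def cd1_update(v0, c, lr=0.05):
    """One CD-1 step for a single (binary ratings vector, context) pair."""
    global W, U, b_v, b_h
    h0_prob = sigmoid(v0 @ W + c @ U + b_h)
    h0 = (rng.random(n_hidden) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T + b_v)               # reconstruction of ratings
    h1_prob = sigmoid(v1_prob @ W + c @ U + b_h)
    W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
    U += lr * (np.outer(c, h0_prob) - np.outer(c, h1_prob))
    b_v += lr * (v0 - v1_prob)
    b_h += lr * (h0_prob - h1_prob)

# Toy training data: binary "liked" vectors and one-hot occupation contexts.
ratings = (rng.random((100, n_items)) < 0.2).astype(float)
contexts = np.eye(n_context)[rng.integers(0, n_context, size=100)]

for epoch in range(10):
    for v0, c in zip(ratings, contexts):
        cd1_update(v0, c)

# Score unseen items for one user by reconstructing the visible layer.
h = sigmoid(ratings[0] @ W + contexts[0] @ U + b_h)
pred = sigmoid(h @ W.T + b_v)
print("Top recommended item:", int(np.argmax(pred * (1 - ratings[0]))))
```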

An exploratory approach to the computational quantification of journalistic values

Online Information Review, Volume 43, Issue 1, Page 133-148, February 2019.
Purpose News algorithms not only help the authors to efficiently navigate the sea of available information, but also frame information in ways that influence public discourse and citizenship. Indeed, the likelihood that readers will be exposed to and read given news articles is structured into news algorithms. Thus, ensuring that news algorithms uphold journalistic values is crucial. In this regard, the purpose of this paper is to quantify journalistic values to make them readable by algorithms through taking an exploratory approach to a question that has not been previously investigated. Design/methodology/approach The author matched the textual indices (extracted from natural language processing/automated content analysis) with human conceptions of journalistic values (derived from survey analysis) by implementing partial least squares path modeling. Findings The results suggest that the numbers of words or quotes news articles contain have a strong association with the survey respondent assessments of their balance, diversity, importance and factuality. Linguistic polarization was an inverse indicator of respondents’ perception of balance, diversity and importance. While linguistic intensity was useful for gauging respondents’ perception of sensationalism, it was an ineffective indicator of importance and factuality. The numbers of adverbs and adjectives were useful for estimating respondents’ perceptions of factuality and sensationalism. In addition, the greater numbers of quotes, pair quotes and exclamation/question marks in news headlines were associated with respondents’ perception of lower journalistic values. The author also found that the assessment of journalistic values influences the perception of news credibility. Research limitations/implications This study has implications for computational journalism, credibility research and news algorithm development. Originality/value It represents the first attempt to quantify human conceptions of journalistic values with textual indices.
  • 12 November 2018, 02:57
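A minimal illustration of extracting the kind of textual indices the abstract mentions (word counts, quotes, exclamation/question marks in headlines). The paper pairs such indices with survey data via PLS path modeling; only the feature-extraction side is sketched here, and a crude "-ly" heuristic stands in for real part-of-speech tagging of adverbs.

```python
import re

def textual_indices(headline: str, body: str) -> dict:
    words = re.findall(r"[A-Za-z']+", body)
    return {
        "n_words": len(words),
        "n_quotes": body.count('"') // 2,                  # rough count of quoted spans
        "n_adverb_like": sum(w.lower().endswith("ly") for w in words),
        "headline_exclaim_question": headline.count("!") + headline.count("?"),
    }

article = {
    "headline": "Markets tumble! What happens next?",
    "body": 'Analysts said the drop was "unusually sharp" and clearly unexpected.',
}
print(textual_indices(article["headline"], article["body"]))
```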

Location impact on source and linguistic features for information credibility of social media

Online Information Review, Volume 43, Issue 1, Page 89-112, February 2019.
Purpose Social media platforms provide a source of information about events. However, this information may not be credible, and the distance between an information source and the event may impact on that credibility. Therefore, the purpose of this paper is to develop an understanding of the relationship between sources, physical distance from the event and the impact on credibility in social media. Design/methodology/approach In this paper, the authors focus on the impact of location on the distribution of content sources (informativeness and source) for different events, and identify the semantic features of the sources and the content of different credibility levels. Findings The study found that source location impacts on the number of sources across different events. Location also impacts on the proportion of semantic features in social media content. Research limitations/implications This study illustrated the influence of location on credibility in social media. The study provided an overview of the relationship between content types including semantic features, the source and event locations. The authors will use the findings of this study to build a credibility model in future research. Practical implications The results of this study provide a new understanding of reasons behind the overestimation problem in current credibility models when applied to different domains: such models need to be trained on data from the same place as the event, as that makes the model more stable. Originality/value This study investigates several events – including crisis, politics and entertainment – with a consistent methodology. This gives new insights into the distribution of sources, credibility and other information types within and outside the country of an event. Also, this study used the power of location to find alternative approaches to assess credibility in social media.
  • 12 November 2018, 02:54

Event news detection and citizens community structure for disaster management in social networks

Online Information Review, Volume 43, Issue 1, Page 113-132, February 2019.
Purpose Nowadays, event detection is important for gathering news from social media. Indeed, it is widely employed by journalists to generate early alerts of reported stories. In order to incorporate available data on social media into a news story, journalists must manually process, compile and verify the news content within a very short time span. Despite its utility and importance, this process is time-consuming and labor-intensive for media organizations. For this reason, and because social media provides an essential source of data that supports professional journalists, the purpose of this paper is to propose a citizen clustering technique which allows the community of journalists and media professionals to document news during crises. Design/methodology/approach The authors develop, in this study, an approach for detecting news of natural hazard events and clustering at-risk citizen groups, based on three major steps. In the first stage, the authors present a pipeline of several natural language processing tasks: event trigger detection, applied to retrieve potential event triggers; named entity recognition, used for the detection and recognition of event participants related to the extracted event triggers; and, ultimately, a dependency analysis between all the extracted data. Analyzing the ambiguity and vagueness of the similarity of news plays a key role in event detection, an issue that was ignored in traditional event detection techniques. To this end, in the second step of the approach, the authors apply fuzzy set techniques to these extracted events to enhance the clustering quality and remove the vagueness of the extracted information. Then, the computed degree of citizen danger is used as input to the proposed citizen clustering method in order to detect citizen communities with similar disaster degrees. Findings Empirical results indicate that homogeneous and compact citizen clusters can be detected using the suggested event detection method. It can also be observed that event news can be analyzed efficiently using fuzzy theory. In addition, the proposed visualization process plays a crucial role in data journalism, as it is used to analyze event news, as well as in the final presentation of the detected at-risk citizen clusters. Originality/value The introduced citizen clustering method helps journalists and editors to better judge the veracity of social media content, navigate the overwhelming volume of content, identify eyewitnesses and contextualize the event. The empirical analysis results illustrate the efficiency of the developed method for both real and artificial networks.
  • 8 November 2018, 10:25
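A hedged numpy sketch of the fuzzy-set step in the spirit of the abstract: fuzzy c-means clustering applied to a vector of per-citizen danger degrees so that citizens with similar disaster degrees fall into the same group. The danger values and the choice of c-means are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]  # weighted cluster centers
        dist = np.abs(x - centers.T) + 1e-9             # (n_points, n_clusters)
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # membership update
    return centers.ravel(), u

# Hypothetical danger degrees in [0, 1] for a set of citizens.
danger = [0.05, 0.1, 0.12, 0.5, 0.55, 0.6, 0.9, 0.95]
centers, memberships = fuzzy_c_means(danger)
print("cluster centers:", np.round(centers, 2))
print("hard assignment:", memberships.argmax(axis=1))
```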

Are mega-journals a publication outlet for lower quality research? A bibliometric analysis of Spanish authors in PLOS ONE

Online Information Review, Ahead of Print.
Purpose Open-access mega-journals (OAMJs), which apply a peer-review policy based solely on scientific soundness, elicit opposing views. Sceptical authors believe that OAMJs are simply an easy target to publish uninteresting papers that would not be accepted in more selective traditional journals. The purpose of this paper is to investigate any differences in scholars’ considerations of OAMJs by analysing the productivity and impact of Spanish authors in Biology and Medicine who publish in PLOS ONE. Design/methodology/approach Scopus was used to identify the most prolific Spanish authors in Biology and Medicine between 2013 and 2017 and to determine their publication patterns in PLOS ONE. Any differences in terms of citation impact between Spanish authors who publish frequently in PLOS ONE and the global Spanish output in Biology and Medicine were measured. Findings Results show a moderate correlation between the total number of articles published by prolific authors in Biology and Medicine and the number of articles they publish in PLOS ONE. Authors who publish frequently in PLOS ONE tend to publish more frequently than average in Quartile 1 and Top 10 per cent impact journals and their articles are more frequently cited than average too, suggesting that they do not submit to PLOS ONE for the purpose of gaining easier publication in a high-impact journal. Research limitations/implications The study is limited to one country, one OAMJ and one discipline and does not investigate whether authors select PLOS ONE for what they might regard as their lower quality research. Originality/value Very few studies have empirically addressed the implications of the soundness-based peer-review policy applied by OAMJs.
  • 5 November 2018, 08:49

Social media analytics: analysis and visualisation of news diffusion using NodeXL

Online Information Review, Volume 43, Issue 1, Page 149-160, February 2019.
Purpose The purpose of this paper is to provide an overview of NodeXL in the context of news diffusion. Journalists often include a social media dimension in their stories but lack the tools to get digital photos of the virtual crowds about which they write. NodeXL is an easy-to-use tool for collecting, analysing, visualising and reporting on the patterns found in collections of connections in streams of social media. With a network map, patterns emerge that highlight key people, groups, divisions and bridges, themes and related resources. Design/methodology/approach This study conducts a literature review of previous empirical work which has utilised NodeXL and highlights the potential of NodeXL to provide network insights into virtual crowds during emerging news events. It then develops a number of guidelines which can be utilised by news media teams to measure and map information diffusion during emerging news events. Findings One emergent software application known as NodeXL has allowed journalists to take “group photos” of the connections among a group of users on social media. It was found that a diverse range of disciplines utilise NodeXL in academic research. Furthermore, based on the features of NodeXL, a number of guidelines were developed which provide insight into how to measure and map emerging news events on Twitter. Social implications With a set of social media network images a journalist can cover a set of social media content streams and quickly grasp “situational awareness” of the shape of the crowd. Since popular support on social media is often cited but not documented, NodeXL social media network maps can help journalists quickly document the social landscape utilising an innovative approach. Originality/value This is the first empirical study to review literature on NodeXL, and to provide insight into the value of network visualisations and analytics for the news media domain. Moreover, it is the first empirical study to develop guidelines that will act as a valuable resource for newsrooms looking to acquire insight into emerging news events from the stream of social media posts. In the era of fake news and automated accounts (i.e. bots), the ability to highlight opinion leaders and ascertain their allegiances is of importance in today’s news climate.
  • 24 October 2018, 07:30
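NodeXL itself is an Excel-based tool; as a rough Python analogue of the kind of "group photo" it produces, the sketch below builds a directed mention network from a few hypothetical tweets with networkx and surfaces key accounts by centrality. The tweet records are invented.

```python
import networkx as nx

tweets = [  # (author, mentioned accounts) -- illustrative data only
    ("alice", ["newsdesk", "bob"]),
    ("bob", ["newsdesk"]),
    ("carol", ["alice", "newsdesk"]),
    ("dave", ["carol"]),
]

G = nx.DiGraph()
for author, mentions in tweets:
    for m in mentions:
        # Accumulate edge weights for repeated mentions.
        w = G.get_edge_data(author, m, default={"weight": 0})["weight"]
        G.add_edge(author, m, weight=w + 1)

# Key people: in-degree (who gets mentioned) and betweenness (who bridges groups).
print("most mentioned:", sorted(G.in_degree(weight="weight"), key=lambda t: -t[1])[:3])
print("bridges:", nx.betweenness_centrality(G))
```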

Phones, privacy, and predictions

Online Information Review, Ahead of Print.
Purpose Mobile phones have become one of the most favored devices to maintain social connections as well as logging digital information about personal lives. The privacy of the metadata being generated in this process has been a topic of intense debate over the last few years, but most of the debate has been focused on stonewalling such data. At the same time, such metadata is already being used to automatically infer a user’s preferences for commercial products, media, or political agencies. The purpose of this paper is to understand the predictive power of phone usage features on individual privacy attitudes. Design/methodology/approach The present study uses a mixed-method approach, involving analysis of mobile phone metadata, self-reported survey on privacy attitudes and semi-structured interviews. This paper analyzes the interconnections between user’s social and behavioral data as obtained via their phone with their self-reported privacy attitudes and interprets them based on the semi-structured interviews. Findings The findings from the study suggest that an analysis of mobile phone metadata reveals vital clues to a person’s privacy attitudes. This study finds that multiple phone signals have significant predictive power on an individual’s privacy attitudes. The results motivate a newer direction of automatically inferring a user’s privacy attitudes by leveraging their phone usage information. Practical implications An ability to automatically infer a user’s privacy attitudes could allow users to utilize their own phone metadata to get automatic recommendations for privacy settings appropriate for them. This study offers information scientists, government agencies and mobile app developers, an understanding of user privacy needs, helping them create apps that take these traits into account. Originality/value The primary value of this paper lies in providing a better understanding of the predictive power of phone usage features on individual privacy attitudes.
  • 24 October 2018, 07:26
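A hedged sketch of testing the predictive power of phone-usage features on a self-reported privacy attitude, the general question the abstract poses. The features, synthetic data and model choice are illustrative assumptions only, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: calls/day, unique contacts, apps installed, night-time usage share.
X = np.column_stack([
    rng.poisson(8, n),
    rng.poisson(30, n),
    rng.poisson(60, n),
    rng.random(n),
])
y = rng.integers(0, 2, n)   # 1 = high privacy concern (self-reported), synthetic here

model = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```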

Investigating first-generation students’ perceptions of library personnel

Performance Measurement and Metrics, Volume 20, Issue 1, Page 27-36, February 2019.
Purpose The purpose of this paper is to investigate the perceived role of library personnel in supporting first-generation students at Penn State University Libraries, how students’ perceptions of library personnel change over time, and the various experiences that influence those changes in perception. Design/methodology/approach This study employed focus groups to solicit input from first-generation students. A four-step, team-based qualitative coding process was developed, including a codebook informed by common themes and concepts drawn from the literature. Findings Findings indicate that operating from a deficit of library cultural capital often results in low awareness of available services and that changes in perception are influenced more by personal exploration than by limited interactions with personnel. Further, while currently employed interventions are well targeted, opportunities exist for enhancing efforts. Research limitations/implications As this is a case study, the findings are not generalizable. Because only four focus groups were conducted, the experiences of participants may not represent the typical scope of personnel-related interactions. Originality/value This study adds to the limited body of evidence that first-generation students struggle with a deficit of library-related cultural capital.
  • 18 October 2018, 09:50

Profile reliability to improve recommendation in social-learning context

Online Information Review, Ahead of Print.
Purpose Generally, the user requires customized information reflecting his/her current needs and interests, which are stored in his/her profile. There are many sources, such as the user’s social network, which may provide beneficial information to enrich the user’s interests for recommendation purposes. The proposed approach rests basically on predicting the reliability of the users’ profiles, which may contain conflicting interests. The paper aims to discuss this issue. Design/methodology/approach This approach handles conflicts by detecting the reliability of a user’s neighbors’ profiles. The authors consider that these profiles are dependent on one another as they may contain interests that are enriched from non-reliable profiles. The dependency relationship is determined between profiles, each of which contains interests that are structured based on the k-means algorithm. This structure takes into consideration not only the evolutionary aspect of interests but also their semantic relationships. Findings The proposed approach was validated in a social-learning context as evaluations were conducted on learners who are members of the Moodle e-learning system and the Delicious social network. The quality of the created interest structure is assessed. Then, the result of the profile reliability is evaluated. The obtained results are satisfactory. These results could benefit recommendation systems, as the selection of interests considered for enrichment depends on the reliability of the profiles in which they are stored. Research limitations/implications Some specific limitations are noted: the quality of the created interest structure should evolve in order to improve the profile reliability result. In addition, as Delicious is used as the main data source for the learners’ interest enrichment, it was necessary to obtain interests from other sources, such as e-recruitment systems. Originality/value This research is among the pioneer papers to combine the semantic as well as the hierarchical structure of interests and conflict resolution based on a profile reliability approach.
  • 18 October 2018, 01:33
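A minimal sketch of structuring a user's interests with k-means, the algorithm the abstract names, by clustering TF-IDF vectors of interest tags. The tags and cluster count are illustrative; the paper's semantic and temporal enrichment is not reproduced here.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical interest tags gathered from a learner's profile and bookmarks.
interests = [
    "machine learning", "deep learning", "neural networks",
    "moodle quizzes", "e-learning design", "course authoring",
    "bookmarking tools", "social tagging",
]

X = TfidfVectorizer().fit_transform(interests)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for cluster in range(3):
    print(cluster, [t for t, l in zip(interests, labels) if l == cluster])
```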

Researchers’ online visibility: tensions of visibility, trust and reputation

Online Information Review, Volume 43, Issue 3, Page 426-439, June 2019.
Purpose The purpose of this paper is to understand what role researchers assign to online representations on the new digital communication sites that have emerged, such as Academia, ResearchGate or Mendeley. How are researchers’ online presentations created, managed, accessed and, more generally, viewed by academic researchers themselves? And how are expectations of the academic reward system navigated and re-shaped in response to the possibilities afforded by social media and other digital tools? Design/methodology/approach Focus groups were used for the empirical investigation to learn about the role the researchers concerned assign to online representation. Findings The study shows that traditional scholarly communication documents are what scaffold trust and build reputation in the new setting as well. In this sense, the new social network sites reinforce rather than challenge the importance of formal publications. Originality/value An understanding of the different ways in which researchers fathom the complex connection between reputation and trust in relation to online visibility as a measure of, or at least an attempt at, publicity (either within academia or outside it) is essential. This paper emphasizes the need to tell different stories by exploring how researchers understand their own practices and reasons for them.
  • 15 October 2018, 07:17

A bibliometric analysis of event detection in social media

Online Information Review, Volume 43, Issue 1, Page 29-52, February 2019.
Purpose The purpose of this paper is to explore the research status and development trend of the field of event detection in social media (ED in SM) through a bibliometric analysis of academic publications. Design/methodology/approach First, publication distributions are analyzed, including the trends of publications and citations, subject distribution, predominant journals, affiliations, authors, etc. Second, an indicator of collaboration degree is used to measure scientific connective relations from different perspectives. A network analysis method is then applied to reveal scientific collaboration relations. Furthermore, based on keyword co-occurrence analysis, major research themes and their evolution over the analyzed time span are discovered. Finally, a network analysis method is applied to visualize the analysis results. Findings The area of ED in SM has received increasing attention and interest in academia, with Computer Science and Engineering as the two major research subjects. The USA and China contribute the most to the area’s development. Affiliations and authors tend to collaborate more with those within the same country. Among the 14 identified research themes, newly emerged themes such as pharmacovigilance event detection are discovered. Originality/value This study is the first to comprehensively illustrate the research status of ED in SM by conducting a bibliometric analysis. Up-to-date findings are reported, which can help relevant researchers understand the research trend, seek scientific collaborators and optimize research topic choices.
  • 15 October 2018, 07:13
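A small illustration of the kind of bibliometric counting the abstract describes: publication trends per year and a simple collaboration degree (mean authors per paper). The records below are invented for illustration only.

```python
import pandas as pd

records = pd.DataFrame({
    "year": [2014, 2015, 2015, 2016, 2016, 2016, 2017],
    "authors": [["Li", "Chen"], ["Smith"], ["Li", "Park", "Kim"],
                ["Chen"], ["Garcia", "Smith"], ["Li", "Chen"],
                ["Park", "Kim", "Li", "Wu"]],
})

# Publication trend: papers per year.
print("publications per year:\n", records.groupby("year").size())

# A simple collaboration indicator: mean number of authors per paper.
print("collaboration degree (mean authors/paper):",
      records["authors"].apply(len).mean())
```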

Extended model of online privacy concern: what drives consumers’ decisions?

Online Information Review, Ahead of Print.
Purpose The purpose of this paper is to investigate the relationship between individual and societal determinants of online privacy concern (OPC) and the behavioral intention of internet users. The study also aims to assess the degree of reciprocity between consumers’ perceived benefits of using the internet and their OPC in the context of their decision-making process in the online environment. Design/methodology/approach The study proposes a comprehensive model for the analysis of antecedents and consequences of OPC. Empirical analysis is performed using the PLS–SEM approach on a representative sample of 2,060 internet users. Findings The findings show that computer anxiety and the perceived quality of the regulatory framework are significant antecedents of OPC, while traditional values and inclinations toward security, family and social order, and social trust are not. Furthermore, the study reveals that perceived benefits of using the internet are the predominant factor explaining the intention to share personal information and adopt new technologies, while OPC dominates in explaining protective behavior. Research limitations/implications Although the authors tested an extended model, there might be other individual characteristics driving the level of OPC. This research covers just one country and further replications should be conducted to confirm findings in diverse socio-economic contexts. It is impossible to capture real behavior with survey data, and experimental studies may be needed to verify the research model. Practical implications Managers should work toward maximizing perceived benefits of consumers’ online interaction with the company, while at the same time being transparent about the gathered data and their intended purpose. Considering the latter, companies should clearly communicate their compliance with the emerging new data protection regulation. Originality/value A new extended model is developed and empirically tested, consolidating different current streams of research into one conceptual model.
  • 11 October 2018, 09:39
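The paper estimates its model with PLS-SEM; as a much simpler stand-in, the sketch below regresses intention to share information on perceived benefits and OPC with ordinary least squares, just to show the antecedent/consequence structure. All variables and data are synthetic assumptions, and this is not the PLS-SEM estimation used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic constructs (standardized scores), loosely following the abstract.
perceived_benefits = rng.normal(size=n)
computer_anxiety = rng.normal(size=n)
opc = 0.4 * computer_anxiety + rng.normal(scale=0.9, size=n)      # antecedent -> OPC
intention_to_share = (0.6 * perceived_benefits - 0.3 * opc
                      + rng.normal(scale=0.8, size=n))

X = np.column_stack([perceived_benefits, opc])
reg = LinearRegression().fit(X, intention_to_share)
print("coefficients (benefits, OPC):", np.round(reg.coef_, 2))
```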

What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter

Online Information Review, Volume 43, Issue 1, Page 53-71, February 2019.
Purpose The purpose of this paper is to examine one of the largest data sets on the use of the hashtag #fakenews, comprising over 14m tweets sent by more than 2.4m users. Design/methodology/approach Tweets referencing the hashtag (#fakenews) were collected for a period of over one year from January 3 to May 7 of 2018. Bot detection tools were employed, and the most retweeted posts, most mentions and most hashtags as well as the top 50 most active users in terms of the frequency of their tweets were analyzed. Findings The majority of the top 50 Twitter users are more likely to be automated bots, while certain users’ posts, such as those sent by President Donald Trump, dominate the most retweeted posts, which consistently associate mainstream media with fake news. The most used words and hashtags show that major news organizations are frequently referenced, with a focus on CNN, which is often mentioned in negative ways. Research limitations/implications The research study is limited to the examination of Twitter data, while ethnographic methods like interviews or surveys are needed to complement these findings. Though the data reported here do not prove direct effects, the implications of the research provide a vital framework for assessing and diagnosing the networked spammers and main actors that have been pivotal in shaping discourses around fake news on social media. These discourses, which are sometimes assisted by bots, can create a potential influence on audiences and their trust in mainstream media and understanding of what fake news is. Originality/value This paper offers results from one of the first empirical research studies on the propagation of fake news discourse on social media, shedding light on the most active Twitter users who discuss and mention the term “#fakenews” in connection with other news organizations, parties and related figures.
  • 11 October 2018, 09:37
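A minimal sketch of the descriptive counts the abstract mentions (most active users, most used hashtags) over a list of tweets. The tweet records are invented and no bot-detection step is included.

```python
from collections import Counter
import re

tweets = [  # illustrative records only
    {"user": "user_a", "text": "#FakeNews again from the mainstream media"},
    {"user": "user_b", "text": "Is this #fakenews or real? #CNN"},
    {"user": "user_a", "text": "More #fakenews #MAGA"},
]

user_counts = Counter(t["user"] for t in tweets)
hashtag_counts = Counter(
    h.lower() for t in tweets for h in re.findall(r"#\w+", t["text"])
)

print("most active users:", user_counts.most_common(2))
print("top hashtags:", hashtag_counts.most_common(3))
```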

Barriers and solutions to assessing digital library reuse: preliminary findings

Performance Measurement and Metrics, Volume 19, Issue 3, Page 130-141, November 2018.
Purpose The purpose of this paper is to highlight the initial top-level findings of a year-long comprehensive needs assessment, conducted with the digital library community, to reveal reuse assessment practices and requirements for digital assets held by cultural heritage and research organizations. The type of assessment examined is in contrast to traditional library analytics, and does not focus on access statistics, but rather on how users utilize and transform unique materials from digital collections. Design/methodology/approach This paper takes a variety of investigative approaches to explore the current landscape, and future needs, of digital library reuse assessment. This includes the development and analysis of pre- and post-study surveys, in-person and virtual focus group sessions, a literature review, and the incorporation of community and advisory board feedback. Findings The digital library community is searching for ways to better understand how materials are reused and repurposed. This paper shares the initial quantitative and qualitative analysis and results of a community needs assessment conducted in 2017 and 2018 that illuminates the current and hoped for landscape of digital library reuse assessment, its strengths, weaknesses and community applications. Originality/value In so far as the authors are aware, this is the first paper to examine with a broad lens the reuse assessment needs of the digital library community. The preliminary analysis and initial findings have not been previously published.
  • 4 October 2018, 02:54

Subject analysis of LIS data archived in a Figshare using co-occurrence analysis

Online Information Review, Volume 43, Issue 2, Page 256-264, April 2019.
Purpose Based on the data from Figshare repositories, the purpose of this paper is to analyze which research data are actively produced and shared in the interdisciplinary field of library and information science (LIS). Design/methodology/approach Co-occurrence analysis was performed on keywords assigned to research data in the field of LIS, which were archived in the Figshare repository. By analyzing the keyword network using the pathfinder algorithm, the study identifies key areas where data production is actively conducted in LIS, and examines how these results differ from the conventional intellectual structure of LIS based on co-citation or bibliographic coupling analysis. Findings Four major domains – Open Access, Scholarly Communication, Data Science and Informatics – and 15 sub-domains were created. The keywords with the highest global influence appeared as follows, in descending order: “open access,” “scholarly communication” and “altmetrics.” Originality/value This is the first study to understand the key areas that actively produce and utilize data in the LIS field.
  • 2 October 2018, 02:49
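A hedged sketch of keyword co-occurrence analysis: build a weighted keyword network from per-dataset keyword lists with networkx and rank keywords by weighted degree. The keywords are invented, and the paper's pathfinder pruning of the network is not reproduced here.

```python
from itertools import combinations
import networkx as nx

keyword_sets = [  # hypothetical keyword lists attached to archived datasets
    ["open access", "scholarly communication", "altmetrics"],
    ["open access", "repositories"],
    ["altmetrics", "scholarly communication", "bibliometrics"],
]

G = nx.Graph()
for keywords in keyword_sets:
    for a, b in combinations(sorted(set(keywords)), 2):
        # Count how often each pair of keywords co-occurs.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

ranking = sorted(G.degree(weight="weight"), key=lambda t: -t[1])
print("most central keywords:", ranking[:3])
```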

A meta-analysis of service quality of Iranian university libraries based on the LibQUAL model

Performance Measurement and Metrics, Volume 19, Issue 3, Page 186-202, November 2018.
Purpose The purpose of this paper is to assess the quality of Iranian university libraries. Design/methodology/approach This first systematic review and meta-analysis was based on the PRISMA guidelines, searching national and international databases from 2003 to January 2017 with standard Persian and English keywords. Data searching, extraction and quality appraisal were completed independently by two researchers. Any unexpected documents were assessed by a third expert researcher. Data were extracted in accordance with the “Strength of the Reporting of Observational Studies in Epidemiology” checklist after the final selection of appraised documents. A random-effects model based on the Cochran Q test and I² was used to combine the results obtained from the different studies, taking their heterogeneity into account. Findings Based on the meta-analysis conducted on the 25 (6.42 percent) included studies, the total sample size was estimated. According to the three dimensions of LibQUAL, the findings for information control, affect of service and the library as a place were estimated as 5.37 [CI95%: 5.02, 5.73], 6.91 [CI95%: 5.56, 6.26], and 5.46 percent [CI95%: 5.2, 5.73], respectively. Also, the mean service adequacy and service superiority gaps are equal to 0.07 [CI95%: −0.22, 0.36] and −2.06 [CI95%: −2.89, −1.23], respectively. There was a significant correlation between the three LibQUAL dimensions of service quality and the service superiority gap and the geographical regions of Iran (p<0.01). Also, a significant correlation was found between the service gaps on the three LibQUAL dimensions and publication year through a meta-regression test (p<0.01). Practical implications The results obtained from the present study showed that users are relatively satisfied with the quality of services provided by Iranian university libraries. An improvement in the quality of library services can promote the scientific level of universities. Originality/value The results of the present systematic review and meta-analysis study demonstrate a vital connection between primary research studies and decision-making for policymakers in Iranian university libraries to increase quality services.
  • 26 September 2018, 07:31
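A worked numpy sketch of random-effects pooling (DerSimonian-Laird) with Cochran's Q and I², the kind of synthesis the abstract describes. The per-study means and standard errors below are invented, not the paper's data.

```python
import numpy as np

means = np.array([5.2, 5.6, 5.1, 5.8, 5.4])    # per-study LibQUAL-style scores (synthetic)
se = np.array([0.20, 0.15, 0.25, 0.18, 0.22])  # their standard errors (synthetic)

w_fixed = 1.0 / se**2
fixed = np.sum(w_fixed * means) / np.sum(w_fixed)
Q = np.sum(w_fixed * (means - fixed) ** 2)          # Cochran's Q
df = len(means) - 1
I2 = max(0.0, (Q - df) / Q) * 100                   # heterogeneity (%)
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                       # between-study variance (DL estimator)

w_rand = 1.0 / (se**2 + tau2)                       # random-effects weights
pooled = np.sum(w_rand * means) / np.sum(w_rand)
pooled_se = np.sqrt(1.0 / np.sum(w_rand))
print(f"pooled = {pooled:.2f} "
      f"[95% CI {pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}], "
      f"I2 = {I2:.0f}%")
```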

“Warning! You’re entering a sick zone”

Online Information Review, Ahead of Print.
Purpose Traditional public health methods for tracking contagious diseases are increasingly complemented with digital tools, which use data mining, analytics and crowdsourcing to predict disease outbreaks. In recent years, alongside these public health tools, commercial mobile apps such as Sickweather have also been released. Sickweather collects information from across the web, as well as self-reports from users, so that people can see who is sick in their neighborhood. The purpose of this paper is to examine the privacy and surveillance implications of digital disease tracking tools. Design/methodology/approach The author performed a content and platform analysis of two apps, Sickweather and HealthMap, by using them for three months, taking regular screenshots and keeping a detailed user journal. This analysis was guided by the walkthrough method and a cultural-historical activity theory framework, taking note of imagery and other content, but also the app functionalities, including characteristics of membership, “rules” and parameters of community mobilization and engagement, monetization and moderation. This allowed the author to study HealthMap and Sickweather as modes of governance that allow for (and depend upon) certain actions and particular activity systems. Findings Drawing on concepts of network power, the surveillance assemblage and Deleuze’s control societies, as well as the data gathered from the content and platform analysis, the author argues that disease tracking apps construct disease threat as omnipresent and urgent, compelling users to submit personal information – including sensitive health data – with little oversight or regulation. Originality/value Disease tracking mobile apps are growing in popularity yet have received little attention, particularly regarding privacy concerns or the construction of disease risk.
  • 19 September 2018, 12:38