REVIEW ARTICLE


https://doi.org/10.5005/jp-journals-11005-0017
Science, Art and Religion
Volume 1 | Issue 1 | Year 2022

Scientometrics: The Imperative for Scientific Validity of the Scientific Publications Content


Izet Masic

Faculty of Medicine, University of Sarajevo; Academy of Medical Sciences of Bosnia and Herzegovina; International Academy of Sciences and Arts in Bosnia and Herzegovina, Sarajevo, Bosnia and Herzegovina

Corresponding Author: Izet Masic, Faculty of Medicine, University of Sarajevo; Academy of Medical Sciences of Bosnia and Herzegovina; International Academy of Sciences and Arts in Bosnia and Herzegovina, Sarajevo, Bosnia and Herzegovina, e-mail: izetmasic@gmail.com

ABSTRACT

Background: Scientometry is a part of the science of science that analyzes scientific articles and their citations in a selected sample of scientific journals. The basic part of scientometry is bibliometrics, which was introduced in the 1970s to denote quantitative research on communication processes by applying appropriate mathematical and statistical methods to published publications. Scientific research is the only real way and method for the proliferation of true knowledge in all spheres of science, but also in academic institutions. The ability to study a scientific problem is the highest level of knowledge. Medical, and in a broader sense biomedical, scientific research is a process of systematic research of current and important health problems related to defined aspects of the physical, mental, or social well-being of a population of local, regional, or global character.

Objective/Aim: This article aims to present the current tools available in scientometry for the evaluation of the scientific validity of published articles and to explain their purpose.

Materials and Methods: The author searched the most influential online databases, analyzed deposited papers on the topic of scientometrics, and used a descriptive method to review important facts about experiences with scientometrics in scientific and academic practice.

Results and discussion: Researchers in medical research examine biological, socioeconomic, and environmental factors in which we live and work, which affect health and contribute to illness, disability, or death. The most important satisfaction for any scientist should be the realization that the result of the research will, in a certain way, help at least one person in the future to be healthier, which should be fundamental to the realization of research in practice, whether at universities or in specialized scientific laboratories and institutes. The format of scientific articles can vary greatly from journal to journal. Nevertheless, many of them follow the IMRAD scheme, recommended by the International Committee of Medical Journal Editors (ICMJE), or the BOMRAD form, recommended by the author of this article. Scientometrics analyzes scientific articles and their citations in a selected sample of scientific journals. Bibliometrics denotes quantitative research of communication processes by applying appropriate mathematical and statistical methods to books and other communication media. Bibliometric methods are used for the quantitative analysis of written materials. Citation provides guidelines for scientific work because it stimulates scientists to deal with the most current areas of research and organizes scientific articles at the world level, shaping and directing them. Citation is influenced by article quality, understanding of the article, the language in which the article is written, loyalty to a group of researchers, article type, etc. Some of the indicators used in the evaluation of scientific work are the impact factor (IF), citations of the article, journal citations, the number and order of authors, etc. The impact factor is the number of citations of articles published in the journal during the previous two years divided by the total number of articles published in the journal during the same period. The impact factor depends on the quality of the journal, the language in which it is printed, the area it covers, and the journal's distribution system. In this article, we point out that the h-Index is one of a set of valuable measures for determining scientific excellence (bibliometrics also recognizes the m-value as useful). Although the Hirsch index (h-Index) is a better measure than the citation-based impact factor (IF), it is still based on the opinions of other authors.

Conclusion: Since research in medicine can improve clinical and public health practice, it is necessary to conduct it. Only quality research with exact results offers the scientific community new information about the examined problem and gives the researcher satisfaction, the possibility of communicating and conducting scientific dialogue with other members of the academic community, and the opportunity to receive a critical review from those who have insight into the research.

How to cite this article: Masic I. Scientometrics: The Imperative for Scientific Validity of the Scientific Publications Content. Sci Arts Relig 2022;1(1):56-80.

Source of support: Nil

Conflict of interest: None


Keywords: Citation, Google Scholar Index, H-Index, IF, Scientific publications, Scientometry, Validity


BACKGROUND

Scientific research is the only real way and method for the proliferation of true knowledge in all spheres of science, but also in academic institutions.1-8 The ability to study a scientific problem is the highest level of knowledge.9-16 Medical, and in a broader sense biomedical, scientific research is a process of systematic research of current and important health problems related to defined aspects of the physical, mental, or social well-being of a population of local, regional, or global character.17-22

The current global problem of the COVID-19 pandemic shows the importance of such an approach in solving an extremely important public health problem whose consequences are almost catastrophic and affect other sectors important for the life and work of the population globally. On the other hand, works that include clinical and public health research belong to the category of research at the level of a limited part of the population living with particular risks for certain diseases and conditions related to characteristic age and risk groups. The research process itself can be extremely exciting for researchers, because it is not only the results of the work that are important but also the research itself: involvement in a health or social problem, exploration of the unknown, and answering previously asked but insufficiently clear and scientifically unresolved questions. It is important that the research project, implemented and approved by the appropriate experts and institutions, contains the same elements as previously written and published articles. Whether the research is conducted by a student, a postgraduate, or a university professor, each research project must contain defined steps, namely: identifying the problems to be researched, collecting data, analyzing the evidence gathered and reaching a conclusion, and presenting the results publicly at conferences or publishing them in appropriate scientific or professional journals or other types of publications.1,23-26

Given that a great deal of scientific research is conducted in the field of medicine today, it is necessary to define the steps by which it is carried out so that it is universal and has scientific value.3-6 This paper describes the research methods and study design, how an article should be written, and why it is important to publish it. Special emphasis is placed on scientometrics as the science that evaluates scientific papers and their citations in a selected sample of journals. The paper also answers why scientific research should be carried out and what kind of satisfaction it provides to the researcher.

PURPOSE AND PROCESS OF MEDICAL RESEARCH

Researchers in medical research examine biological, socioeconomic, and environmental factors in which we live and work, which affect health and contribute to illness, disability, or death. Research at the level of the population (of so-called global character, as is currently being done) has its defined goals.2,27-29

The researcher's idea that he will get rich or become famous after writing a scientific article is almost a utopia. In general, a long period passes in practice, which can be measured in years, until the initial idea of the research leads to the final result, which ends with certain conclusions and recommendations for application in practice. This was best demonstrated in the current situation of COVID-19 infection, in which the results, from diagnosis to treatment with drugs and vaccines, still do not match what experts assumed at the beginning of the pandemic in late 2019. There is still no published serious EBM study on which we can rely to compare our hypotheses and conduct adequate research of our own, despite the involvement of numerous renowned scientific and research institutions in the world. Even after the publication of a large number of articles of this clinical relevance in serious scientific journals, and there are hundreds of thousands of them stored in world scientific databases, only a relatively small number of articles lead to an actual change in health status or clinical practice. Regardless of what has been said, researchers can in principle enjoy the fruits of their work in several ways.1

The most important satisfaction for any scientist should be the realization that the result of the research will, in a certain way, help at least one person in the future to be healthier, which should be fundamental to the realization of research in practice, whether at universities or in specialized scientific laboratories and institutes.1,18-21

Author Kathryn H. Jacobsen, in her book “Introduction to health research methods: a practical guide,”1 states that each research process consists of five steps: identifying the problem we want to investigate; choosing the method of research and setting goals; making a study design; performing data collection, processing, and analysis; and finally writing conclusions and recommendations related to the obtained research results.

Scientific researchers in the field of medicine communicate with each other through published articles or through presentations given at scientific and professional conferences. Research not published in a publication that makes the results available for reading and application cannot affect practices that can make people healthier. This is one of the key reasons that scientists, especially young ones, are encouraged to publish their work in a scientific or professional journal visible in bibliographic and index databases, or through academic platforms such as ResearchGate and Academia.edu.

Research in medicine can be of different kinds: laboratory research, clinical research, and research in the field of public health.22 All three types of scientific research are important for the health and well-being of the community, as well as of its individuals. They are essential for improving clinical and socio-medical practices, whether of a strategic, tactical, or operational nature, and for implementation through institutions whose policies are aimed at identifying health problems and/or improving methods to promote health and prevent disability, and for building the scientific literature that is the foundation for future research, policies, and practices. For the scientist personally, research represents the acquisition of new knowledge from the systematic study of topics and the development and improvement of new skills applicable in practice. In the past decades, science and technology have taken precedence in the development of modern society and scientific research. In any case, it is imperative to respect ethical principles and rules in the implementation of any research, because only in this way can adequate answers be reached to the many questions that today affect humans individually and the world's population globally. The production and exchange of knowledge on important issues of human existence determine the relevant communication among scientists locally and globally through published articles, books, presentations at scientific conferences, and the like. In principle, every researcher should primarily have the role of contributing to the development of the professional community to which they belong, but this also opens the door for eventual personal advancement in their academic and scientific career.

Sources of scientific information, the methods for their evaluation, and the methodology of their use are key elements for more serious scientific research and its publication.7 Society determines the rules of conduct and the rules of the game for scientific activity; however, scientific cognition still depends on procedures that, at least in the initial phase, rely on the individual researcher, and this largely depends on creativity, skills, and individual talents. Creativity and critical thinking are just some of the essential characteristics of the scientific research process, and a distinction should be made between articles of a scientific and those of a professional nature. Professional articles do not have the methodological, structural, and content character of scientific research and do not deal with scientific problems. Their primary goal is to acquaint readers with facts and insights that are not new in the scientific discipline to which they belong; their primary purpose is to transfer knowledge and enable its acquisition by students, colleagues, and health care users in general. Scientific articles aim to solve scientific topics and problems, using scientific methods, appropriate technologies, and tools, and applying styles of expression that include adequate presentation of arguments and attitudes, which give readers and users of the conclusions a solid basis to treat them as a scientific contribution to a particular scientific field.

According to the content and character of the topic and the time needed to prepare an article for possible publication in a scientific journal, articles can be classified into several categories: monographs, articles in journals, professional news articles, proceedings of scientific conferences, etc.1

SCIENTIFIC AND ACADEMIC JOURNALS

Scientific activity over the last few decades has been intensified by the advancement of Information and Communication Technologies (ICT), which have provided scientists and researchers with easier and better innovative opportunities to engage in science in various and new areas. ICTs enable the application of creative industry ideas, including the combination of text, image, and sound.1

The journal is one of the basic communication media, especially in the field of natural, technical, and biomedical sciences. The most important role of scientific journals is the publication and dissemination of scientific articles. The source of scientific and technical information can only be a human, a scientist or an expert whose scientific and professional work creates knowledge about a field. The primary publication is a document that contains a text with basic information in the original form prepared by the author. Biomedical journals can be divided into four groups according to the issues they cover: narrowly specialized journals (covering material from the immediate area), general biomedical journals (intended for a wide range of users), classical journals (treating problems from only one biomedical field), and primary scientific journals (professional literature and the main source of scientific information).

Journals are one of the most important products and sources of information for scientific research and are an important link for the success of development in science. Scientific journals in printed and electronic form have become a necessity in the proliferation and distribution of scientific knowledge.6 Their quality is enhanced by developing and adhering to quality and scientific standards, publishing articles prepared according to the rules of relevant associations that bring together editors and experts in science editing, following guidelines and templates in the acceptance process for publication, and strictly applying review rules and the revision process. The careers of many university professors and researchers in academic institutions depend on the positive results of the evaluation of published articles.

The top category is a scientific journal: a periodical (weekly, monthly, bimonthly, quarterly, semi-annual) whose purpose is to improve science through the publication of new research. Most journals are narrowly specialized in a field of science, although there are journals that publish articles from all fields of science. The history of scientific journals begins in 1665, when the French “Journal des scavans” and the English “Philosophical Transactions of the Royal Society” began to periodically publish research results.1 An article in a scientific journal presents the latest research and results in the field covered by the journal. Articles published in these journals are often incomprehensible to anyone except researchers in the field covered by the scope of the journal. Most journals today, including those in some of the bibliographic, index, and citation databases (WoS, Medline, Scopus, Embase, Hinari, etc.), are published in electronic form, and almost all have an electronic system for submitting and managing articles, supported by a database management system (DBMS).1

There are considerable variations in articles between scientific fields and journals: biomedical, mathematical, natural science, social science, and computer science articles differ, and some are quite long. Some scientific journals publish articles electronically on the Internet. Review articles do not cover specific research but gather the results of many other articles on a specific topic into a cumulative text on the state of the field of science in question. Review articles provide information on the topic and point readers to the original research. Recently, cross-sectional studies have been intensively published as forms that mainly cover analytical studies based on cross-sections and research analyses published on given topics and stored in known index bibliographic databases in full form (PubMed Central, etc.). Scientific journals also include so-called “short communications,” which are short descriptions of important current research; “research notes,” which describe current research findings; and “scholarly articles,” which have an educational character and are longer in content. Some journals are published exclusively in electronic form to save money, and electronic publishing is increasingly overtaking printed publishing.1 Many publishers immediately publish an electronic version of the journal, as there is no need for the delay typical of a printed journal, which is often late with publication, one of the main reasons the printed edition is increasingly neglected.

THE BASIC STRUCTURE OF SCIENTIFIC ARTICLES: WRITING THE ABSTRACT AND ARTICLE

The format of scientific articles can vary greatly from journal to journal. Nevertheless, many of them follow the IMRAD scheme, recommended by the International Committee of Medical Journal Editors (ICMJE).1

There is a great need to improve the editing of medical journals, both on the regional and global levels.13 Numerous studies, editorials, expert opinions, and other types of publications direct our attention to weaknesses and mistakes of editing that have or will have adverse consequences for the ultimate goal of writing in the health sciences: to discover and establish the truth about medical phenomena. “Guidelines for Editing Biomedical Journals: Recommended by Academy of Medical Sciences of Bosnia and Herzegovina,” written by Izet Masic, Slobodan M. Jankovic, et al.,6 which aimed to list the main principles of editing biomedical scientific journals, was adopted at the annual meeting of the Academy of Medical Sciences of Bosnia & Herzegovina in 2020.3

“In total 14 recommendations were made, based on A to C class of evidence. The editors should educate potential authors and instruct them how to structure their manuscript, how to write every segment of the manuscript, and take care about the correct use of statistical tests. Plagiarism detection software should be used regularly, and statistical and technical editing should be rigorous and thorough. International standards of reporting specific types of studies should be followed, and principles of ethical and responsible behavior of editors, reviewers and authors should be published on the journal’s website. The editors should insist on registration of clinical studies before submission, and check whether non-essential personal information is removed from the articles; when essential personal information has to be included, an article should not be published without signed informed consent by the patient to whom this information relates”.3

Structure Form of the Abstracts and Full Articles Following BOMRAD form

Structure Form of the Abstract

As proposed in this document, scientific articles in almost all cases need to have the following structure: an abstract with defined and structured parts, Background, Objective, Methods, Results, and Discussion, for which, for didactic reasons, the BOMRAD acronym is used. The same structured form must also be followed in the full text6 (Fig. 1):

Fig. 1: Example of a deposited abstract displayed at the https://www.bibliomed.org/?mno=35102 platform (11)

  • B–Background

  • O–Objective

  • M–Methods (Methods and/or Materials)

  • R–Results

  • A–and

  • D–Discussion, and Conclusion

Title

The title of the article should be as short and clear as possible in describing the content of the article. We can say that the title is a summary of the abstract.1 The title should accurately describe the content of the article. There are two types of titles: an indicative title, which describes what the work covers, and an informative title, which conveys the message of the article and is recommended for beginners. A good title should be (1) Short, (2) Correct, (3) Clear, (4) Complete, (5) Informative, and (6) Attractive.11 It should also reflect the characteristics of the article, showing what is most important in the work; the same terms as in BOMRAD should be used, without abbreviations, and the title may sometimes take the form of a question.6

Name(s) of the Authors and Their Institutions

It is necessary to specify the names and surnames (in full) of the authors and coauthors who participated in the preparation of the article, as well as their affiliations. The instructions of the journal to which the article is submitted (instructions for authors) must be respected. This is especially important for articles intended for publication in journals deposited in the PubMed Central database.

Full text of the Abstract

The abstract/summary and title can be written in two forms: descriptive (indicative) and informative. It can be written in the author's native language and in English. The structure of the abstract/summary should look like this: Background, Objective, Methods, Results, and Conclusion, or: Introduction, Aim, Methods, Results and Discussion, and Conclusion (for original articles, while other articles, like reviews, case reports, case studies, etc., may follow a different structure) (Fig. 1).6 In the Methods section, the authors should describe the study sample and outcomes. The abstract or summary is the distillate of what will be presented and should show what has been done, what the results are, and what the results mean. The abstract is a summary of the article and is placed at the beginning of the text. This summary is usually without value judgment, interpretation, or criticism and may also contain bibliographic references that refer to the original document. An abstract can be descriptive or informative. It helps the reader decide whether to read the entire article while providing the information needed to become familiar with key elements of the text without going into too much detail.2

Structure Form of the Full Text of the Articles

BACKGROUND/INTRODUCTION

An introduction is the part of the article with a list of already known facts, presented to inform readers about the topic and research issues and to provide a basis from which the discussion is written later in the article. Writing an introduction has its own rules: a clear definition of the problem and an explanation of why exactly the chosen issue was studied, while there is no need to explain what can be found in textbooks.6

OBJECTIVE/AIM

This part of the article must clearly describe the objective (or aims) of the study and explain which outcomes of the research/investigation the author(s) expected to obtain.

MATERIALS AND METHODS

In the materials and methods, all the elements and the manner of conducting the research are presented. Materials (patients) and methods describe how the study was conducted and what the characteristics of the sample are (experimental group, controls, and their properties). It is necessary to explain what was researched, asked, and tested, as follows: sampling (random, consecutive, representative), the sample size, patient gender and age, and the criteria for exclusion from the study, as well as the control group, if any. It should describe how the research was done: the type of study (prospective, retrospective, or combined), data collection (surveys, inventory, or check-up), and the technique of measuring results (operative treatment, laboratory tests). It is necessary to specify where the research was conducted and its duration.

RESULTS

Results are an important part of writing an article.1 The research results are usually read most carefully and should follow a detailed plan, be well documented, and be comprehensive. Results are the most important part of scientific research. Consequently, both graphical and textual representations of results must be provided. Results can be displayed in tables or figures, according to the authors' preferences, but authors should avoid presenting the same data in both tabular and chart format. The relevant facts must be highlighted and displayed. It is not acceptable for the reader to wander through the figures and charts without being able to get a clear picture of the importance of the presented results.

DISCUSSION

Discussion is a critical review of the data described in the results. The results should be compared with other findings, and the theoretical and practical outcomes of the research should be discussed.6

CONCLUSION

The conclusion is the logical continuation of the previous two sections; it does not recount results but combines them in a clear and understandable context. The conclusion should be short, clear, and precise. It is necessary to write the final statement of what logically follows from the results of the work, list only the most important points, and give the message. Good conclusions should not surprise the attentive reader.

The List of References Used

In scientific circles, the reference is the information necessary for the reader to identify and find the sources used.1 The basic rule when listing the sources used is that references must be accurate and complete and should be applied consistently. On the other hand, quoting implies the verbatim written or verbal repetition of parts of text or words written by others, which can be checked in the original text.6

Preparation of an Article for Publication in a Journal

Finally, an article should be prepared for publication, and there are several reasons why researchers should publish. Some of them are:1

  • Possibility of conducting a scientific dialogue

  • Receiving a critical review

  • Showing respect to participants and partners

  • Facilitating future research

  • Personal satisfaction.

First of all, before preparing their articles in the form required for submission on the journal's website, authors need to read and follow the instructions for authors, which every journal publishes on its website and also in printed form (in every issue or at least in the first issue of the volume). The article must be prepared following the recommendations in the template, also deposited on the website of the journal. Instructions and templates are designed according to the rules of the ICMJE, COPE, and EASE.6

  • Authors of articles are obliged to sign the following documents: the Copyright Assignment Form and the Author's Contribution Form, as well as declarations such as the Patient Consent Form (for a study with patients included in the investigation), the Conflict of Interest Form, and the Financial Support and Sponsorship Form, and, when necessary, an Acknowledgment and a Statement of the Ethics Committee of the appropriate institution.

  • Authors need to structure their article using the BOMRAD form, because almost all databases now request it, especially if the database deposits the full text of the article, like PubMed Central.

  • Finally, authors must refrain from unethical behavior regarding authorship, affiliations, and plagiarism.

  • Every author must add his/her ORCID iD (registered at www.orcid.org) to the article, because it can help reviewers and editors manage the article during its processing (editing, checking plagiarism, assessing the quality of the content, etc.).

  • The fact is that scientometrics and online databases have a great influence on the development of the quality of articles by measuring the scientific content of published articles using the IF, the Scopus h-Index, the Google Scholar Index, etc., which today almost every academic or scientific institution requires when electing candidates to academic or scientific titles.

PROCEDURES OF ACCEPTANCE AND SELECTION OF ARTICLES FOR PUBLICATION IN JOURNALS

The Concept and Significance of the Review

Publishing the results of scientific research is a key phase of scientific activity, and the standard way to do this is to publish an article in a reputable scientific journal. Of course, this is preceded by an assessment and review of such contributions, regardless of the thematic area to which they belong. Among the first examples of the evaluation process is the one from 1665 initiated by Henry Oldenburg, founder and editor of the Philosophical Transactions of the Royal Society in London, which was the earliest scientific publication of its kind in English.1 However, the literature states that as early as the 9th century, the Arab philosopher Abu Yusuf al-Kindi (ca. 800–870) gave his written article, the Risala, to colleagues for their critical appraisal of what was written, which testifies to the long history of the review. Review is a process in which a manuscript or research proposal is read and evaluated, within a certain period, by experts in the subject area, language, and type of document with which the author deals. A commission of prominent experts in the author's field of knowledge prepares an analysis and evaluates the work.

Thus, a review is an expert's opinion (the Latin recensere means to review carefully; hence peer review) and is an independent criterion; that is, the reviewer himself is not connected with the specific scientific work or with the authors of the publication.10 It is considered a competent criterion because today's propulsive development of science reduces the number of competent experts for certain narrow areas of research. Review is one of the main forms of informing about the content of a certain text and taking a critical attitude toward it. It is characteristic of a review that it does not unconditionally strive to present all the important contents of the document, and it does not have to be short. The main purpose of the review is “assessment of originality and scientific acceptability, and verification of citations from the literature about relevance, recentness, and adequacy.” When reviewing the article, the language or style in which the article was written must not be neglected.

The following are important for the scientific significance of the article: (1) Does the author show knowledge of current developments in practice? (2) Are the research process and procedures in line with professional standards? (3) Does the author offer original arguments and provide valid facts for his research work?

If the article does not meet all the criteria, reviewers suggest a revision that will correct the article before accepting it. In general, peer review is a series of procedures for evaluating the creative work or research results of other authors working in the same or a related field, in order to maintain and improve the quality of the work or to apply the results in practice.1 Reviewers identify values and point out mistakes so that someone's work gets a chance to be published. Reviewers evaluate which work will be published in more or less prestigious publications and grade those works accordingly, which is important as a recommendation for advancement in an academic career, but also for improving the social status of the author himself. The review process is applied particularly rigorously when indexing journals in appropriate databases (Web of Science, Medline, Scopus, EBSCO, Hinari, Embase, etc.). Reviews are conducted in many professional fields, such as academic and scientific research, biomedicine, medicine, and engineering. The selection of projects to be financed from public funds is especially important. A review serves publishers in deciding whether a book, journal, or article will be published. Reviews, often with good reason, are the subject of critical remarks, especially because they can sometimes slow down the process of publishing someone's results, which is a particular handicap when it comes to contributions to prestigious journals.

Equally important is the evaluation of the received articles by the reviewers of a particular journal. Namely, the review procedure plays a key role in checking the methodological correctness, interpretation, and conclusions of the research results described in the articles.1 The next function of the journal is the protection of the intellectual property of the author and its presentation to the scientific community, that is, providing a way to gain professional recognition and advancement. Today, scientific journals have a significant role in the implementation of scientific policy, that is, decision-making in science, because review opinions give a particular journal a rating that can influence decisions on financial support for scientific projects, the ranking of academic and scientific institutions, and the academic advancement of individuals. The number of scientific journals in the world is growing every day, and it is almost impossible to keep track of what is published in a scientist's field of interest, which requires selecting the literature to which individuals will devote their precious reading time. For this reason, scientometric indicators of the quality of journals and of the articles published in them are used when deciding which journals to buy and store in libraries, and they also help users decide in which journal to publish the results of their research.

Zwemer1 lists seven criteria for assessing journal quality: (1) High standards for manuscript acceptance; (2) A representative editorial board with appropriate representation of individual disciplines; (3) A critical review process; (4) Regularity of publication; (5) Indexation in main databases; (6) A high degree of trust in the published content by the reader; (7) A high frequency of citations by other journals.

Evaluation of a scientific article refers to finding quantitative indicators (indices) of the success of scientific research. The science that deals with this is called scientometry.

So far, no completely satisfactory criterion for evaluating scientific work and scientists has been found, because each offered criterion has its shortcomings to a greater or lesser extent. In practice, two types of criteria are applied for the evaluation of a certain scientific work: (1) Qualitative: review, that is, expert opinion, which is considered the most reliable but also the most subjective criterion, and (2) Quantitative: scientometric indicators, which are considered the most objective but also the most unreliable criterion.1

The review consists of two main parts: one is intended for the editor and the other for the author. Reviewers receive special forms from most editors in which the grades of individual aspects of the submitted article are entered. “Manuscripts of articles are subject to professional, linguistic, and editorial review in terms of the general professional and journalistic norms of the journal. The manuscript of the article will be accepted for publication based on favorable reviews. These forms should certainly be filled out carefully.” In addition, there is usually one blank page for comments to the editor, and one or more blank pages on which comments to the author are written. No part of the review should be written by hand, as due to illegibility some important remarks may go unnoticed or be ignored.

Despite its shortcomings, the review is still an indispensable part of scientific publication. It is useful not only to the editors of the journal and the authors of the articles but also to the reviewers themselves. Reviewers receive the privilege of insight into the latest research and as yet unpublished results of colleagues working in their field of work. By reviewing, they hone the skill of critical appraisal of scientific articles, which can also be useful in their professional work and training.

What is a Review for?

A good review, one that essentially delves into the depth of the research and is itself clear, significantly increases the scientific value of the publication being evaluated.10 The reviewer has the role and task of an educator, and in principle his remarks and comments enrich the author's knowledge and ability to conduct research and interpret the results of that research. However, the review process also has many imperfections and flaws. The subjectivity of the reviewer's assessment is, in the first place, a source of errors in assessing the quality of the work. Critics claim that the review process is slow, expensive, biased, and subject to abuse. However, the fact is that without reviewing articles, editors would not be able to edit journals, because the review is the backbone of editorial work, and publishing articles is the basis for gathering human knowledge. So, whoever wants to publish the results of his scientific research must automatically accept being a reviewer for other authors. Reviewing is also a learning opportunity and a source of the latest information; it is a challenging and stressful job, but it increases the reviewer's knowledge and information, and for most it represents pleasure and beauty. In addition to the privilege of reading some scientific facts before all other readers, from the as yet unpublished results of colleagues in his field, the reviewer also increases his skill of critical appraisal of scientific articles, which can be useful in his professional work and training. For a review to be well done and written, the reviewer must be able to evaluate the work objectively, even if he does not like the work personally. To achieve this, the evaluation rules and the legality of the evaluation must be respected (Fig. 2).1

Fig. 2: Sarajevo Declaration on Integrity and Visibility of Scholarly Journals.4 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5209927/

For the reviewer's opinion to be of good quality, the following should be observed when evaluating the text: (1) Responsibility: the evaluator should have a clear sense of responsibility toward his colleagues and make the evaluation on time, honestly, and as well as he can. He should keep his personal impressions to himself and write the review realistically and objectively. The quality of the article's evaluation is determined by the evaluator's general responsibility for the work he/she does; (2) Knowledge of the literature: the reviewer should be well acquainted with the relevant literature, be able to apply general scientific research principles related to the content of the work being evaluated, and place the article in the context of previous articles in the field. The reviewer should carefully study the instructions for authors of the journal for which he is evaluating the articles; (3) Time: the time needed depends on the complexity of the opinion to be written and on how well the topic and content of the text correspond to the expertise of the reviewer. It should not take longer than a few hours, although vaguely written articles will probably take much longer; (4) Knowledge of the journal for which the review is written: scientific journals differ in editorial policy, priorities in publishing, and the percentage of rejected papers, which a good reviewer should know and keep in mind when writing a review. If necessary, the reviewer should recommend a more suitable journal to the authors if, in his opinion, the text he evaluated is not adequate for the journal to which the paper has been submitted.1

The review work is very responsible and delicate because it is the basis of the editorial board’s decision to publish the article. With their suggestions and evaluations, reviewers significantly contribute to the quality of work. The reviewer should answer a few key questions:1

  • Is the article genuine? (What is the information value of the work, that is, how much is it scientifically valuable?);

  • Is the article relevant to most readers of the journal (for whom is the article intended?);

  • What results of applied research does it report?;

  • What results of experimental research does it offer?;

  • What is its practical value?;

  • Is the level of the presented material acceptable, as follows:

As a rule, each article must be: (1) Scientifically acceptable (methodology, presentation of results, discussion, citation); (2) Documentary acceptable (quality of tables and figures, statistical processing); (3) Linguistically acceptable (comprehensibility of the text, correctness of terminology, stylistic and orthographic arrangement); (4) Formally acceptable (whether the title of the article corresponds to the content, whether the manuscript is composed according to the propositions of the journal, whether it contains all the required parts, etc.). All editions of better medical journals send reviewers forms, which they must fill out.

Procedure and Method of Article Review

In the first reading, the reviewer should try to understand the article and note questions related to the observed ambiguities. The first reading is like triage: after it, the reviewer decides on the importance and relevance of the research.12 The reading goes in the order in which the article is composed: reading the abstract directs the reviewer's attention to reading the full text, especially with regard to the research setting, procedures, results, and conclusions. The reviewer then focuses on the key scientific research problem the article is writing about and what its messages are. In the second reading, after a few hours or days (depending on the time available), the reviewer evaluates the values of the article by checking the questions and remarks recorded during the first reading. The reviewer applies the principle that whatever the reviewer does not understand, the readers will not understand, so the reviewer should be free to object to anything that hinders him in reading and understanding the article. One should not criticize the general style of the article or correct errors in grammar, spelling, and punctuation (this is the job of a proofreader), but one can suggest a general assessment of the linguistic quality of the work to the editor. The second criterion is the assessment of the scientific quality of the article: the quality of reasoning and the respect for scientific principles and knowledge in the field from which the article comes. Finally, the reviewer gives an expert opinion assessing the weight of the research procedures, data, and conclusions. Only an article that is scientifically strong and brings some new knowledge in the field of science from which its content comes is important. The value of the article is not assessed according to whether it is from the field of basic medical research or of a clinical or public health character, but according to whether it is clinically attractive and whether part of what is concluded can be applied in practice and be socially useful.

SCIENTOMETRY AND ITS ROLE IN QUALITY ASSESSMENT OF PUBLISHED ARTICLES IN JOURNALS

Terms and Definitions

Scientometry is a part of the science of science that analyzes scientific articles and their citations in a selected sample of scientific journals. The name bibliometrics was introduced in the 1970s to denote quantitative research of communication processes by applying appropriate mathematical and statistical methods to books and other communication media.2,18-24 Almost at the same time, in the countries of the former Eastern Bloc, the name scientometry, derived from the Russian word naukometriya, was introduced. More precisely, in 1969 the name scientometrics was introduced to refer to the scientific field that deals with the research of science as an information process using quantitative (statistical) methods, and later Tibor Braun (who founded the international journal Scientometrics in 1977) promoted the name scientometry.14 “Scientometry was defined by its creators (as Naukometriya in Russian) Nalimov and Mul’chenko (1969, p. 191; 1989) as “the application of those quantitative methods which are dealing with the analysis of science viewed as an information process,” although the idea of keeping an index of citations originated in 1873 with Shepard’s Citations in United States common law, which enabled previous court decisions to be looked up with ease.1 In the evaluation of scientists (not only academics), letters of recommendation are increasingly supported by the number of papers published in peer-reviewed journals, the number of citations, and such evaluation factors as h or g”.1

Bibliometric methods are used for the quantitative analysis of written materials. Bibliometrics is closely related to the broader term “informetry” and the narrower term “scientometry”.1 A close analogue is “webometry,” which explores various aspects of the web. This type of analysis is based on the identification of publications, in the broadest sense, in a particular scientific field. The analyses cover various categories of material and range from articles in journals, books, and patents to items in the “gray literature” category.

Webometry refers to the quantitative analysis of the production of science, its applications, structure, and technology in a cyber environment. Impact analysis, web collaboration, and the recognition of basic web pages are considered to be the greatest practical advantages of webometry.18

Informetry: In 1979, Otto Nacke introduced the new metric concept of informetry, which seeks to encompass the part of the information sciences aimed at measuring the phenomenon of information, the application of mathematical methods to the problems of the discipline, bibliometrics, and information retrieval.1 According to Diodato's dictionary (1994), informetry is sometimes used as a synonym for bibliometrics. However, some authors are more inclined to understand informetry as a discipline that covers a much wider area than bibliometrics itself. Nacke et al. point out that informetry and scientometrics are two sister areas within the information sciences. In 1984, a committee for informetry began to operate within the Federation Internationale de la Documentation (FID), and O. Nacke was elected its first president. From the very beginning, the board accepted informetry as a generic term for bibliometrics and scientometry. Part of the credit for the popularization and increase in the number of informetric studies belongs to L. Egghe and R. Rousseau. Their book “Introduction to Informetry: Quantitative Methods in Library, Documentation and Information Science” was published in 1990 by Elsevier. According to the two authors, informetry deals with measurement, mathematical theory, modeling of all aspects of information, and the storage and retrieval of information by “borrowing tools (techniques, models, identities) from mathematics, physics, computer science, and other [-]metrics.”1 Informetry, according to the aforementioned author duo, is applied in library management, the sociology of science and knowledge, the history of science, scientific policy, and information retrieval.1

The Role and Significance of the Application of Scientometry in Scientific Practice

Scientometry is the science of measuring and analyzing science using qualitative, quantitative, and computational approaches. Scientometry, with its various indices, is a reliable method for assessing scientific development, while bibliometrics denotes a quantitative study of the communication process using books and other communication media.

Scientometry comes from the Russian language. Namely, in 1969, the name scientometry was introduced for the scientific field that studies science as an information process using quantitative methods. Later, Tibor Braun officially adopted the name scientometry and founded the journal Scientometrics.1 The term scientometry is attributed to the book of the same name in Russian, “naukometrija” (наукометрия),1 published in 1969.1 Vassily Vassilievich Nalimov, a Russian mathematician, published his first work that can be interpreted as scientometric10 in 19591 and is considered an author with a recognizable contribution to the information sciences.1 Nalimov and Mulchenko, the authors of “Naukometrija,” define scientometric research as research that views science as an information process and applies quantitative (statistical) methods.1 Modern scientometry is mainly based on the work of Derek J. de Solla Price and Eugene Garfield (Garfield founded the ISI, the Institute for Scientific Information, and is considered the father of scientometry and of a method of evaluating scientific publications). Derek John de Solla Price is often mentioned as the originator of scientometry, and his book “Little Science, Big Science,”1 well known even beyond the boundaries of information science, claims: “Science is a measurable substance, consequently, the manpower engaged in science, the scientific literature, talent and expenses afforded to science can be measured by properly selected statistical methods.” In an editorial in the first issue of Scientometrics, de Solla Price states, “we would be bad scientists if we could not use our professional analytical tools in our activities.”1 The core of scientometry, therefore, derives from the observation of science as a measurable substance, that is, the observation of actors and of the input and output of the necessary processes (Fig. 3).

Fig. 3: Scientometrics Journal (https://www.springer.com/journal/11192)

For the development of scientometry itself, bibliometrics is important as a source of bibliographic metadata for the central approach in scientometry; however, an important topic in scientometry is the monitoring of the echo that publications produce. This echo, and the previously mentioned relations of trust and validation, are formally present through citations, which are an important variable of scientific texts. Therefore, in addition to standard bibliographic records on the relevant publications, formally recorded information on the network of citations among these publications is also necessary. A database that, in addition to bibliographic information, also contains information on citations is called a citation index, which is among the central concepts that have enabled the status of scientometry as a recognized separate discipline. The author of the first citation index of articles published in scientific journals is Eugene Garfield, who, along with de Solla Price, is often counted among the founders of scientometry. Garfield proposed the first citation index for science in 1953, modeled on Shepard's Citations,30 a citation index of legal documentation, the first versions of which Frank Shepard began applying as early as 1873.1

The Most Significant Scientometric Indices in Application

Some of the indicators used in the evaluation of scientific work are:1

  • Impact factor

  • Citation of the article

  • Journal citations

  • Number and order of authors, etc.

The impact factor is the number of citations of articles published in the journal during the previous two years divided by the total number of articles published in that journal in the same period. The impact factor depends on the quality of the journal, the language in which it is printed, the area it covers, and the distribution of the journal. The impact factor (IF) of an academic journal is a measure that reflects the average number of citations of the articles published in the journal, and it is used to compare different journals within a particular area. In a given year, the IF of a journal is the average number of citations received per article published in that journal during the previous two years. The Hirsch index (h-index) is an index that attempts to measure the productivity and impact of the published work of scientists. The index is based on an author's most cited articles and the number of citations those articles have received in other publications. It can also be applied to the productivity and impact of a group of scientists, such as a department or faculty, as well as to a journal. The h-index was proposed by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining the relative quality of researchers.27-30

Citation provides guidelines for scientific work because it encourages scientists to deal with the most current areas of research. Thus, the “terror of scientometric indicators” organizes scientific work at the world level, directs it, and gives it an adequate and practically acceptable form. Citation is influenced by: the quality of the work, the understanding of the work, the language in which the work is written, loyalty to a group of researchers, the type of work, etc. Most scientific articles are cited by inertia, because every scientist has a set of articles that he cites whenever he writes about a topic. Other articles are cited to raise one's own citation count, others because a reviewer or editor of a journal requires it, and so on. Perhaps only every fifth or tenth work is cited because it really should have been cited; these are the works whose data the author uses directly when writing the discussion of his work and comparing his results with others concerning the presented problems and solutions. All persons listed as authors of an article must meet the following conditions: that they have significantly contributed to the planning and production of the article, or to the analysis and interpretation of the results; that they have participated in writing and correcting the article; and that they agree with the final text. The editor has the right to ask the authors to explain the contribution of each of them by signing (each coauthor individually) the document “Author's contribution.” The contribution of a single author is 1; if the article was written by several authors, their contribution can be counted as 1/n, or, alternatively, the contribution of each subsequent author is counted as half that of the previous one.1 The order of the authors is determined by their mutual agreement.18,31-39

Persons who are involved in data collection, or who are superior to the researchers but are not actively involved in developing the work, cannot be authors. The editor has the right to ask the authors to explain the contribution of each of them. The contribution of a single author is 1; if the article was written by several authors, their contribution can be counted as 1/n, or, in the halving scheme, the contribution of each subsequent author is half that of the previous one. The sequence of authors is determined by their agreement.2
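
As an illustration only, the two weighting rules described above (equal 1/n shares and halving of each subsequent author's share) can be expressed in a few lines of Python; the function names, the normalization of the halved shares, and the example figures are assumptions of this sketch, not part of any scientometric standard.

# Illustrative sketch (not an official standard): two ways of splitting
# authorship credit among n coauthors, as described in the text.

def equal_shares(n_authors: int) -> list[float]:
    """Each of n coauthors receives an equal share of 1/n of the credit."""
    return [1.0 / n_authors] * n_authors

def halving_shares(n_authors: int) -> list[float]:
    """The first author gets the largest share; each subsequent author gets
    half of the previous author's share; shares are normalized to sum to 1."""
    raw = [0.5 ** i for i in range(n_authors)]      # 1, 0.5, 0.25, ...
    total = sum(raw)
    return [w / total for w in raw]

if __name__ == "__main__":
    print(equal_shares(3))    # [0.333..., 0.333..., 0.333...]
    print(halving_shares(3))  # [0.571..., 0.285..., 0.142...]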

According to the recommendations of the International Commission for the Professional Self-Regulation of Science, the rules of good scientific practice relate to the basic principles of scientific research; some of them are:1,31-39 (1) preservation of professional standards; (2) documentation of results; (3) a strictly fair relationship with contributing associates, competitors, and predecessors; and (4) scientific publications. Although the truth should be the aim of scientific research, it is not a guiding principle for all scientists. The best way to reach the truth and to avoid methodological and ethical mistakes is to consistently apply scientific methods and ethical standards in research. Errors in science can be:1 (1) unintended; (2) intentional; (3) in the gray zone; and (4) fraud/deception.

From Table 1 it is clear that the h-index of the oldest biomedical journal, Medicinski Arhiv (Medical Archives), is, at 24, higher than that of most other journals in the table, which means that 24 articles published in this journal have each received at least 24 citations in other journals. Table 2 presents the h-indices of several countries of the world, extracted from the SCImago Journal and Country Rank bibliometric list for the year 2020. If we check the SCImago rank list of citation counts for papers written by authors from former Yugoslav countries and stored in the Scopus database, we can see that Slovenia is in first place (1,824,243 citations, h-index 349), Croatia has 1,417,239 citations and h-index 324, Serbia has 1,276,485 citations and h-index 290, North Macedonia has 168,037 citations and h-index 135, Bosnia and Herzegovina has 125,626 citations and h-index 118, and the last is Montenegro with 45,225 citations and h-index 74. In the SCImago rank list for the same year, three academicians of AMNuBiH who work or live outside Bosnia and Herzegovina were cited more times in 2020 than all other authors from Bosnia and Herzegovina cited in the Scopus database.18

Table 1: Journals from Bosnia and Herzegovina (mostly biomedical) indexed in Scopus, ordered by SJR value. https://www.scimagojr.com/journalrank.php
Title | Type | SJR (quartile) | H index | Total Docs. (2020) | Total Docs. (3 years) | Total Refs. (2020) | Total Cites (3 years) | Citable Docs. (3 years) | Cites/Doc. (2 years) | Refs./Doc. (2020)
1. Bosnian Journal of Basic Medical Sciences | journal | 0.738 (Q2) | 25 | 65 | 154 | 2,702 | 458 | 152 | 3.05 | 41.57
2. Medicinski Arhiv | journal | 0.315 (Q3) | 24 | 96 | 285 | 0 | 393 | 281 | 1.05 | 0.00
3. Acta Informatica Medica | journal | 0.267 (Q3) | 20 | 35 | 166 | 1,131 | 238 | 159 | 1.50 | 32.31
4. South East European Journal of Economics and Business | journal | 0.266 (Q2) | 13 | 20 | 52 | 1,056 | 55 | 52 | 1.06 | 52.80
5. Periodicals of Engineering and Natural Sciences | journal | 0.225 (Q2) | 11 | 222 | 340 | 5,154 | 421 | 340 | 1.16 | 23.22
6. Sport Science | journal | 0.207 (Q3) | 19 | 73 | 142 | 2,010 | 69 | 141 | 0.47 | 27.53
7. Acta Medica Academica | journal | 0.192 (Q4) | 13 | 32 | 93 | 0 | 82 | 89 | 0.77 | 0.00
8. Medicinski Glasnik | journal | 0.191 (Q4) | 13 | 85 | 126 | 2,378 | 118 | 126 | 0.99 | 27.98
9. Electronics | journal | 0.128 (Q4) | 10 | 10 | 32 | 328 | 14 | 28 | 0.60 | 32.80
10. Journal of Health Sciences | journal | 0.112 (Q4) | 3 | 31 | 54 | 865 | 10 | 54 | 0.19 | 27.90
11. Central European Journal of Paediatrics | journal | 0.111 (Q4) | 2 | 21 | 84 | 541 | 5 | 67 | 0.09 | 25.76
12. Acta Medica Saliniana | journal | 0.105 (Q4) | 4 | 0 | 46 | 0 | 6 | 46 | 0.14 | 0.00
Table 2: SCImago Country Rank bibliometric list for several countries of the world, ordered by the number of documents, for the year 2020. https://www.scimagojr.com/countryrank.php
Country | Documents | Citable documents | Citations | Self-citations | Citations per document | H index
1. United States | 13,817,725 | 11,986,435 | 384,398,099 | 168,230,420 | 27.82 | 2577
2. China | 7,454,602 | 7,229,532 | 78,201,759 | 44,817,420 | 10.49 | 1010
3. United Kingdom | 4,039,729 | 3,347,117 | 102,878,206 | 22,808,209 | 25.47 | 1618
4. Germany | 3,515,309 | 3,151,775 | 81,454,056 | 19,404,148 | 23.17 | 1429
5. Japan | 3,074,206 | 2,895,478 | 54,130,480 | 13,573,127 | 17.61 | 1118
6. France | 2,437,589 | 2,203,243 | 55,858,552 | 11,260,558 | 22.92 | 1286
7. India | 2,128,896 | 1,946,730 | 22,218,913 | 7,526,767 | 10.44 | 691
8. Italy | 2,072,168 | 1,840,490 | 43,760,942 | 10,035,285 | 21.12 | 1135
9. Canada | 2,037,509 | 1,796,688 | 52,825,596 | 8,841,600 | 25.93 | 1299
10. Australia | 1,638,743 | 1,423,945 | 37,937,045 | 7,501,967 | 23.15 | 1115
11. Spain | 1,628,362 | 1,468,464 | 32,533,936 | 6,927,908 | 19.98 | 1010
12. Russian Federation | 1,359,443 | 1,302,809 | 11,135,903 | 3,726,592 | 8.19 | 652
13. South Korea | 1,307,978 | 1,249,982 | 20,238,524 | 3,782,419 | 15.47 | 762
14. Brazil | 1,145,853 | 1,067,185 | 14,701,837 | 4,684,306 | 12.83 | 649
15. Netherlands | 1,131,975 | 998,112 | 34,385,395 | 4,970,776 | 30.38 | 1133
16. Switzerland | 845,108 | 745,124 | 26,479,916 | 3,253,687 | 31.33 | 1085

CRITERIA FOR EVALUATING SCIENTIFIC WORK IN RESEARCH

So far, a completely satisfactory criterion for evaluating scientific work and scientists has not been found, because each proposed criterion has its shortcomings to a greater or lesser extent. It is, however, considered that the more criteria are used, the more objective the evaluation.

There are two types of criteria for evaluating a particular scientific work:

  • Review (expert opinion). Peer review is an independent criterion, in the sense that the reviewer is not connected with the specific scientific work or its authors. It is considered a competent criterion, although the rapid development of science reduces the number of competent experts for certain narrow areas of research.10

  • Scientometric indicators. Scientometry, as a part of the science of science, analyzes scientific articles and their citations in a selected sample of scientific journals.2

Indices for Measuring the Validity of Scientific Research Work

There are four indices through which the validity of scientific research is measured:3

  • number of articles,

  • journal impact factor,

  • number and order of authors,

  • number of citations.

The number of articles speaks more about productivity than quality. It includes scientific and professional articles published in extenso in journals, books and monographs, and, in particular, articles published in extenso in indexed journals.2

The impact factor is the result of a statistical operation that estimates the expected citation of a publication over a two-year period. It is a value of the journal, not of the individual publication or author concerned. However, there is no doubt that many prestigious journals with a high impact factor publish articles of a high scientific level,5 and this is closely related to the high impact factor of those journals.

The impact factor depends on:2

  • quality of the journal,

  • the language in which it is printed,

  • the area it covers,

  • journal distribution.

The impact factor is a simple quantitative datum on scientific production, but we must relate it to the field of research. There are big differences: for example, the top journal in laboratory medicine is Clinical Chemistry (IF = 5.454); in nephrology it is the Journal of the American Society of Nephrology (IF = 7.371); in oncology the top journal is CA: A Cancer Journal for Clinicians (IF = 63.342), and a total of seven journals have an impact factor above 10. Cell (IF = 29) and Nature Reviews Molecular Cell Biology (IF = 31) are the top journals in basic cell science, while 16 other journals have an impact factor higher than 10.1 Impact factor assessment should therefore be based, for example, on an impact factor weighted (“loaded”) according to the field of research.2

Journal impact factors are also one of the important parameters for research funding. Editors have great power in selecting highly qualified reviewers who select articles; they thereby indirectly influence the funding of future research by various scientists and institutions.31

The number of authors and their order is taken into account so that a single author has a contribution of 1. When a work has several authors, the contribution of each author can be calculated as 1/n; alternatively, for example, in a work by two authors the contribution of the first is 0.7 and of the second 0.3, and with more authors the contribution of the first is 0.6, the second 0.3, the third 0.2, and the others 0.1. In the halving scheme, the contribution of the first author is taken as 100%, and each subsequent author's contribution is half that of the previous one.1

Number of citations

Citation is affected by:1

  • quality of work,

  • understanding of work,

  • the language in which the article is written,

  • loyalty to a group of researchers,

  • type of work,

  • reciprocity in the sense of “I cite you and you cite me,” or “benefit” in the sense of “I will not cite him because he is my competitor,” etc.

Scientific echo measures are increasingly used for academic promotion and evaluation.30-39 They are also used for departmental assessments at colleges and research centers. The journal impact factor has traditionally been used (a measure developed by Garfield in 1955).30 As an indicator for the evaluation of scientific publications throughout the world, the representation of articles in citation databases is used, and the potential value and impact of an article are measured by the number of citations it receives, that is, by the status of the journal in terms of its impact factor (IF). These indicators can be obtained solely from the data and content of the ISI citation databases.30 With the expansion of the Genetics Citation Index into a multidisciplinary database for the natural and applied sciences, the Science Citation Index (SCI) was created in 1963 and included the literature from 1961.30 The initial corpus of journals in the SCI was 600; today there are already over 6,000 journals.

The evaluation of scientific activity is most often measured by scientific productivity, and its repercussions are measured by citation analyses.10 Citation analyses include measurements of citation type, citation number, and self-citations, for example by author/coauthor, institution, journal, and country, as well as independent citations. In evaluating the status of scientists, institutions, or countries, it matters not only in which journals the results of particular research are published, but also to what extent they are noticed, and who noticed and registered them by citation.6 Therefore, the status of the journal, in terms of the impact factor (IF) of the journal in which the article was published, as well as the status of the journals that cite a particular article, is often used as an indicator in evaluating the scientific work of an individual scientist or institution.10

Unfortunately, very few bibliographic and indexing databases take into account monographs, which are a primary source of information, or textbooks and student lecture notes.1

National journals are important to the national scientific community.2 They should primarily serve for training and for brief communication with the scientific community; of course, they should also publish original works. Publishers and the scientific community should strive to have them included in international databases, especially Scopus, Medline, or the Web of Science.

There are also new scientometric techniques for evaluating journals and scientists: citation density, citation half-life, the Erdős number (mostly used by mathematicians), and the h-index.27-29 All these new instruments use sophisticated statistical and mathematical procedures. The Hirsch index (h-index) is defined as the largest number of an author's articles that have each received at least that many citations. This index is often used to assess candidates joining university staff or prestigious societies. An h-index of 10–12 is, at prestigious universities, a benchmark for permanent employment without re-election. For membership in the American Physical Society, an h-index of 15–20 is required, and for membership in the US National Academy of Sciences, above 45.1

Number of Authors and their Order in a Published Article and Number of their Citations in other Publications

It is calculated so that a single author has a contribution of 1. When a work has several authors, the contribution of each author can be calculated as 1/n; alternatively, for example, in a work by two authors the contribution of the first is 0.7 and of the second 0.3, and with more authors the contribution of the first is 0.6, the second 0.3, the third 0.2, and the others 0.1. In the halving scheme (as sketched earlier), the contribution of the first author is taken as 100%, and each subsequent author's contribution is half that of the previous one.1

Scientific echo measures are increasingly used for academic promotion and evaluation.30-40 They are also used for departmental assessments at colleges and research centers. The journal impact factor has traditionally been used (a measure developed by Garfield),30 as has the h-index, which quantifies the scientific productivity of scientists on the basis of their publications.27-29 The h-index is a personal index that provides information on the number of an author's publications and the number of their citations; it can also be calculated for departments, universities, or countries.

As an indicator for the evaluation of scientific publications throughout the world, the prevalence of articles in citation databases is used, and the potential value and impact of an article are measured through the number of citations, that is, through the status of the journal in terms of its impact factor (IF). These indicators can be obtained solely from the data and content of the ISI citation databases.1

Science Citation Index (SCI)

SCI-Expanded is a bibliographic and citation database for the natural and applied sciences. It processes content from more than 6,000 of the world's leading scientific and professional journals and covers approximately 150 scientific disciplines.

Journal Citation Reports (JCR)

Based on data from the SCI and SSCI (Social Science Citation Index) citation databases, Eugene Garfield created a special statistical database, the JCR, in 1975. The JCR is a quantitative tool for ranking, evaluating, categorizing, and comparing journals.30 The evaluation of scientific activity is most often measured by scientific productivity, and its repercussions are measured by citation analyses. Citation analyses include measurements of the number of citations, the types of citations, and self-citations, for example by author, coauthor, institution, country, and journal, as well as independent citations.1 In evaluating the status of a scientist, institution, or country, it matters not only in which journals the research results are published, but also to what extent they are noticed, and who noticed and registered them by citation.2 Therefore, the status of a journal, in terms of the impact factor (IF) of the journal in which the article was published, as well as the status of the journals that cite a particular article, is often used as an indicator in evaluating the scientific work of an individual scientist or institution. However, the use of IF, especially the so-called standard or Garfield IF, as one of the basic indicators in the evaluation of an individual's work suggests a misunderstanding of its true meaning.30 A journal's IF is a measure of the frequency with which an “average article” in that journal is cited over a period of time.

The impact factor helps in determining the quality of a journal, not the quality of an individual article or, by extension, of an individual scientist. A journal's IF can only be a potential indicator of the value of an article, because it is assumed that the article has undergone a rigorous review process; the true value of that article is obtained a posteriori, that is, through the number of citations it receives and its potential contribution to the value of the journal's IF.1

Mingers, Macri, Petrovici, Jokic, and Masic1 used the h-index and the journal impact factor to rank business and management journals indexed in Google Scholar and the Web of Science. They concluded that the h-index is better than the journal impact factor, and that Google Scholar is better than the Web of Science as a data source.

In a special issue of Scientometrics devoted to the journal impact factor, Braun and others1,18,29 presented a critique of the use of IF, emphasizing inaccuracy, citation errors, and misuse of IF. The IF is usually based on the number of citations to a specified journal over the last 2 years and usually relies on the analysis conducted by Thomson Reuters through its Journal Citation Reports.2 Many alternatives to IF have been proposed, including field-specific variants of IF, rankings, and h-index approaches. These alternatives are, to some extent, variations of IF and include improvements to it. Vanclay proposed a cure for this problem in three options:1 (1) more thorough citation verification within the already existing IF system; (2) journal rankings that are more community-based (e.g., in the manner of TripAdvisor); and (3) that an existing service that includes “gate-control” be extended to include the WoS.1

The scientific community will benefit from independent journal certification that provides not only rigorous review but also certifies broader quality-control standards.1,8 In response to growing concerns about the inappropriate use of IF in the evaluation of scientific articles and of scientists themselves, the American Society for Cell Biology, together with a group of editors and publishers of scientific journals, formulated the San Francisco Declaration on Research Assessment (DORA). Created in May 2013, DORA has the support of thousands of individuals and hundreds of institutions that have adopted the document; on the day of its creation it was signed by more than 150 scientists and 75 scientific organizations.1,18

Evaluation of Articles in Academic Journals

Evaluating the quality and relevance of new findings, after the acceptance and publication of scientific articles, which should be the result of serious scientific work, has relied mainly on members of the academic community who have the same or similar professional interests.1 Indexing, reference citing, and citations are derived as terms from the concept of index publications used in Index Medicus, the Science Citation Index, and Current Contents, which are among the best-known “bibliographic databases” in history.1

Assessment of the scientific contribution of each scientist indirectly increases the reputation of these publications, especially journals, in the scientific community through the so-called impact factor. The impact factor shows how many citations an average scientific article in a journal receives. The idea of the impact factor was first mentioned by the American researcher Eugene Garfield in an article published in the journal Science in 1955, which was the basis for the publication of the Science Citation Index (SCI) in 1961.30

Today, the journal impact factor is taken from a publication titled Journal Citation Reports (JCR), produced by the publisher Thomson Reuters. The best measure of a journal's importance is its echo factor, which shows how often its articles are cited. For example, if a journal has an echo factor of 0.10–0.30 over a period of time, that means that on average every tenth to every third article published in the journal is cited once. In other words, the echo factor tells us how much the journal is used, or how important it is to scientists.1

Indices for Measuring the Scientific Value of Journals, Journal Articles, and Article Authors

Impact Factor and Echo Factor: The Influence Factor of the Journal

The impact factor is one of the quantitative criteria used in ranking, categorization, evaluation, and comparison of scientific journals. It is an “objective tool that allows critical judgment of the world’s leading journals based on quantitative, statistical information derived from citation data.”30

The echo factor measures the frequency with which an average article published in a journal is cited in a given period.30 This indicator, therefore, does not measure the distribution of the citations that individual articles published in a journal receive, but only their average frequency.

One of the longest-running and best-known indicators of a journal's scientific value is undoubtedly the impact factor. It is a number that shows how many times an average scientific article in a journal is cited in a given period. Every year, impact factors are calculated for all journals covered by the citation databases (Science Citation Index Expanded, SCIE, and Social Science Citation Index, SSCI) and for all journals cited in them.30 Based on the obtained results, new journals are selected and existing ones are excluded. Today, the impact factor of a journal is a very widespread criterion for the selection of journals in many libraries, as well as for the choice of the journal in which to publish an article.

Impact factor values for individual journals are published once a year in the Thomson Reuters Journal Citation Reports (JCR) statistical database. This database, created in 1975 by Eugene Garfield,30 can be accessed directly or through the WoS platform. The Journal Citation Reports is a quantitative tool for ranking, evaluating, categorizing, and comparing journals. In addition to the impact factor, the journal-level indicators of scientific value include the 5-Year Impact Factor, the Immediacy Index, the Citing Half-Life, and the Cited Half-Life.

The impact factor of a journal is calculated by dividing the number of citations received in the current year by articles published in the previous two years by the number of articles published in that same period. An impact factor of 1.0 means that, on average, articles published in the previous 2 years have been cited once. It is interesting to analyze the influence of a journal's self-citation on its impact factor: excessive self-citation of a journal leads to an increase in the impact factor and an unrealistic ranking of the journal within its subject area.

Following strict criteria and rules, every year a few journals are suspended from the Journal Citation Reports impact factor list due to excessive self-citation, or due to citation of articles published in a particular journal by articles published in another, always the same, journal, on the recipient-and-donor principle known as citation stacking.8 The frequency of self-citations in such cases ranged between 59 and 90%. These journals are, in the period that follows, monitored and checked in terms of meeting all the criteria and standards required for re-inclusion in the WoS database.

Since 2007, the Journal Citation Reports has also reported the Eigenfactor Score and the Article Influence Score. These data can also be accessed directly via the Eigenfactor website (http://www.eigenfactor.org). The calculation of these indicators is based on the citation data of the Journal Citation Reports, which means that they refer only to articles and journals indexed in WoS databases. The Eigenfactor Score measures the number of citations that articles published in a journal in the last five years have received in the current Journal Citation Reports year, taking into account the journals from which those citations come: journals that are cited more also have a greater effect on the citation network than journals that are cited less. In addition, the Eigenfactor Score is free of the influence of journal self-citation, as it excludes references that cite articles published in the same journal. The Article Influence Score determines the impact of articles published in a journal in the first five years after publication. Its mean value is 1.0; a value greater than 1.0 means that articles published in that journal have an above-average impact, while a value less than 1.0 indicates articles with a below-average impact.1
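
The idea behind the Eigenfactor Score, namely that a citation from a frequently cited journal counts for more than one from a rarely cited journal, can be illustrated with a much simplified, purely didactic power-iteration sketch over a hypothetical journal-to-journal citation matrix. The official Eigenfactor algorithm uses a five-year window, damping, and normalization steps that are not reproduced here; the matrix and function below are assumptions for illustration only.

# Much simplified, illustrative influence score over a toy citation network.
# cites[i][j] = number of citations journal i gives to journal j.
# Journal self-citations are excluded, as in the Eigenfactor approach.

def influence_scores(cites, iterations=100):
    n = len(cites)
    # Zero out self-citations and normalize each journal's outgoing citations.
    weights = [[0.0 if i == j else cites[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        row_sum = sum(weights[i]) or 1.0
        weights[i] = [w / row_sum for w in weights[i]]
    score = [1.0 / n] * n
    for _ in range(iterations):
        new = [sum(score[i] * weights[i][j] for i in range(n)) for j in range(n)]
        total = sum(new) or 1.0
        score = [s / total for s in new]
    return score

toy_network = [
    [0, 30, 10],   # citations given by journal A
    [5, 0, 20],    # citations given by journal B
    [2, 8, 0],     # citations given by journal C
]
print(influence_scores(toy_network))  # journals cited by influential journals score higher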

How the Impact Factor is Calculated

The question arises as to how relevant the indices described above are for realistically measuring the scientific validity of an article and of the journal in which it is published. The nature and uncertainty of the factors influencing scientific journals are specially discussed in the scientific literature. Based on citations of articles published in a journal over several years, the impact factor has been used for various purposes, such as: (1) evaluation of a scientific journal, and assessment of the scientific qualification and rating of scientists; and (2) a criterion for librarians in choosing which journals to subscribe to for their collections. Many factors that may affect the numerical value of the IF have so far been described in the literature.

Because citation practices vary across scientific disciplines, caution should be exercised when using the impact factor (IF) for interdisciplinary comparisons. Many journal authors and editors advocate methods and ways to increase the IF of a particular journal. The opinion of the author is that increasing the quality of the articles accepted for publication in a scientific journal is the only real way to increase its IF.1

Citations, as part of a scientific article, have been a useful tool for authors, since they enable them to refer argumentatively to previously published work without needing to describe it in detail. Citations also allow readers to find prior relevant information on a given topic.

Given the steady increase in the number of scientific articles, in the number of their coauthors, and in the length of the reference list per article, it is not surprising that the number of citations is growing even faster than the number of scientific articles. Citations are often used as a measure of the importance or impact that a given article has had in the scientific community. Citing and self-citing articles can increase the IF, but care should be taken when using it in this way to evaluate the quality of a particular article.

Garfield (1955) was the first to mention the idea of using the IF.30 His main goal was to find an indicator for selecting scientific journals to be included in his database, the Science Citation Index (SCI). Using citations of articles published in journals, Raisig (1960) proposed a measure of journal quality/importance that he called the “Index of Research Potential Realized.”30 In 1960, Garfield standardized the IF and applied it to all journals relevant to the SCI. Winkler gave a comprehensive description of the IF in his reviews of scientific journal publishing (2000) and of the impact factor (2004).30 Over time, in parallel with data on the variability of the original IF, there have been many attempts to change it, but the original concept is still in use.

Thus, the impact factor of a scientific journal is the ratio between the number of citations received by articles published in that journal in a given period and the number of those articles. The IF published in the Journal Citation Reports (Thomson Reuters, New York, USA) is calculated by dividing the number of citations received by the journal in a given year, for articles published during the previous 2 years, by the number of articles published in the same journal during those 2 years. Thus, the numerical value of the impact factor is obtained by dividing the number of citations for the last 2 years by the number of articles published in those same 2 years.

This agrees with Garfield’s original notion.30 The formula for calculating IF looks like this:1,30

IF (e.g., for 2020) = C / P

  • C is the number of citations received in 2020 by items the journal published during 2018 and 2019,

  • P is the number of citable items published in the same journal during 2018 and 2019 (a minimal numerical sketch follows below).
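
As a minimal sketch of this two-year calculation, with hypothetical citation and publication counts (the function name and the figures are assumptions, not JCR data):

# Minimal sketch of the standard two-year impact factor as defined above.
# All numbers are hypothetical and for illustration only.

def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """IF = C / P: C is the number of citations received in the JCR year by items
    the journal published in the two preceding years, and P is the number of
    citable items it published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# Example: 240 citations in 2020 to items published in 2018-2019, and
# 120 citable items published in 2018-2019  ->  IF(2020) = 2.0
print(impact_factor(240, 120))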

Clarification is needed for the term “citable texts.” It is common knowledge that scientific journals may contain texts (e.g., letters, obituaries, conference abstracts) that are rarely cited. It may therefore happen that a journal with a large number of citations in a given year does not have a high IF, because the number of citable texts published in the previous 2 years also counts.

Also, it should be clarified that a high-IF journal does not mean that every article in it has a high citation rate. The IF applies to the journal, not to individual articles. It is quite possible that in a journal with a high IF some of the published articles were not cited at all. Several studies have testified to this discrepancy between a journal's IF and the citations of individual articles published in it.

When it comes to citations, including those used in IF calculations, we need to be aware that there may be self-citations among them. These are defined as citations of an article in a particular journal by another article published in the same journal. Garfield (1994) calculated that self-citations account for about 13% of the total number of citations.1
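
Because journal self-citations can inflate the IF, one hedge is to report the IF both with and without them. The sketch below does this for hypothetical counts; Garfield's figure of about 13% is an empirical average, not a parameter of the formula.

# Illustrative only: the two-year IF with and without journal self-citations,
# using hypothetical counts.

def if_with_and_without_self_citations(total_citations: int,
                                       self_citations: int,
                                       citable_items: int) -> tuple[float, float]:
    """Returns (IF including self-citations, IF excluding self-citations)."""
    if_all = total_citations / citable_items
    if_external = (total_citations - self_citations) / citable_items
    return if_all, if_external

# Example: 300 citations, of which 90 are journal self-citations (30%),
# to 150 citable items published in the previous two years.
print(if_with_and_without_self_citations(300, 90, 150))  # (2.0, 1.4)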

Because publication and citation rates can vary over the years, this can be corrected to some extent by calculating the IF over more than the 2 previous years. An IF calculated over 5 or 10 years is certainly more reliable than the standard (2-year) one. This difference between the standard IF and a 5–10 year IF is especially important when comparisons are made between different fields of science, since citation patterns differ between fields. Of particular relevance to this discussion is the study by Glänzel and Schoepflin,1 which addresses the aging of published data as measured by the citations they receive. Based on their results, the authors suggested that 3 years be taken for calculating the IF, as a useful trade-off between areas of science with relatively rapid aging (e.g., the natural sciences and experimental physics) and those with a longer citation life (e.g., some parts of physics and the social sciences). Garfield accepted that a longer period may be more relevant for calculating the IF in a field such as clinical medicine.30

The Journal Citation Reports classifies science into about 200 categories. An extended number of years may be a practical solution, but the fact remains that some scientific journals are difficult to classify into any one category (e.g., interdisciplinary journals); many could easily fit into several of them.

How the Impact Factor is Published

The impact factor is published in the Journal Citation Reports in June each year for the previous year. The calculations themselves are performed based on the state of all three citation databases (SCI Expanded, SSCI, AHCI) on the first day of March. A journal, in most cases, gains an impact factor after two years of being referenced in the WoS. This means that if the referencing of a journal started in, for example, 2018, it will receive an impact factor for 2020, which will be published in June 2021.

The Science Citation Index (SCI) is closely related to the impact factor. Today it forms the basis of the Web of Science; it was organized and produced by Eugene Garfield.30 The first volume of the Science Citation Index was published in 1961. It shows the number of citations of a particular article according to the journals it covers, and it contains more than 8,000 journals. Its monopoly ended several years ago, and today a real alternative is Scopus, the largest bibliographic database, organized by Elsevier (Amsterdam, The Netherlands), which is more focused on the European region. Neither of these two databases takes into account monographs, which are a primary source of information, or textbooks and student lecture notes.

National journals are very important for the national scientific community. They should primarily serve for training and for brief communication with the scientific community; of course, they should also publish original works. Publishers and the scientific community should strive to have them included in international databases, especially Scopus or the Web of Science.

There are also some new scientometric techniques for evaluating journals and scientists: citation density, citation half-life, the Erdős number (mostly used by mathematicians), and the h-index. All these new instruments use sophisticated statistical and mathematical procedures.

The Journal Citation Reports Science Edition covers over 6,000 journals and approximately 150 disciplines. However, the use of the impact factor, especially the so-called standard or Garfield impact factor, as one of the basic indicators in the evaluation of an individual's work suggests a misunderstanding of its true meaning. The impact factor of a particular journal is a measure of the frequency with which the “average article” in that journal is cited over a period of time. The impact factor of a journal can only be a potential indicator of the value of an article, because it is assumed that the article has undergone a strict review procedure; the true value of that article is obtained a posteriori, that is, through the number of citations it receives and its potential contribution to the value of the journal's impact factor.

The quality of published results of scientific work largely depends on the sources of knowledge used in their preparation, which means that the suitability and relevance of the information used should be taken into account.

SCImago Journal & Country Rank is an Internet portal that publishes indicators on journals and countries from different parts of the world and makes a valuable contribution to the ranking of journals in terms of their qualitative contribution to scientific research.44 This instrument for measuring scientific competitiveness globally has been developed on the basis of the information contained in the Scopus database.

H-Index

Evaluation of scientific productivity and of the published work of researchers and scientists can be done through the so-called h-index.1 This index is calculated from a list of an author's publications ranked by the number of citations they have received. The value of the index is equal to the largest number of documents (N) in the list that have each received N or more citations.

The h-index is thus defined as the largest number of articles that have each received at least that many citations. This index is often used to assess candidates joining university staff or prestigious societies. An h-index of 10–12 is, at prestigious universities, a benchmark for permanent employment without re-election. For membership in the American Physical Society, an h-index of 15–20 is required, and for membership in the US National Academy of Sciences, above 45.1

The physicist Jorge E. Hirsch, aware of the shortcomings of the existing indicators of scientific productivity and echo (the number of published articles, the total number of citations, the average number of citations per article, the number of articles with an above-average number of citations, and the potential value of articles published in journals with a certain IF), introduced an indicator that can measure the wider echo and more recognizable impact of the work of an individual scientist or journal.28-30 He suggested a single number, the “h-index,” as a simple and useful way to characterize a researcher's scientific output.27 A scientist has index h if h of his or her Np articles have each received at least h citations, while the remaining (Np - h) articles have no more than h citations each. In practice, this means that if an author has an h-index of 10, then he has published 10 or more articles, of which 10 have received at least 10 citations each, while his other articles have been cited at most 10 times. The total number of citations, in this case, is at least 100.28-30
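
A minimal sketch of this definition, assuming a hypothetical list of per-article citation counts (the numbers are invented for illustration):

# Minimal sketch of the Hirsch h-index as defined above: h is the largest
# number such that h articles have received at least h citations each.

def h_index(citations_per_article: list[int]) -> int:
    counts = sorted(citations_per_article, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example matching the text: the 10 most cited articles each have at least
# 10 citations (and the rest have fewer), so h = 10.
print(h_index([25, 18, 15, 14, 12, 12, 11, 11, 10, 10, 6, 3, 1]))  # 10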

The h-index, as a scientometric indicator, basically serves to compare scientists from the same field and with approximately the same length of working experience, and the same applies to journals. Two scientists with similar h-indices are comparable in terms of their overall scientific productivity and repercussions, even if their total numbers of articles and citations are very different. Conversely, comparing two scientists (of approximately the same working experience) with a similar number of published articles and/or a similar total number of citations but different h-indices speaks in favor of the “greater recognition” of the scientist with the higher h-index.27

The h-index combines in a specific and balanced way the effects of “quantity” (the number of published articles) and “quality” (the number of citations),1,27,29 and some authors (Masic, Jankovic, Braun, Batista, etc.)1,18,27 believe that the h-index has several advantages: it combines productivity with echo, it is not sensitive to extreme values such as articles without citations or articles with an above-average number of citations, and it directly allows the identification of the most relevant articles in terms of the number of citations received. On the other hand, it is not uncommon for a scientist to publish several important articles that receive an extremely large number of citations and yet for his h-index not to be particularly high.

It is often the case that scientists with a high h-index work in teams and publish articles with a large number of authors (greater than 50), and that they cite each other, as is the case, for example, in high-energy physics. Many authors (Masic, Jankovic, Batista, van Raan, etc.)11,18,23,24,47 warn that, with the h-index, it is important to investigate the influence of the number of authors on the total number of citations. These authors showed that the greater the number of authors, the greater the number of self-citations, which can directly increase the h-index if self-citations are not excluded. On the other hand, it is important to keep in mind that in some narrower scientific fields, for example one that is still developing, self-citation is a logical and expected phenomenon.

When all of the above is taken into account, the h-index basically reflects the recognizability, that is, the consistency, of an individual scientist or journal in a certain area, especially when so-called independent citations are considered. Independent citations are citations that the author receives from colleagues unknown to him, outside his institution and, in the case of small countries, outside his own country.

As with other indicators for the evaluation of scientific work, when interpreting h-index values it is important to take into account not only the discipline or area but also its branches, as well as the topicality of the problem. Hirsch (2005),1 based on his calculations, proposes as benchmarks for the evaluation of physicists at the world's leading research universities h = 12 for promotion to associate professor, h = 18 for full professor, and, for membership in the National Academy of Sciences of the United States of America, an average of h = 45, with some exceptions. He suggests that a successful physicist with 20 years of research should have an h-index of 20, while an h-index of 40 indicates “an outstanding scientist in an extremely successful laboratory.” He also cites the examples of Nobel laureates, whose h-index values range from 70 to 90, and the average h-index of physicists who won the Nobel Prize in the 20 years from 1985 to 2005.1

According to Hirsch, the top 10 scientists in the field of the biosciences had, in the period 1983–2002, a median h-index of 57, which is significantly more than for physicists. But the biosciences are too broad a field for the h-index of a molecular biologist to be easily compared with that of a biologist dealing with ecology, biodiversity, floristics, or zoology. Cronin and Meho1,2,8 conducted a study comparing the h-index and the total number of citations in the field of information science. They analyzed the 31 most-cited scientists from US schools of information science in the period 1999–2005, according to the SSCI. Their h-indices ranged from 5 to 20, with self-citations excluded. They showed that there is a positive correlation between the h-index and the number of citations, which suggests that the total number of citations is indeed a reliable indicator of the repercussions and impact of the work of individual scientists. The mean value of the h-index for information science was 11. Oppenheim analyzed British scientists in the field of library and information science and obtained a mean h-index value of 7.1,2,8

The scientific community has shown great interest in the h-index as a scientometric indicator, so Scopus and, soon afterwards, WoS began to offer, in addition to the number of articles, the number of citations, and the average number of citations, automatic calculation of the h-index, including all types of citations.6,12

In addition to authors, the h-index has become increasingly used as an indicator for journal evaluation. Based on all the facts presented, the h-index is certainly one of the indicators that contribute to the overall assessment of the scientific work of an individual scientist, institution, field, journal, etc.27,28 It should not, however, be considered in isolation, that is, independently of the subject area, the length of the scientist's working life, scientific productivity, coauthorship, the total number of citations, the types of citations, and other relevant parameters.

There are three bibliometric databases for analyzing and evaluating citations and the h-index: Web of Science (Thomson Reuters), Scopus (Elsevier), and Google Scholar. Although Google Scholar and Scopus appear to provide a larger number of citations, reports about them are mixed.40-46

Bibliometric Stanford List of Most-cited Authors in Scopus Database

On 4 December 2021, a symposium titled “Scientometry, Citation, Plagiarism and Predatory in Science Publishing” was held in Sarajevo.18,41-43 The symposium was based on interpretations of the bibliometric Stanford list published in October 2020 in the journal PLOS Biology, which raised the question of the credibility of the data as reported in the media and of whether the Stanford list may have been misinterpreted. Participants of the symposium concluded that “the data must be analyzed more seriously and possibly argued for their accuracy and credibility.”42

The original title of the paper with the Stanford list is “Updated science-wide author databases of standardized citation indicators,” with the dataset published by Elsevier (Amsterdam, the Netherlands), by John P. A. Ioannidis of Stanford University in California (USA), together with Kevin W. Boyack and Jeroen Baas.42 The authors of the study state that the citation impact of the world's scientists is often misinterpreted and, to achieve maximum objectivity, they created a publicly available database of more than 190,000 leading scientists of the world. Using principles of artificial intelligence and algorithm design, the authors correlated several parameters that, in their opinion, are important for the objective evaluation of each scientist. They especially emphasized the importance of distinguishing between the number of citations and their impact. The available database contains standardized information on citations, the h-index, the hm-index, citations of articles in different author/coauthor positions in the analyzed articles, and a composite indicator of citation impact. Scientists are classified into 22 scientific fields and 176 scientific branches. For all scientists who have published at least five articles, field-specific percentiles are given. Collective data for each author/coauthor were analyzed and updated from the beginning of the career until the end of 2020. The selection is based on the top 190,000 by c-score (with and without self-citations) or on a percentile rank within the 2% most cited. The methodology used in preparing the list of scientists with the greatest citation impact was published in the scientific journal PLOS Biology in 2020.18

Speaking about the Stanford list circulating in the scientific community, “we have agreed that it is necessary to suggest that scientometric analysis with the method used by the authors from Stanford University in the USA should take into account two very important variables:18,41 (1) each author's contribution when there are coauthors of the article, so that the number of citations should be divided among the coauthors individually, and not credited to each coauthor as if he were the first author; and (2) the evaluation of the quality of the content of the research results published and stored in the index databases. Only then would the Stanford list be more complete and of better quality. In that case, perhaps half of the authors from that list would drop out, especially if the numbers of citations as the first author or as a coauthor were singled out.” The list is misleading mostly because many publications have been excluded and because the number of citations was not divided by the number of authors per article. Only after these corrections would it be realistic, but then half of the authors would drop out of the existing list.18 The authors who created the Stanford scientometric list of the most-cited authors from articles stored in the Scopus bibliographic database did take into account, methodologically and in great detail, whether someone was the first, last, or only author, and the like. Unfortunately, they did not take into account the number of authors per article. They also considered only the number of citations according to Scopus, where half of our citations are missing (there are almost twice as many on ResearchGate).
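
The first of the symposium's suggestions, dividing each article's citations among its coauthors instead of crediting every coauthor with the full count, corresponds to what bibliometricians call fractional citation counting. A minimal sketch under assumed, hypothetical data:

# Illustrative fractional citation counting, as suggested at the symposium:
# each article's citations are divided by its number of authors before being
# credited to an individual author. The data below are hypothetical.

def fractional_citations(articles: list[dict]) -> float:
    """articles: a list of {'citations': int, 'n_authors': int} records for one
    author. Returns that author's fractionally counted citation total."""
    return sum(a["citations"] / a["n_authors"] for a in articles)

author_articles = [
    {"citations": 40, "n_authors": 1},   # single-author article: full credit
    {"citations": 60, "n_authors": 4},   # credited as 15
    {"citations": 90, "n_authors": 10},  # credited as 9
]
print(fractional_citations(author_articles))  # 64.0 instead of 190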

Also, the criteria for assessing the scientific status of somebody who has built up a scientific or academic career must, besides the indexes mentioned in this text, take into account the authorship of textbooks, books, monographs, etc.; the organization of congresses and scientific conferences or the chairing of scientific sessions at conferences; the editing of internationally recognized indexed scientific journals; membership in scientific associations at the international or national level; special awards at the international level, and so on. These criteria should be important for the quality assessment of a scientist's curriculum. Current academies and academicians can propose them in consultation with scientific bodies and experts at universities in a country, in selected regions, or worldwide.41

PLAGIARISM AND HOW TO AVOID IT

The biggest problem that participants in the academic process meet is plagiarism. Plagiarism is defined as “the intentional or unintentional copying of the words of others” (Fig. 4).31 Fabricating and duplicating results are not so rare; because of their transparent senselessness they do not contribute to science. Plagiarism is one of the most dishonest forms of scientific fraud and is possible at all stages of a study. The problem of plagiarism is one of the burning issues of the modern scientific world. A particularly important problem in publishing, and generally in scientific research, is the plagiarizing of others' ideas, articles, research, etc. Plagiarism is copying from others' works and the illegal taking of intellectual property.31-39 It is the illegal use of intellectual property, or any use of other people's ideas, opinions, or theories, either literally or paraphrased, when the author or the source of the information is not listed. Such a “copy-paste” act constitutes a theft of authorship, which is completely unacceptable in scientific, professional, and student works.

The problem of plagiarism has become one of the most discussed topics in the modern scientific world, especially with the development of standard measures that rank the work of an author. Investment in education, and the education of young research personnel about the importance of scientific research, with particular attention to ethical behavior, has become imperative for academic staff.39 In the wider academic community, plagiarism is a serious breach of ethical standards and implies accountability with disciplinary sanctions.35 It is one of the most common ways of compromising the academic integrity of an author and a cause of constant conflict in the student–teacher relationship. Copying, using, or otherwise exploiting other people's ideas, words, or creations without citing the sources in the appropriate form is forbidden.32 When writing papers it is possible to use other people's words and ideas, but with mandatory labeling and listing of the sources from which these words and ideas are taken. Readers of an article can often recognize from the very sentences what is original work and what is merely a taken part of another's text. The citation of references, as an essential part of any scientific and technical article, contributes to its quality, speaking to the sources and thus to the depth of information on the subject to which the article is dedicated. Detection of plagiarism can be aided by software solutions. Editors have to invest additional effort in developing their base of reviewers as well as in their proper guidance, because, despite the software solutions, reviewers remain the best weapon in the fight against plagiarism. The peer review process should be the key to the successful operation of each journal.39

Plagiarism, as a form of scientific misconduct, dates back to the beginnings of scientific communication itself. According to the World Association of Medical Editors (WAME), plagiarism is the taking of a series of six words,31 or of 7–11 words, or of an overlapping set of 30 letters.37 Although variously defined, plagiarism is basically intended to deceive the reader about one's own scientific contribution.

Plagiarism can take several forms.31 In essence, it is the use of others' ideas, words, inspiration, statements, and linguistic styles as if they were new and original, without citing the sources from which they were taken.32 “Whereas plagiarists formerly took others' tables or text as their own, today there is a growing interest in taking ideas and concepts.”31 Since we live in the time of the digital revolution, the copy/paste option has significantly facilitated plagiarism from the Internet, CD-ROMs, online journals, and other electronic information sources.

Since plagiarism is also present among scientists, high-quality scientific journals pay a lot of attention to intellectual dishonesty. Such intellectual dishonesty breaches the high ethical principles of science, because the authorship of published work is the basis for evaluating the quality of a scientist's work. This behavior is the responsibility of the author; such intellectual dishonesty belongs to the gray zone, is unethical, and requires sanctions.32

Reputable editors of international journals from different scientific fields established the Committee on Publication Ethics (COPE), which, in addition to promoting the scientific principles of fairness, provides guidelines and suggestions for the editors of scientific journals on how to proceed in cases of suspected dishonorable conduct in published studies and in papers submitted for publication.

BIBLIOGRAPHIC INSTRUMENTS FOR SCIENTIFIC KNOWLEDGE EXCHANGE

For scientists and researchers to communicate with colleagues with whom they share common professional interests, there are specialized databases (scientific platforms) such as ResearcherID (www.researcherID.com), Scholaruniverse (www.scholaruniverse.com), and the like.

ResearcherID is a database that, using a unique identification number, allows authors to manage a list of their works. In this way, they can identify potential collaborators, avoid being misidentified as other authors, and determine their h-index. The Marquis publishing house, based in Berkeley Heights, New Jersey, has for decades been publishing biographical literature of various profiles, including works relating to academics. Details of its biographical products are available on the website www.marquiswhoswho.com.1

Historically, scientific biographies have been a significant publishing endeavor, such as the Dictionary of Scientific Biography (DSB). ICT has enabled an automated process of citing sources through so-called citation machines, among which are Mendeley (www.mendeley.com), EndNote, RefWorks, Zotero, Citation Machine, and the like.40

SCIENTIFIC COMPETITION AND THE STATUS OF SCIENTISTS

Science and technology have a key role to play in the development of modern society and of scientific research, provided they are based on ethical principles.18,44-50 Science and scientists can provide answers to the key questions of today that people encounter in everyday life, as demonstrated by the global response to the COVID-19 pandemic, but also to the consequences of economic underdevelopment, to climate change, which world states discuss at conferences, and to the global problem of hunger, which is epidemically present in some parts of the world (Africa in particular). The social and economic status of scientists is especially important for their full engagement in scientific research, and this particularly affects science and scientists in Bosnia and Herzegovina.48-52 Investment in science, and thus in the status of scientists, has long been at a bare minimum (measured by European standards) and has been reduced to a level of mere survival. Such a modest status of science and scientific institutions in B&H is reflected in the citation of its scientists in the Balkan, European, and world databases mentioned in this text. No one in B&H adheres to the Recommendation on the Status of Scientific Researchers from 1974 or to the European Charter for Researchers, which, among other things, advocates, in addition to the need to establish a favorable working environment and adequate professional status for scientists, that research workers must have freedom in their activity. The Charter, under the heading “Research Freedom,” states that: “Researchers should focus their research on the well-being of humanity and on expanding the boundaries of scientific knowledge, while enjoying the freedom of thought and expression and the freedom to identify methods to address scientific issues, under recognized ethical standards.” A careful reading of the text of the European Charter for Researchers also reveals the accompanying Code of Conduct for the Recruitment of Researchers.1

CONCLUSION

Since research in medicine can improve clinical and public health practice, it is necessary to conduct it. To be considered significant scientific work, studies must be conducted according to certain rules and guided by the steps presented in this article. Only quality research with exact results offers the scientific community new information about the examined problem, gives the researcher satisfaction and the possibility of communicating and conducting scientific dialogue with other members of the academic community, and opens opportunities to receive critical review from those who have insight into the research.

The Board of the World Association of Medical Editors (WAME) has made the following recommendations for WAME members.1

Starting from the current situation in the field of quantitative research and the changes caused by the strong development of information and communication technologies (ICT), it can be stated: (1) that the existing disciplinary division is inexpedient, because it has not led to an ordering of the scientific areas; (2) that evidence of questionable validity has been, and still is, used as a criterion for creating that division; and (3) that establishing a single discipline is a necessary precondition for further development, especially after the changes brought about by the emergence of webometrics, that is, scientometrics supported by ICT, and in particular by Internet platforms that help deposit publications in appropriate databases so they can be searched.

The fact is that scientometrics and online databases greatly influence the quality of articles by measuring the scientific content of published articles with the IF, the Scopus h-Index, the Google Scholar index, etc., metrics that today almost every academic or scientific institution requires when electing candidates to an academic or scientific title.
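As a purely illustrative sketch of the first of these metrics (the journal and all numbers below are hypothetical assumptions, not data from this article), the standard two-year Impact factor can be computed as follows:

```python
# Minimal sketch of the standard two-year Impact factor (IF) calculation.
# All figures are hypothetical; they are not taken from this article.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations received in the current year to articles published in the
    previous two years, divided by the number of citable articles the
    journal published in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 420 citations in 2021 to its 2019-2020 articles,
# which numbered 150 citable items.
print(impact_factor(420, 150))  # -> 2.8
```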

In this article, we pointed out that the h-Index is one of a set of valuable measures for determining scientific excellence (bibliometrics also recognizes the m-value as useful). Although the h-Index is a better measure than the citation Impact factor (IF), it is still based on the opinions of other authors. When comparing or assessing the academic or scientific quality of applicants for funding, promotion to an academic title, or prizes, other parameters must also be considered, such as age, career stage, the scientist’s field, awards, leadership of projects, etc.
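To make these two indicators concrete, the following minimal sketch (with hypothetical citation counts and career length, assumed only for illustration) shows how the h-Index and the m-value are commonly derived from an author’s per-article citation counts:

```python
# Minimal sketch of the h-Index and the m-value (Hirsch's m-quotient).
# The citation counts and career length below are hypothetical.

def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_value(h, years_since_first_publication):
    """h-Index divided by the number of years since the first publication."""
    return h / years_since_first_publication

# Hypothetical author: eight articles, publishing for 10 years.
cites = [45, 30, 12, 9, 7, 6, 2, 0]
h = h_index(cites)                  # -> 6 (six articles with >= 6 citations)
print(h, round(m_value(h, 10), 2))  # -> 6 0.6
```

Because the m-value normalizes the h-Index by career length, it allows comparison of researchers at different career stages, which is one reason the conclusion above insists that career stage and field be considered alongside the raw h-Index.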

AUTHOR’S CONTRIBUTION

The author was involved in all steps of the preparation of this article, including final proofreading.

ORCID

Izet Masic https://orcid.org/0000-0002-9080-5456

REFERENCES

1. Masic I, Kujundzic E. Science Editing in Biomedicine and Humanities. Avicena, Sarajevo, 2013.

2. Masic I. Medical publication and scientometrics. J Res Med Sci 2013;18(6):516–521. PMID: 24250704; PMCID: PMC3818625.

3. Masic I, Jankovic SM, Kurjak A, et al. Guidelines for editing Biomedical Journals: recommendation by Academy of Medical Sciences of Bosnia and Herzegovina. Acta Inform Med 2020;28(4):232–236. DOI: 10.5455/aim.2020.28.232-236

4. Masic I, Begic E, Donev MD, et al. Sarajevo Declaration on integrity and visibility of scholarly journals. Croat Med J 2016;57(6):527–529. DOI: 10.3325/cmj.2016.57.527

5. Masic I, Jankovic MS. The basic principles of editing biomedical scientific journals. Int J Biomed Healthc 2020;8(1):6–10. DOI: 10.5455/ijbh.2020.1.6-10

6. Masic I. Writing and editing of scientific papers using BOMRAD structured form and proper style of reference’s citation. Int J Biomed Healthc 2021;9(1):4–14. DOI: 10.5455/ijbh.2021.9.4-14

7. Ufnalska S. EASE guidelines for authors and translators of scientific articles. Int J Biomed Healthc 2020;8(2):129–130. DOI: 10.5455/ijbh.2020.2.129-130

8. Masic I. How to search, write, prepare and publish the scientific papers in the biomedical journals. Acta Inform Med 2011;19(2):68–79. DOI: 10.5455/aim.2011.19.68-79

9. Masic I, Sabzghabaee AM. How clinicians can validate scientific contents? J Res Med Sci 2014;19(7):583–585. PMID: 25364354; PMCID: PMC4214013.

10. Masic I. Peer review - essential for article and journal scientific assessment and validity. Med Arch 2016;70(3):168–171. DOI: 10.5455/medarh.2016.70.168-171

11. Masic I. The Malversations of Authorship - current status in academic community and how to prevent it. Acta Inform Med 2018;26(1):4–9. DOI: 10.5455/aim.2018.26.4-9

12. Masic I, Jankovic SM. Meta–analysing methodological quality of published research: importance and effectiveness. Stud Health Technol Inform 2020;272:229–232. DOI: 10.3233/SHTI200536

13. Masic I, Begic E. Meta–analysis as statistical and analytical method of journal’s content scientific evaluation. Acta Inform Med 2015;23(1):4–11. DOI: 10.5455/aim.2015.23.4-11

14. Masic I. Scientometric analysis: a technical need for medical science researchers either as authors or as peer reviewers. J Res Pharm Pract 2016;5(1):1–6. DOI: 10.4103/2279-042X.176562

15. Masic I, Begic E, Begic N. Validity of scientometric analysis of medical research output. Stud Health Technol Inform 2017;238:246–249. PMID: 28679935.

16. Gustavii B. How to Write and Illustrate Scientific Papers. 2nd ed. Cambridge University Press, Cambridge–New York, 2008:1–2; Jacobsen KH. Health Research Methods: A Practical Guide. Jones & Bartlett Learning, LLC, Ontario, 2012:9–237.

17. Sengor AMC. How scientometry is killing science. Available at: https://www.geosociety.org/gsatoday/archive/24/12/pdf/i1052-5173-24-12-44.pdf. Accessed on: November 20th, 2021.

18. Masic I, Kurjak A, Jankovic MS, et al. On Occasion of the 12th “Days of AMNuBiH 2021” and “SWEP 2021” Conferences, Sarajevo, Bosnia and Herzegovina. Acta Inform Med 2021;29(4):295–310. DOI: 10.5455/aim.2021.4.295-310

19. Masic I, Kurjak A, Zildzic M, et al. On Occasion of the 11th “Days of AMNuBiH 2020” and “SWEP 2020” Conferences, Sarajevo, Bosnia and Herzegovina. Int J Biomed Healthc 2020;8(2):113–128. DOI: 10.5455/ijbh.2020.2.113-128

20. Masic I, Jakovljevic M, Sinanovic O, et al. The Second Mediterranean Seminar on Science Writing, Editing and Publishing (SWEP - 2018), Sarajevo, December 8th, 2018. Acta Inform Med 2018;26(4):284–299. DOI: 10.5455/aim.2018.4.284-299

21. Masic I, Donev D, Sinanovic O, et al. The First Mediterranean Seminar on Science Writing, Editing and Publishing, Sarajevo, December 2–3, 2016. Acta Inform Med 2016;24(6):424–435. DOI: 10.5455/aim.2016.6.424-435

22. Jankovic MS, Masic I. Importance of adequate research design in biomedicine. Int J Biomed Healthc 2019;7(2):64–66. DOI: 10.5455/ijbh.2019.2.64-66

23. Masic I, Jankovic MS. Inflated co-authorship introduces bias to current scientometric indices. Med Arch 2021;75(4):248–255. DOI: 10.5455/medarh.2021.75.248-255

24. Jankovic SM. Low sensitivity and specificity of existing bibliometric indices gives unrealistic picture of an author’s contribution to science. Acta Inform Med 2021;29(1):69–70. DOI: 10.5455/aim.2021.29.69-70

25. Jankovic SM, Masic I. Methodological errors in clinical studies published by medical journals of ex-Yugoslav countries. Acta Inform Med 2020;28(2):84–93. DOI: 10.5455/aim.2020.28.84-93

26. Masic I. The importance of proper citation of references in biomedical articles. Acta Inform Med 2013;21(3):148–155. DOI: 10.5455/aim.2013.21.148-155

27. Masic I. H-index and how to improve it? Donald School J Ultrasound Obstet Gynecol 2016;10(1):83–89. DOI: 10.5005/jp-journals-10009-1446

28. Masic I, Begic E. Scientometric dilemma: is h-index adequate for scientific validity of academic work? Acta Inform Med 2016;24(4):228–232. DOI: 10.5455/aim.24.228-232

29. Masic I. Index factors for assessing the scientific journal validity, its articles and their authors. J Forensic Anthropol 2016;1:03. DOI: 10.35248/2684-1304.16.1.103

30. Masic I. The most influential scientists in the development of medical informatics (17): Eugene Garfield. Acta Inform Med 2017;25(2):145–145. DOI: 10.5455/aim.2017.2.145-145

31. Masic I. Plagiarism in scientific publishing. Acta Inform Med 2012;20(4):208–213. DOI: 10.5455/aim.2012.20.208-213

32. Roig M. Avoiding unethical writing practices. Food Chem Toxicol 2012;50(10):3385–3387. DOI: 10.1016/j.fct.2012.06.043

33. Armstrong JD. Plagiarism: what is it, whom does it offend, and how does one deal with it? AJR Am J Roentgenol 1993;161(3):479–484. DOI: 10.2214/ajr.161.3.8352091

34. Lüscher TF. The codex of science: honesty, precision, and truth – and its violations. Eur Heart J 2013;34(14):1018–1023. DOI: 10.1093/eurheartj/eht063

35. Masic I, Hodzic A, Mulic S. Ethics in medical research and publication. Int J Prev Med 2014;5(9):1073–1082. PMID: 25317288; PMCID: PMC4192767.

36. Masic I. Ethical aspects and dilemmas of preparing, writing and publishing of the scientific papers in the biomedical journals. Acta Inform Med 2012;20(3):141–148. DOI: 10.5455/aim.2012.20.141-148

37. Masic I. Plagiarism in scientific research and publications and how to prevent it. Mater Sociomed 2014;26(2):141–146. DOI: 10.5455/msm.2014.26.141-146

38. Arafat SMY. Plagiarism: an important research misconduct. J Workplace Behav Health 2017;6(2):73–75. DOI: 10.5455/jbh.20170123102746

39. Shoja MM, Arynchyna A (Eds.). A Guide to the Scientific Career. Wiley Blackwell, London, 2019:163–178 (Chapter 19).

40. Masic I, Milinovic K. On-line biomedical databases–the best source for quick search of the scientific information in the biomedicine. Acta Inform Med 2012;20(2):72–84. DOI: 10.5455/aim.2012.20.72-84

41. Masic I. On the Occasion of the Symposium “Scientometrics, Citation, Plagiarism and Predatory in Scientific Publishing”, Sarajevo, 2021. Med Arch 2021;75(6):408–412. DOI: 10.5455/medarh.2021.6.408-412

42. Available at: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000918. Accessed on: November 25th, 2021.

43. Available at: https://elsevier.digitalcommonsdata.com/datasets/btchxktzyw/3. Accessed on: November 25th, 2021.

44. Available at: https://www.scimagojr.com/countryrank.php. Accessed on: November 25th, 2021.

45. Masic I, Jankovic MS. Why registering your research study involving human subjects before its onset? Int J Biomed Healthc 2020;8(2):64–67. DOI: 10.5455/ijbh.2020.2.64-67

46. Masic I, Jankovic MS. Comparative analysis of Web of Science and Pubmed Indexed Medical journals published in former Yugoslav countries. Med Arch 2020;74(4):252–264. DOI: 10.5455/medarh.2020.4.252-264

47. Jankovic SM, Masic I. Evaluation of preclinical and clinical studies published in medical journals of Bosnia and Herzegovina: methodology issues. Acta Inform Med 2020;28(1):4–11. DOI: 10.5455/aim.2020.28.4-11

48. Masic I. Bosnian and Herzegovinian medical scientists in PubMed database. Med Arch 2013;67(2):147–150. DOI: 10.5455/medarh.2013.67.147-151

49. Masic I. Evaluation of the Medical Academic Community of Bosnia and Herzegovina based on scopus parameters. Med Arch 2017;71(3):164–168. DOI: 10.5455/medarh.2017.71.164-168

50. Balogun J, Mamuzo E, Okonofua F, et al. Bibliometric profile of the African Academy of Sciences medical and health sciences fellows. Pan Afr Med J 2021;38:60. DOI: 10.11604/pamj.2021.38.60.21004

51. Lelo S, Zujo Zekic D, Kasic Lelo M. “H-indeks” kao pokazatelj naučne uspješnosti istraživača Univerziteta u Sarajevu [The h-Index as an indicator of the scientific success of researchers at the University of Sarajevo]. EDUCA;13(13):157–160.

52. Ufnalska S. Ten years of EASE guidelines. Int J Biomed Healthc 2020;8(2):4–5. DOI: 10.5455/ijbh.2020.1.4-5

________________________
© The Author(s). 2022 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.