The evaluation system used here (updated Jan 02, 2020)

Publications in this journal are not peer-reviewed, at least not in the usual sense of a pre-publication process in which 2-3 referees evaluate a manuscript. But to give you an idea of what other readers think about the papers, and to let you become a referee yourself, I introduced a post-publication reviewing system in which all readers may rate the papers or comment on them.

That was the original idea. To prevent misuse of the comment function, I asked commenters to leave their real names together with their comments -- and have received no comments so far. (I understand this; I would also hesitate to publish an ad hoc comment without having read the paper carefully, and time is always short.)

Therefore, and to allow for spontaneous evaluations, I introduced a rating system in which everyone could rate a paper between 1 (poor) and 5 (excellent). To avoid repeated ratings from the same reader, I blocked the reader's IP address for a certain time after each rating. During that time readers could change their ratings but could not accumulate hundreds of ratings on the same paper. That worked fine for quite a while. For some months now, however, this restriction in particular appears to have become a challenge for certain visitors. Some made hundreds of ratings within a few seconds (faster than could be entered by mouse clicks); these attempts were successfully blocked by the implemented algorithm, and the repeated ratings were not counted. But several visitors made repeated ratings from different IP addresses, which were then indeed counted several times. In the end, some publications had many more ratings than counted downloads.
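For readers curious how such a throttle might look in practice, here is a minimal sketch of the logic described above, written in Python. It is an illustration only: the site's actual implementation is not published, and every name in it (RatingStore, COOLDOWN_SECONDS, submit) is hypothetical.

import time

# Hypothetical blocking window after a rating (the real duration is not stated).
COOLDOWN_SECONDS = 24 * 3600

class RatingStore:
    def __init__(self):
        # (paper_id, ip_address) -> (last rating, time it was submitted)
        self._last = {}
        # paper_id -> list of counted ratings
        self.ratings = {}

    def submit(self, paper_id, ip_address, rating):
        """Count a new rating, or replace the previous one if the same IP
        rates the same paper again within the cooldown window."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 (poor) and 5 (excellent)")
        now = time.time()
        key = (paper_id, ip_address)
        counted = self.ratings.setdefault(paper_id, [])
        previous = self._last.get(key)
        if previous is not None and now - previous[1] < COOLDOWN_SECONDS:
            # Same IP within the blocking window: the reader may change the
            # rating, but it is not counted a second time.
            counted.remove(previous[0])
        counted.append(rating)
        self._last[key] = (rating, now)

As the text notes, a scheme of this kind is defeated as soon as a rater switches IP addresses, because each new address starts with a clean slate.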

I don't know the intentions behind these "attacks". Maybe they were simply tests of whether someone could be more clever than my algorithm. Maybe they were attempts to create certain patterns in the rating graphs. Even the offer of a special "playground for test raters" did not stop the increasing misuse of the rating system, so I finally decided to remove that "goody" from my website. As stated elsewhere, my primary interest is in research, not in developing ever trickier algorithms to stop readers from trying to be trickier still.

Advantages and disadvantages of rating systems

If you are a working scientist, you have probably thought about the advantages and disadvantages of different evaluation systems. In the usual reviewing process, the publication of a new manuscript depends on the comments of a few referees from the peer community. Among the clear benefits of this system are the discovery and removal of (some) errors and, often enough, an improvement in the clarity of presentation. But the system also has disadvantages. One is a tendency of the peer community to reproduce common opinions and to restrict research to mainstream ideas and models. (Imagine Galileo Galilei's findings going through a standard peer-review process in his day.) Another disadvantage is the limitation to very few reviewers, all certainly experts, but likely not all experts in exactly the topic the manuscript is about. The latter restriction might be overcome in a post-publication rating system in which (if taken seriously) all experts in the field may evaluate the paper and make their comments.

A (merely organisational) restriction is that many science indexes, such as PubMed and Web of Science, list exclusively papers from peer-reviewed journals. Given that certain new ("predatory") journals tend to apply peer review rather carelessly just to collect publication fees, that restriction no longer seems adequate. Papers from these journals, although perhaps not properly evaluated, will be listed in the science indexes, while other papers, even when highly rated by the community, will not. I think this decision needs revision and that other, more appropriate quality indicators than the usual peer-review process should be developed.
