Institutional Repository
Technical University of Crete


On evaluating the quality of a computer science/computer engineering conference

Loizidis Orestis-Stavros, Koutsakis Polychronis

Simple Record


URI: http://purl.tuc.gr/dl/dias/49609B29-6374-4A1C-989A-D0056E0A64A9
Identifier: https://www.sciencedirect.com/science/article/pii/S1751157716301808?via%3Dihub
Identifier: https://doi.org/10.1016/j.joi.2017.03.008
Language: en
Size: 12 pages
Title: On evaluating the quality of a computer science/computer engineering conference
Creator: Loizidis Orestis-Stavros
Creator: Koutsakis Polychronis
Publisher: Elsevier
Abstract: The Peer Reputation (PR) metric was recently proposed in the literature in order to judge a researcher's contribution through the quality of the venue in which the researcher's work is published. PR, proposed by Nelakuditi et al., ties the selectivity of a publication venue to the reputation of the first author's institution. By computing PR for a percentage of the papers accepted in a conference or journal, a more solid indicator of a venue's selectivity than the paper Acceptance Ratio (AR) can be derived. In recent work we explained the reasons for which we agree that PR offers substantial information that is missing from AR; however, we also pointed out several limitations of the metric. These limitations make PR inadequate, if used only on its own, to give a solid evaluation of a researcher's contribution. In this work, we present our own approach for judging the quality of a Computer Science/Computer Engineering conference venue, and thus, implicitly, the potential quality of a paper accepted in that conference. Driven by our previous findings on the adequacy of PR, as well as our belief that an institution does not necessarily “make” a researcher, we propose a Conference Classification Approach (CCA) that takes into account a number of metrics and factors in addition to PR: the paper's impact and the authors' h-indexes. We present and discuss our results, based on data gathered from close to 3000 papers from 12 top-tier Computer Science/Computer Engineering conferences belonging to different research fields. To evaluate CCA, we compare our conference rankings against multiple publicly available rankings based on evaluations from the Computer Science/Computer Engineering community, and we show that our approach achieves a very comparable classification.
Type: Peer-Reviewed Journal Publication
License: http://creativecommons.org/licenses/by/4.0/
Date: 2018-05-14
Date of Publication: 2017
Subject: Author affiliations
Subject: Conference evaluation
Subject: h-index
Subject: Paper impact
Bibliographic Citation: O.-S. Loizides and P. Koutsakis, "On evaluating the quality of a computer science/computer engineering conference," J. Informetr., vol. 11, no. 2, pp. 541-552, May 2017. doi: 10.1016/j.joi.2017.03.008
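The abstract names the ingredients of CCA (Peer Reputation, paper impact, author h-indexes) but this record does not reproduce the exact combination rule, which is defined in the cited paper. The following is a minimal Python sketch of a composite score in that spirit, assuming a simple weighted average; the weights, the saturating normalisations, and the names Paper, paper_score, and conference_score are all illustrative assumptions, not the authors' method.

from dataclasses import dataclass

@dataclass
class Paper:
    peer_reputation: float        # PR tied to the first author's institution, assumed scaled to 0..1
    citations: int                # proxy for the paper's impact
    author_h_indexes: list[int]   # h-index of each author

def paper_score(p: Paper, w_pr: float = 0.4, w_impact: float = 0.3, w_h: float = 0.3) -> float:
    """Weighted combination of PR, impact, and mean author h-index (hypothetical weights)."""
    mean_h = sum(p.author_h_indexes) / len(p.author_h_indexes)
    # Map unbounded counts into 0..1 with simple saturating transforms (illustrative only).
    impact = p.citations / (p.citations + 10)
    h_component = mean_h / (mean_h + 20)
    return w_pr * p.peer_reputation + w_impact * impact + w_h * h_component

def conference_score(papers: list[Paper]) -> float:
    """Average per-paper score over a sample of a conference's accepted papers."""
    return sum(paper_score(p) for p in papers) / len(papers)

# Example: score a conference from two sampled papers.
print(conference_score([Paper(0.8, 35, [12, 25]), Paper(0.5, 4, [6])]))

Conferences would then be classified by ranking or bucketing these scores; because all three components are normalised to 0..1, no single factor (institution, citations, or h-index) can dominate the result, which mirrors the abstract's point that PR alone is inadequate.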
