Institutional Repository
Technical University of Crete
On evaluating the quality of a computer science/computer engineering conference

Loizidis Orestis-Stavros, Koutsakis Polychronis

URI: http://purl.tuc.gr/dl/dias/49609B29-6374-4A1C-989A-D0056E0A64A9
Identifier: https://www.sciencedirect.com/science/article/pii/S1751157716301808?via%3Dihub
Identifier: https://doi.org/10.1016/j.joi.2017.03.008
Language: en
Extent: 12 pages
Title: On evaluating the quality of a computer science/computer engineering conference
Creator: Loizidis Orestis-Stavros
Creator: Λοϊζιδης Ορεστης-Σταυρος (el)
Creator: Koutsakis Polychronis
Creator: Κουτσακης Πολυχρονης (el)
Publisher: Elsevier
Content Summary: The Peer Reputation (PR) metric was recently proposed in the literature to judge a researcher's contribution through the quality of the venue in which the researcher's work is published. PR, proposed by Nelakuditi et al., ties the selectivity of a publication venue to the reputation of the first author's institution. By computing PR for a percentage of the papers accepted at a conference or in a journal, a more solid indicator of a venue's selectivity than the paper Acceptance Ratio (AR) can be derived. In recent work we explained why we agree that PR offers substantial information that is missing from AR; however, we also pointed out several limitations of the metric. These limitations make PR inadequate, if used on its own, to give a solid evaluation of a researcher's contribution. In this work, we present our own approach for judging the quality of a Computer Science/Computer Engineering conference venue, and thus, implicitly, the potential quality of a paper accepted at that conference. Driven by our previous findings on the adequacy of PR, as well as our belief that an institution does not necessarily "make" a researcher, we propose a Conference Classification Approach (CCA) that takes into account a number of metrics and factors in addition to PR, namely the paper's impact and the authors' h-indexes. We present and discuss our results, based on data gathered from close to 3000 papers from 12 top-tier Computer Science/Computer Engineering conferences belonging to different research fields. To evaluate CCA, we compare our conference rankings against multiple publicly available rankings based on evaluations from the Computer Science/Computer Engineering community, and we show that our approach achieves a very comparable classification.
Type of Item: Peer-Reviewed Journal Publication
Type of Item: Δημοσίευση σε Περιοδικό με Κριτές (el)
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2018-05-14
Date of Publication: 2017
Subject: Author affiliations
Subject: Conference evaluation
Subject: h-index
Subject: Paper impact
Bibliographic Citation: O.-S. Loizides and P. Koutsakis, "On evaluating the quality of a computer science/computer engineering conference," J. Informetr., vol. 11, no. 2, pp. 541-552, May 2017. doi: 10.1016/j.joi.2017.03.008
