VerifyThis – Verification Competition with a Human Factor

Peer-reviewed publication in conference proceedings

Author(s): Gidon Ernst, Marieke Huisman, Wojciech Mostowski and Mattias Ulbrich
In: 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019)
Publisher: Springer
Year: 2019
Pages: 176–195
DOI: 10.1007/978-3-030-17502-3_12

Abstract

VerifyThis is a series of competitions that aims to evaluate the current state of deductive tools to prove functional correctness of programs. Such proofs typically require human creativity, and hence it is not possible to measure the performance of tools independently of the skills of their users. Similarly, solutions can be judged only by humans. In this paper, we discuss the role of the human in the competition setup and explore possible future changes to the current format. Regarding the impact of VerifyThis on deductive verification research, a survey conducted among the previous participants shows that the event is a key enabler for gaining insight into other approaches, and that it fosters collaboration and exchange.

BibTeX

@InProceedings{ErnstEtAl2019,
    author      = {Gidon Ernst and
                   Marieke Huisman and
                   Wojciech Mostowski and
                   Mattias Ulbrich},
    editor      = {Dirk Beyer and
                   Marieke Huisman and
                   Fabrice Kordon and
                   Bernhard Steffen},
    title       = {{VerifyThis} -- Verification Competition with a Human Factor},
    booktitle   = {25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems ({TACAS} 2019)},
    year        = {2019},
    publisher   = {Springer},
    address     = {Cham},
    month       = apr,
    eventdate   = {2019-04-06/2019-04-11},
    venue       = {Prague, Czech Republic},
    pages       = {176--195},
    abstract    = {VerifyThis is a series of competitions that aims to evaluate the current state of deductive tools to prove
                   functional correctness of programs. Such proofs typically require human creativity, and hence it is not possible
                   to measure the performance of tools independently of the skills of their users. Similarly, solutions can be judged
                   only by humans. In this paper, we discuss the role of the human in the competition setup and explore possible
                   future changes to the current format. Regarding the impact of VerifyThis on deductive verification research,
                   a survey conducted among the previous participants shows that the event is a key enabler for gaining insight into
                   other approaches, and that it fosters collaboration and exchange.},
    isbn        = {978-3-030-17502-3},
    doi         = {10.1007/978-3-030-17502-3_12}
}