Research Group Security • Usability • Society (SECUSO)

Cast-as-Intended Verifiability

Internet voting continues to generate great interest: a recent survey in Germany found that more than 50% of eligible voters would cast their vote over the Internet in federal elections. Despite this interest, security experts have expressed concern over the integrity of Internet voting. Verifiability offers some assurance of the integrity of votes cast in an election, both in traditional and in Internet-based elections. Voters, however, have to carry out additional steps to verify the integrity of their individual votes, the so-called cast-as-intended verifiability. The research group has investigated the human factors that affect voters' ability to perform these steps in a series of studies.

Mental models

Studies report that voters are confused about the concept of and motivation for verifiability. This confusion shows the need to investigate voters' mental models of verifiability in voting, so that future communication of verifiability can be based on these models. The research group therefore conducted an online study aiming to identify mental models of, and terms for, verifiability. The mental models were defined as ‘voters’ knowledge, beliefs and attitudes of verifiability as they cast votes in postal voting and paper-based voting at the polling station’. The participants in the study were asked the following questions:

  • how they could tell that their individual postal vote was not modified or removed on its way to the town hall and into the ballot box,
  • how they could tell that their postal or paper vote was not modified or removed from the ballot box,
  • how they could tell that their postal or paper vote was included in the final tally, 
  • how they could tell that all postal or paper votes were included in the final tally.

The answers were analysed using an open coding approach. From the analysis, five mental models were identified: Trusting, No Knowledge, Observer, Personal Involvement and Matching.

More information about the results of the analysis can be found here.

Usability of Benaloh Challenge

A widely adopted technique for supporting cast-as-intended verifiability is the so-called Benaloh Challenge, which is implemented by several voting schemes, most notably the Helios voting system used for a number of legally binding elections. With this challenge, voting proceeds as follows. Voters commence by making their choice, which the voting client then encrypts. The Benaloh Challenge subsequently gives voters two options: (1) to cast, or (2) to verify. In the first case they cast the encrypted vote. In the second they verify that the encrypted vote accurately reflects their expressed choice. Because verification, as implemented by the Benaloh Challenge, is not compatible with vote secrecy, verified votes must be discarded. Verification is performed with the assistance of a so-called verifier: software running either on the voting device or on a supplementary device, such as a smartphone. The research group has conducted a number of studies evaluating the Benaloh Challenge as implemented in Helios, and has proposed and evaluated a number of improvements.
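The cast-or-verify flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the Helios implementation: a hash commitment over explicit randomness stands in for randomized public-key encryption, and all function names are hypothetical.

```python
import hashlib
import secrets

def encrypt(choice: str, randomness: bytes) -> str:
    # Stand-in for randomized public-key encryption: a commitment
    # binding the voter's choice to the encryption randomness.
    return hashlib.sha256(randomness + choice.encode()).hexdigest()

def voting_client(choice: str):
    r = secrets.token_bytes(16)       # fresh randomness per ballot
    ciphertext = encrypt(choice, r)   # shown to the voter before the challenge
    return ciphertext, r

def verifier(ciphertext: str, choice: str, revealed_r: bytes) -> bool:
    # Independent verifier software re-encrypts the claimed choice with
    # the revealed randomness and compares against the ciphertext.
    return encrypt(choice, revealed_r) == ciphertext

# The voter chooses to verify: the client must reveal its randomness.
ct, r = voting_client("Alice")
assert verifier(ct, "Alice", r)       # audited ballot checks out
# Revealing r breaks vote secrecy, so this ballot is discarded and the
# choice is freshly re-encrypted before it may actually be cast.
ct_cast, _ = voting_client("Alice")
```

The key point the sketch captures is why verified votes must be discarded: once the randomness is revealed, anyone holding it can link the ciphertext to the choice.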

 

Cognitive Walkthrough

The research group has analysed the usability of the Benaloh Challenge as implemented in Helios via cognitive walkthrough. Cognitive walkthrough is a usability inspection technique in which experts in the field explore a design in order to evaluate how easy it is to learn. Security, electronic voting and usability experts inspected the Helios user interface by going through a fictitious university president election and evaluating its understandability. The focus of the analysis was the voter’s interaction with the system, and in particular ballot casting combined with the mechanisms for verifying whether the ballot is cast as intended. During several sessions, each step of the ballot casting and verification process was carefully considered from the point of view of the voter, analysing the steps they would have to follow and the instructions and cues that would help them along the way. The goal was to capture the functionality provided to the voter in each step and to assess whether the provided instructions support the voter in deciding which functionality to use and in understanding the corresponding next step.

The assessment revealed a number of challenges voters would face and how these challenges affect their ability to cast and verify a ballot. These findings are divided into three categories: general usability, verifiability procedure and usable security findings. In the first category are findings that result from the Helios implementation being a work in progress; these are easy to correct. Those in the remaining two categories are more challenging to resolve, as average voters are not used to verifiability and may not be familiar or comfortable with computer security issues. Overall, the assessment showed that the Helios interface is not yet ready for use by average voters. Specifically, the cast-as-intended verification process is long and tedious and may result in a voter either not verifying their vote, or failing to complete the vote casting process.

More information about the study and the results of the analysis can be found here.

 

Improvements and User Study

Based on the findings from the cognitive walkthrough, a new version of the Helios interface was proposed to improve the usability of the vote casting and cast-as-intended verification process. The major change in the new proposal is the automation of parts of the verification process by involving external institutions, hence referred to as the automatic approach. Further modifications include shortening the verification code and using more consistent and understandable wording.

In order to evaluate the proposed new interfaces, a preliminary user study with 34 participants was undertaken. The main purpose of the study was to evaluate whether vote casting and the individual cast-as-intended verification step were user friendly, and whether all voters who wanted to verify were able to do so. The election used was a mock mayoral election in Darmstadt. The study was conducted as a lab test rather than a field study, and an eye-tracking device was used to check whether participants really verified the codes output to them by the voting system. The results show that the user friendliness of the system reached an acceptable level, in particular improving on the manual approach, and that most participants were able to verify their votes using the improved interfaces. However, most participants understood neither the need for verification nor the reason why the final vote to be cast could not be verified; the whole idea raised concerns among them about vote privacy. Participants furthermore complained that the verification process was too cumbersome.

More information about the study and the results of the analysis can be found here.

 

Proposals for Alternative Processes and Their Evaluation

Following the evaluation of the improvements to the Helios interface, a further proposal to ease the verification process was made: the so-called mobile approach. The approach relies on a smartphone app, which would be developed by several trusted institutes. Voters interact with the app by scanning a QR code, thereby transferring to the app the verification data output to them by the voting system. Voters decide which institute to trust and download the app from that institute; voters with a background in computer science might instead decide to implement the app themselves. The app weakens the integrity assumptions that Helios relies on: in contrast to Helios, the approach only requires that the smartphone or the app on the one hand, and the voting client or the voting platform on the other, do not collude. More information about the proposed approach can be found here.
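The QR-code hand-off in the mobile approach can be illustrated with a short Python sketch. The payload format, the hash-based check, and all names are assumptions made for illustration only; the published approach specifies its own verification data and cryptography.

```python
import base64
import hashlib
import json
import secrets

def make_qr_payload(choice: str, randomness: bytes, ciphertext: str) -> str:
    # Hypothetical payload the voting system renders as a QR code:
    # the claimed choice, the encryption randomness, and the ciphertext.
    return json.dumps({
        "choice": choice,
        "r": base64.b64encode(randomness).decode(),
        "ct": ciphertext,
    })

def app_verify(payload: str) -> bool:
    # The trusted institute's app decodes the scanned payload and
    # re-derives the ciphertext; a mismatch means the voting client
    # did not encrypt what the voter selected.
    d = json.loads(payload)
    r = base64.b64decode(d["r"])
    return hashlib.sha256(r + d["choice"].encode()).hexdigest() == d["ct"]

# The voting system displays the QR code; the voter's app scans it.
r = secrets.token_bytes(16)
ct = hashlib.sha256(r + b"Alice").hexdigest()
assert app_verify(make_qr_payload("Alice", r, ct))
```

Scanning replaces the manual code comparison: the app, rather than the voter, performs the check, which is exactly where the usability gain of this approach comes from.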

In order to compare all of the proposed improvements with the original Helios implementation, a user study was conducted. The study was performed as a comparative between-subjects lab study with 95 participants. As the scenario for the study, the German Federal Parliament (Bundestag) election, which took place on September 24th 2017, was chosen. The study was analysed both quantitatively and qualitatively. The quantitative evaluation showed that the automatic and mobile approaches are significantly more efficient than the Helios implementation. Furthermore, regarding the effectiveness of verification, 81.25% of participants were able to verify their vote using the automatic and mobile approaches, compared to 61.3% for the Helios implementation. The qualitative evaluation revealed that participants using the mobile approach encountered problems that can be mitigated by improving the implementation, such as time delays in scanning the QR code. The other approaches, however, suffer from problems that cannot be mitigated by implementation or interface improvements, such as the need for participants to manually compare verification codes, which is inherent to those approaches. It can therefore be concluded that the mobile approach can be recommended for use in elections; however, as not all voters have access to smartphones, the automatic approach should be offered as an alternative.

More information about the study and the results of the analysis can be found here.

Acceptance of Code Voting

Another well-known approach to address the issue of a malicious voting platform is code voting. Code voting systems differ regarding their security level: some ensure either vote secrecy or vote integrity, while others ensure both. Generally, the idea behind code voting is that election authorities provide voters with so-called code sheets, distributed via postal mail. In order to cast a vote, voters enter the voting code assigned to their preferred candidate and/or party. Upon receiving the voting code, the election authorities acknowledge its receipt by sending the voter a so-called confirmation code. At this step voters are encouraged to compare the confirmation code displayed by their voting device with the confirmation code on their code sheet. Since a potentially compromised voting device knows neither any valid voting code other than the one entered by the voter, nor the relation between voting codes and candidates, both vote secrecy and vote integrity are ensured against such devices.
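The code sheet mechanism described above can be sketched as follows. This is a simplified illustration with hypothetical names; for brevity the same dictionary serves both as the voter's paper sheet and as the authority's record of that voter's codes, whereas a real system would keep them strictly separate.

```python
import secrets

CANDIDATES = ["Alice", "Bob", "Carol"]

def generate_code_sheet() -> dict:
    # The election authority prints one sheet per voter and mails it;
    # the voting device never sees the sheet.
    return {c: {"voting_code": secrets.token_hex(4),
                "confirmation_code": secrets.token_hex(4)}
            for c in CANDIDATES}

def authority_receive(sheet: dict, submitted_voting_code: str):
    # The authority looks up which candidate the code stands for and
    # acknowledges receipt with the matching confirmation code.
    for candidate, codes in sheet.items():
        if codes["voting_code"] == submitted_voting_code:
            return candidate, codes["confirmation_code"]
    raise ValueError("invalid voting code")

# The voter casts for Bob by typing Bob's voting code into the device.
sheet = generate_code_sheet()
recorded, confirmation = authority_receive(sheet, sheet["Bob"]["voting_code"])
# The voter compares `confirmation` with the confirmation code printed
# next to Bob on the paper sheet.
assert recorded == "Bob"
assert confirmation == sheet["Bob"]["confirmation_code"]
```

Because the device only ever handles opaque random codes, it learns neither the vote (secrecy) nor how to substitute a valid code for a different candidate (integrity).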

Nevertheless, the security gains come at the cost of usability losses: Voters have to enter and compare random codes, rather than just selecting (clicking) their preferred candidate and/or party from a given list. The competition between security and usability is well-known, and both of these aspects are of fundamental importance to the acceptance of new voting technologies. Determining to which extent voters are ready to trade usability for security has a very high practical relevance, as it would allow decision makers to identify the adequate Internet voting system with respect to their election setting.

The research group therefore conducted a user study in the context of the university elections at TU Darmstadt, in which three different voting systems were considered: one that is vulnerable to compromised voting devices (the secure platform problem) and two code voting systems providing different levels of security. A total of 23 participants took part in the study. Participants were required to cast a vote using all three systems. After casting their vote with each system, participants filled in the System Usability Scale (SUS) questionnaire. Then, participants were asked to indicate which system they would prefer to use in the real university elections.
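For readers unfamiliar with the SUS questionnaire used here: it consists of ten statements answered on a 1-5 Likert scale, and the standard scoring rule (Brooke's original formulation) converts the answers to a single 0-100 score. A small sketch of that rule, with an illustrative response set:

```python
def sus_score(responses: list) -> float:
    # Standard SUS scoring: odd-numbered items contribute (answer - 1),
    # even-numbered items contribute (5 - answer); the sum is scaled
    # by 2.5 to yield a score between 0 and 100.
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten answers between 1 and 5")
    total = 0
    for item, answer in enumerate(responses, start=1):
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5

# Example: a fairly positive questionnaire (answers are made up).
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # → 80.0
```

Note that even-numbered SUS items are negatively worded, which is why their scoring is inverted; comparing raw item averages across systems without this inversion would be misleading.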

The results of the study show that a majority of the participants (65%) preferred the system that provided the highest level of security, protecting both vote secrecy and vote integrity against a malicious voting device, even though that system received the lowest usability score. In order to identify the trade-off between security and usability, i.e. to derive a quantitative model describing how much usability voters are willing to sacrifice for a system with higher security, a multinomial logit analysis was conducted. The analysis showed that voters prefer systems with higher security assurance, unless the security gains come at the cost of more than approximately 26 usability points (on a scale of 0-100) on average.

More information about the study can be found here.

Publications

Mental Models of Verifiability in Voting: Olembo, M.; Bartsch, S.; Volkamer, M. 2013. E-Voting and Identity: 4th International Conference, Vote-ID 2013, Guildford, UK, July 17-19, 2013. Ed.: James Heather et al., 142-155, Springer, Berlin. doi:10.1007/978-3-642-39185-9_9

Usability Analysis of Helios - An Open Source Verifiable Remote Electronic Voting System: Karayumak, F.; Olembo, M.; Kauer, M.; Volkamer, M. 2011. EVT/WOTE '11: 2011 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections, San Francisco, CA, August 8-9, 2011, 16 pp., USENIX Association, Berkeley, Calif.

User Study of the Improved Helios Voting System Interface: Karayumak, F.; Kauer, M.; Olembo, M.; Volk, T.; Volkamer, M. 2011. 1st Workshop on Socio-Technical Aspects in Security and Trust (STAST 2011), Milan, Italy, September 8th 2011. Ed.: Giampaolo Bella et al., 37-44, IEEE Digital Library, Piscataway, NJ. doi:10.1109/STAST.2011.6059254

Helios Verification: To Alleviate, or to Nominate: Is That The Question, Or Shall We Have Both?: Neumann, S.; Olembo, M. M.; Renaud, K.; Volkamer, M. 2014. International Conference on Electronic Government and the Information Systems Perspective (EGOVIS), Munich, Germany, September 1-3, 2014. Ed.: A. Kö, 246-260, Springer, Cham. doi:10.1007/978-3-319-10178-1_20

What Did I Really Vote For? - On the Usability of Verifiable E-Voting Schemes: Marky, K.; Kulyk, O.; Renaud, K.; Volkamer, M. 2018. Conference on Human Factors in Computing Systems (CHI), Montreal, QC, Canada, April 21 - 26, 2018, Paper 176/1-13, ACM, New York (NY). doi:10.1145/3173574.3173750

Nothing Comes for Free: How Much Usability Can You Sacrifice for Security?: Kulyk, O.; Neumann, S.; Budurushi, J.; Volkamer, M. 2017. IEEE security & privacy, 15 (3), 24-29. doi:10.1109/MSP.2017.70