Programming environments and game environments share many of the same characteristics, such as requiring their users to understand strategies and solve difficult challenges. Yet only game designers have capitalized on methods that consistently keep their users engaged. Consequently, software engineers have become increasingly interested in understanding how these game experiences can be transferred to programming experiences, a process termed gamification.
In this perspective paper, we offer a formal argument that gamification as applied today is predominantly narrow, placing emphasis on the reward aspects of game mechanics at the expense of other important game elements, such as framing. We argue that more authentic game experiences are possible when programming environments are re-conceptualized and assessed as holistic, serious games. This broad gamification enables us to more effectively apply and leverage the breadth of game elements to the construction and understanding of programming environments.
Large software organizations are transitioning to event data platforms as they culturally shift to better support data-driven decision making. This paper offers a case study at Microsoft during such a transition. Through qualitative interviews of 28 participants and a quantitative survey of 1,823 respondents, we catalog a diverse set of activities that leverage event data sources, identify challenges in conducting these activities, and describe tensions that emerge in data-driven cultures as event data flows through these activities within the organization. We find that the use of event data spans every job role in our interviews and survey, that different perspectives on event data create tensions between roles or teams, and that professionals report social and technical challenges across activities.
Grounded theory is an important research method in empirical software engineering, but it is also time consuming, tedious, and complex. This makes it difficult for researchers to assess whether threats, such as missing themes or sample bias, have inadvertently materialized. To better assess such threats, our new idea is that we can automatically extract knowledge from social news websites, such as Hacker News, to easily replicate existing grounded theory research and then compare the results. We conduct a replication study on static analysis tool adoption using Hacker News. We confirm that even a basic replication and analysis using social news websites can offer additional insights into existing themes in studies, while also identifying new themes. For example, we identified security as a theme that was not discovered in the original study on tool adoption. As a long-term vision, we consider techniques from the discipline of knowledge discovery to make this replication process more automatic.
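One lightweight way to mine such discussions, as a sketch only, is to pull items from the public Hacker News API and tag them against keyword lists. This is not the replication pipeline from the study: the `THEMES` dictionary and the `tag_themes` helper are illustrative assumptions, and real grounded theory coding requires systematic qualitative analysis rather than fixed keywords.

```python
import json
import re
from urllib.request import urlopen

def fetch_item(item_id):
    """Fetch one story or comment from the public Hacker News API."""
    url = f"https://hacker-news.firebaseio.com/v0/item/{item_id}.json"
    with urlopen(url) as resp:
        return json.load(resp)

# Hypothetical keyword lists for two candidate themes; a real replication
# would derive its codes from the data, not from a fixed dictionary.
THEMES = {
    "security": ["security", "vulnerability", "cve"],
    "false positives": ["false positive", "noise"],
}

def tag_themes(text, themes=THEMES):
    """Return the names of themes whose keywords appear in the text."""
    lowered = text.lower()
    return {name for name, keywords in themes.items()
            if any(re.search(r"\b" + re.escape(kw) + r"\b", lowered)
                   for kw in keywords)}
```

For example, `tag_themes("The linter flagged a security vulnerability.")` would report the security theme, matching the kind of signal the replication surfaced that the original study had not.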
Spreadsheets are perhaps the most ubiquitous form of end-user programming software. This paper describes a corpus, called Fuse, containing 2,127,284 URLs that return spreadsheets (and their HTTP server responses), and 249,376 unique spreadsheets, contained within a public web archive of over 26.83 billion pages. Obtained using nearly 60,000 hours of computation, the resulting corpus exhibits several useful properties over prior spreadsheet corpora, including reproducibility and extendability. Our corpus is unencumbered by any license agreements, available to all, and intended for wide usage by end-user software engineering researchers. In this paper, we detail the data and the spreadsheet extraction process, describe the data schema, and discuss the trade-offs of Fuse relative to other corpora.
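As a sketch of the kind of filtering such an extraction involves, the following flags candidate spreadsheet URLs by file extension or served MIME type. This is a hypothetical heuristic, not the actual Fuse extraction process; the extension and MIME-type lists are illustrative assumptions.

```python
# Hypothetical filter illustrating how an archive crawl might flag
# candidate spreadsheet responses before full parsing and deduplication.
SPREADSHEET_EXTENSIONS = (".xls", ".xlsx", ".xlsm")
SPREADSHEET_MIME_TYPES = {
    "application/vnd.ms-excel",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
}

def looks_like_spreadsheet(url, content_type=""):
    """Return True if the URL or HTTP Content-Type suggests a spreadsheet."""
    path = url.split("?", 1)[0].lower()
    if path.endswith(SPREADSHEET_EXTENSIONS):
        return True
    # Content-Type headers may carry parameters, e.g. "...; charset=utf-8".
    return content_type.split(";", 1)[0].strip().lower() in SPREADSHEET_MIME_TYPES
```

Checking the server's Content-Type as well as the URL matters because many spreadsheet-returning URLs, such as export endpoints, carry no file extension at all.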
The lead author of the paper is Kevin Lubick. The abstract of the paper follows:
An effective way to learn about software development tools is by directly observing peers’ workflows. However, these tool knowledge transfer events happen infrequently because developers must be both colocated and available. We explore an online social screencasting system that removes the dependencies of colocation and availability while maintaining the beneficial tool knowledge transfer of peer observation. Our results from a formative study indicate these online observations happen more frequently than in-person observations, but their effects are only temporary. We conclude that while peer observation facilitates online knowledge transfer, it is not the only component; other social factors may be involved.
Developers who use version control are expected to produce systematic commit histories that show well-defined steps with logical forward progress. Existing version control tools assume that developers also write code systematically. Unfortunately, the process by which developers write source code is often evolutionary, or as-needed, rather than systematic. Our contribution is a fragment-oriented concept called Commit Bubbles that will allow developers to construct systematic commit histories that adhere to version control best practices with less cognitive effort, and in a way that integrates with their as-needed coding workflows.
In other words, Commit Bubbles aims to alleviate the “tangled commit” and “non-descriptive commit message” dilemmas that developers routinely encounter when constructing version control commit histories.
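To make the untangling idea concrete, here is a minimal sketch, not Commit Bubbles itself: it assumes edit fragments have already been attributed to tasks (the task labels are hypothetical) and simply groups fragments so that each task yields one focused commit rather than one tangled commit spanning all tasks.

```python
from collections import defaultdict

def group_into_commits(fragments):
    """Group (task_label, file_path, hunk) tuples into one commit per task.

    Each resulting commit contains only the fragments belonging to a single
    logical change, avoiding tangled commits by construction.
    """
    commits = defaultdict(list)
    for task, path, hunk in fragments:
        commits[task].append((path, hunk))
    return dict(commits)
```

The hard part, which Commit Bubbles addresses at the IDE level, is obtaining that task attribution from an as-needed coding workflow in the first place; the grouping itself is then straightforward.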
Self-explanation is one cognitive strategy through which developers comprehend error notifications. Self-explanation, when left solely to developers, can result in a significant loss of productivity because humans are imperfect and bounded in their cognitive abilities. We argue that modern IDEs offer limited visual affordances for aiding developers with self-explanation, because compilers do not reveal their reasoning about the causes of errors to the developer.
The contribution of our paper is a foundational set of visual annotations that aid developers in better comprehending error messages when compilers expose their internal reasoning. We demonstrate through a user study of 28 undergraduate Software Engineering students that our annotations align with the way in which developers self-explain error notifications. We show that these annotations allow developers to give significantly better self-explanations when compared against today’s dominant visualization paradigm, and that better self-explanations yield better mental models of notifications.
The results of our work suggest that the diagrammatic techniques developers use to explain problems can serve as an effective foundation for how IDEs should visually communicate to developers.
Error notifications, as presented by modern integrated development environments, are cryptic and confusing to developers. My dissertation research will demonstrate that modifying production compilers to expose detailed semantics about compilation errors is feasible, and that these semantics can be leveraged through diagrammatic representations using visual overlays on the source code to significantly improve compiler error notification comprehension.
Error notifications and their resolutions, as presented by modern IDEs, are still cryptic and confusing to developers. We propose an interaction-first approach to help developers more effectively comprehend and resolve compiler error notifications through a conceptual interaction framework. We propose novel taxonomies that can serve as controlled vocabularies for compiler notifications and their resolutions. We use preliminary taxonomies to demonstrate, through a prototype IDE, how the taxonomies make notifications and their resolutions more consistent and unified.
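To illustrate what a controlled vocabulary might look like inside an IDE, and emphatically not to reproduce the paper's actual taxonomies, the sketch below encodes hypothetical notification kinds and candidate resolutions as fixed enumerations, so every notification and resolution the IDE presents is drawn from a closed, consistent set.

```python
from enum import Enum

# Hypothetical controlled vocabularies, for illustration only.
class NotificationKind(Enum):
    MISSING_SYMBOL = "missing symbol"
    TYPE_MISMATCH = "type mismatch"
    SYNTAX = "syntax"

class Resolution(Enum):
    ADD_IMPORT = "add import"
    RENAME = "rename identifier"
    INSERT_TOKEN = "insert token"
    CHANGE_TYPE = "change type"

# A fixed mapping from notification kinds to candidate resolutions, so the
# IDE can surface the same resolution options for the same kind of error.
CANDIDATE_RESOLUTIONS = {
    NotificationKind.MISSING_SYMBOL: [Resolution.ADD_IMPORT, Resolution.RENAME],
    NotificationKind.TYPE_MISMATCH: [Resolution.CHANGE_TYPE],
    NotificationKind.SYNTAX: [Resolution.INSERT_TOKEN],
}
```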
This paper describes the development of subsymbolic ACT-R models for the Concentration game. Performance data is taken from an experiment in which participants played the game under two conditions: minimizing the number of mismatches/turns during a game, and minimizing the time to complete a game. Conflict resolution and parameter tuning are used to implement an accuracy model and a speed model that capture the differences for the two conditions. Visual attention drives exploration of the game board in the models. Modeling results are generally consistent with human performance, though some systematic differences can be seen. Modeling decisions, model limitations, and open issues are discussed.
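As a toy illustration of the accuracy trade-off these models capture, and not the ACT-R models themselves, the following simulates Concentration with a player whose memory for revealed cards is probabilistic. The `recall_prob` parameter and the greedy matching strategy are assumptions chosen for illustration.

```python
import random

def play(n_pairs, recall_prob, rng):
    """Play one game of Concentration; return the number of turns taken.

    The player remembers each revealed card with probability recall_prob
    and flips a remembered pair whenever one exists. This greedy strategy
    is an illustrative assumption, not the paper's ACT-R mechanism.
    """
    board = list(range(n_pairs)) * 2
    rng.shuffle(board)
    unmatched = set(range(2 * n_pairs))
    memory = {}  # position -> remembered card value
    turns = 0
    while unmatched:
        turns += 1
        first = second = None
        # If two unmatched positions with the same remembered value exist,
        # flip that known pair.
        for pos in sorted(unmatched):
            if pos not in memory:
                continue
            partner = next((p for p in sorted(unmatched)
                            if p != pos and memory.get(p) == memory[pos]), None)
            if partner is not None:
                first, second = pos, partner
                break
        if first is None:
            # Otherwise flip a random card; if its partner is remembered,
            # flip the partner, else flip another random card.
            first = rng.choice(sorted(unmatched))
            partner = next((p for p in sorted(unmatched)
                            if p != first and memory.get(p) == board[first]), None)
            second = partner if partner is not None else rng.choice(
                sorted(unmatched - {first}))
        # Each flipped card is committed to memory only probabilistically.
        for pos in (first, second):
            if rng.random() < recall_prob:
                memory[pos] = board[pos]
        if board[first] == board[second]:
            unmatched -= {first, second}
    return turns
```

Averaged over many games, a player with perfect recall clears the board in far fewer turns than one with no recall, coarsely mirroring how the accuracy condition rewards minimizing mismatches.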