European Conference on Object-Oriented Programming Retrospective

Through my research in HCI and software engineering tools, I typically make the annual rounds at conferences such as ICSE, FSE, and VL/HCC. Attending ECOOP 2016 was therefore well outside my comfort zone, but it was worthwhile: it gave me exposure to an otherwise enigmatic area of Computer Science.

As a first-timer to ECOOP, I expected the conference to be almost entirely about programming language (PL) theory (or, as we call it in HCI, the land of Greek letters). I was surprised, however, to find workshops on the usability aspects of programming languages. I attended two such workshops:

  • LIVE, a workshop on live programming systems that “abandon the traditional edit-compile-run cycle in favor of fluid user experiences.”
  • Grace, a workshop on the emerging Grace programming language. Originating at ECOOP in 2010, the Grace programming language is designed to allow novices to discover object-oriented programming in simpler ways.

To gain exposure to new ideas, I also attended ICOOOLPS, a workshop on compiler optimization and performance for object-oriented programming.

Although my own research community and ECOOP have relatively little intersection, through these smaller workshops I quickly met new colleagues, including James Noble, Michael Kölling, and my own PhD advisor’s advisor, Andrew Black.

An aspect of ECOOP that I particularly appreciated was the morning breakfast sessions, where students like myself were paired with faculty members to learn more about ECOOP research. I took full advantage of these sessions and introduced myself to a new faculty member each day for breakfast: Matthias Felleisen, Tobias Wrigstad, Laurence Tratt, and Jan Vitek.

Another highlight of the conference was the ECOOP Summer School. The lecturers for these talks made a significant effort to provide a gentle introduction to programming language theory and to explain the types of problems researchers in PL study. One of the more memorable lectures was a hands-on session by Laurence Tratt and Carl Friedrich Bolz, where we worked on our laptops to implement a JIT in Python.

ECOOP Summer School

Thanks again to the NSF for providing this amazing opportunity.

Thesis proposal: How should static analysis tools explain anomalies to developers?

Eclipse Explanations

On April 26, 2016, I presented my thesis proposal to a committee of five members: Dr. Emerson Murphy-Hill (Chair), Dr. Jing Feng (Graduate School Representative), Dr. Shriram Krishnamurthi (External Member), Dr. James Lester, and Dr. Christopher Parnin.

I received a conditional pass. A conditional pass means that a formal re-examination is not required, but that the committee expects additional revisions before approving the proposal.

I suspect that there are some students who do not even realize that they have received a conditional pass, since the event does not seem to be recorded anywhere that is student-accessible.

In the weeks that followed, I made several revisions to the thesis proposal document, incorporating feedback from the presentation:

  1. The committee reduced the scope of required experiments from five to three.
  2. The committee added a new requirement that I conduct a systematic literature review on static analysis notification techniques.
  3. I added a thesis contract to explicitly state the dissertation deliverables.

On May 11, 2016, I submitted the revised proposal to the committee.

On May 20, 2016, I was notified that the committee had approved the revisions.

Although some students prefer to keep their thesis proposal secret until graduation, I have made the proposal and presentation materials available so that they may help other students in structuring their own proposals:

Abstract

Despite the advanced static analysis tools available within modern integrated development environments (IDEs) for detecting anomalies, the error messages these tools produce to describe those anomalies remain difficult for developers to comprehend. This thesis postulates that tools can computationally expose their internal reasoning processes to generate assistive error explanations in a way that approximates how developers explain errors to other developers and to themselves. Compared with baseline error messages, these error explanations significantly enhance developers’ comprehension of the underlying static analysis anomaly. The contributions of this dissertation are: 1) a theoretical framework that formalizes explanation theory in the context of static analysis anomalies, 2) a set of experiments that evaluate the extent to which evidence supports the theoretical framework, and 3) a proof-of-concept IDE extension, called Radiance, that applies my identified explanation-based design principles and operationalizes them into a usable artifact. My work demonstrates that tools stand to benefit significantly if they incorporate explanation principles into their design.

Residency

I’ve been approved for in-state tuition rates at North Carolina State University starting this semester.

The review of your application and supporting documentation has been completed. I am pleased to advise that you have been reclassified as in-state for tuition purposes effective Spring 2012 semester.

At least something was accomplished this week.

Scrabblesque, A Research Game

Our research study on Scrabblesque has gone live. Please take a few moments to play the game. You may even find it fun!

It turns out that even “simple” word games like Scrabble have a lot of inherent complexity. In the context of artificial intelligence, creating computer opponents that are believable is a fundamentally different problem than creating computer opponents that are optimal. In many ways, it is much more difficult, given that humans are not particularly rational in the first place.
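One common way to make an opponent believable rather than optimal is to sample moves in proportion to their quality instead of always taking the best one. The sketch below is purely illustrative (the function name, move representation, and temperature value are my own assumptions, not part of Scrabblesque): a softmax over candidate word scores, where a higher temperature produces more human-like variability and a temperature near zero approaches optimal play.

```python
import math
import random

def choose_move(moves, temperature=5.0):
    """Pick a (word, score) candidate with probability proportional
    to exp(score / temperature).

    moves: list of (word, score) pairs from some move generator.
    Higher temperature -> more variability (more "human");
    temperature near zero -> nearly always the top-scoring word.
    """
    weights = [math.exp(score / temperature) for _, score in moves]
    word, _score = random.choices(moves, weights=weights, k=1)[0]
    return word
```

For example, with a low temperature such as 0.5, the highest-scoring word dominates the distribution and is chosen almost every time; raising the temperature lets lower-scoring, plausible words through.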

The game also allows us to cognitively model player actions. There are many interesting questions in this domain, of which I list only a small subset:

  • When and why do people shuffle before selecting their final word?
  • Do people change their word selection before submitting?
  • When do people swap tiles?
  • How often do people play optimal words?
  • What are the time intervals between word selections?
  • Can a human player identify whether their opponent is a human player or a computer?

Even data concerning user interfaces in games is not easily available. For instance, we may wish to know how often a player makes an error due to a button being too small or placed in an awkward location. Scrabblesque allows us to perform this kind of HCI analysis.
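One way such an analysis might work is to classify each click against the on-screen button geometry, distinguishing direct hits from near misses that land just outside a target. This is only a sketch of the idea, not Scrabblesque’s actual instrumentation; the class names, the `slop` margin, and the labels are all my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w and
                self.y <= py < self.y + self.h)

def classify_click(buttons, px, py, slop=8):
    """Label a click at (px, py).

    Returns (button_name, "hit") for a direct hit,
    (button_name, "near_miss") if the click lands within `slop`
    pixels of a button's edge, or (None, "miss") otherwise.
    Aggregating near-miss counts per button suggests which targets
    may be too small or awkwardly placed.
    """
    for b in buttons:
        if b.contains(px, py):
            return b.name, "hit"
    for b in buttons:
        if (b.x - slop <= px < b.x + b.w + slop and
                b.y - slop <= py < b.y + b.h + slop):
            return b.name, "near_miss"
    return None, "miss"
```

A button with an unusually high near-miss rate relative to its hit rate would be a candidate for enlarging or repositioning.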

So please help out and put some words on the board!
