Reading assignment for ESE 2013 course

Dates

  • Assignment posted on 25 April 2013.
  • Student presentations due on 30 April 2013.

Rationale

Students are expected to study ESE papers at an early stage of the course to develop their skills in observing, extracting, and evaluating research methods. The course will continue to introduce specific research methods in more detail, but this assignment provides a "casual" means of briefly surveying diverse ESE topics. It will also give us a better sense of what ESE is generally concerned with these days.

Assignment

Read at least one paper from the list provided below and prepare a short talk (2-5 minutes) summarizing some elements of empirical research in the chosen paper. You do not need to address all the items listed below; some may well be irrelevant for the paper at hand or simply too hard to determine. See the introductory lecture of this course for the terms used in the following checklist. You are also welcome to discuss other aspects of the paper.

  • Is an established research method used? (Controlled experiment, case study, survey research, …)
  • Does the paper state research questions? (Which?)
  • What kind of research questions are described? (Existence, description, classification, …)
  • What data collection techniques are used? (Questionnaire, program analysis, IDE instrumentation, performance measurement, …)
  • Does the paper use hypotheses? (Which?)
  • Does the paper describe a research theory? (What elements?)
  • Does the paper discuss threats to validity? (Which?)
  • Does the paper appear to present reproducible research?

Please try to embed all your findings in a very short talk that gives the audience enough of an idea of the paper's research domain and problem. Given the time constraints (for preparing and presenting), your presentation will be somewhat superficial. That is OK; we will get to more in-depth discussions later. You are also not expected to read and understand the chosen paper in full detail. Diagonal reading (1-2 hours) should be sufficient in many cases.

Papers for reading exercise

All proposed papers originate from the ICSE 2012 proceedings.

Table of contents available at DBLP:

http://www.informatik.uni-trier.de/~ley/db/conf/icse/icse2012.html

Table of contents and papers available at ACM DL:

http://dl.acm.org/citation.cfm?id=2337223

Students of this university have free ACM DL access.

ACM DL access only works from inside the university network, so download your paper while on the network. Most papers are also freely available elsewhere and are easily located via Bing or Google.

  • Sound empirical evidence in software testing
  • Combining functional and imperative programming for multicore software: an empirical study evaluating Scala and Java
  • The impacts of software process improvement on developers: a systematic review
  • A systematic study of automated program repair: fixing 55 out of 105 bugs for $8 each
  • Where should the bugs be fixed? - more accurate information retrieval-based bug localization based on bug reports
  • WhoseFault: automatic developer-to-fault assignment through fault localization
  • Recovering traceability links between an API and its learning resources
  • Characterizing logging practices in open-source software
  • Uncovering performance problems in Java applications with reference propagation profiling
  • Performance debugging in the large via mining millions of stack traces
  • Predicting performance via automated feature-interaction detection
  • Privacy and utility for defect prediction: experiments with MORPH
  • Bug prediction based on fine-grained module histories
  • Reconciling manual and automatic refactoring
  • WitchDoctor: IDE support for real-time auto-completion of refactorings
  • Use, disuse, and misuse of automated refactorings
  • Test confessions: a study of testing practices for plug-in systems
  • How do professional developers comprehend software?
  • Asking and answering questions about unfamiliar APIs: an exploratory study
  • Leveraging test generation and specification mining for automated bug detection without false positives
  • Axis: automatically fixing atomicity violations through solving control constraints
  • Content classification of development emails
  • Integrated impact analysis for managing software changes
  • An empirical study about the effectiveness of debugging when random test cases are used
  • Disengagement in pair programming: does it matter?
  • Development of auxiliary functions: should you be agile? an empirical assessment of pair programming and test-first programming
  • Static detection of resource contention problems in server-side scripts
  • Amplifying tests to validate exception handling code
  • MagicFuzzer: scalable deadlock detection for large-scale applications
  • Does organizing security patterns focus architectural choices?
  • Enhancing architecture-implementation conformance with change management and support for behavioral mapping
  • A tactic-centric approach for automating traceability of quality concerns
  • Build code analysis with symbolic evaluation
  • Synthesizing API usage examples
  • Semi-automatically extracting FAQs to improve accessibility of software development knowledge
  • Temporal analysis of API usage concepts
  • Inferring method specifications from natural language API descriptions
  • Automatic parameter recommendation for practical API usage
  • Statically checking API protocol conformance with mined multi-object specifications
  • Characterizing and predicting which bugs get reopened
  • Understanding the impact of pair programming on developers attention: a case study on a large industrial experimentation
  • How much does unused code matter for maintenance?