Flakiness, or non-deterministic behavior of builds and tests, is a major and continuing impediment to software testing. The Flaky Tests Workshop (FTW) aims to bring together industry professionals and academics to discuss the wide-ranging effects of test flakiness, propose solution approaches, report on the industrial state of practice, and present empirical studies as well as case studies.
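For readers new to the topic, the sketch below (a hypothetical Python test, not part of the workshop materials) illustrates one common source of flakiness: an assertion that assumes an ordering the code under test never guarantees, so the same test can pass or fail across runs.

```python
import random
import unittest


class TransactionHistoryTest(unittest.TestCase):
    """Hypothetical example of a flaky (order-dependent) test."""

    def fetch_recent_transactions(self):
        # Stand-in for a call whose result order is not guaranteed.
        transactions = ["refund", "purchase", "transfer"]
        random.shuffle(transactions)
        return transactions

    def test_recent_transactions(self):
        # Flaky: assumes a specific order that the stand-in never promises,
        # so this assertion passes on some runs and fails on others.
        self.assertEqual(self.fetch_recent_transactions(),
                         ["purchase", "refund", "transfer"])


if __name__ == "__main__":
    unittest.main()
```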
Status
The International Flaky Tests Workshop (FTW) returns, and we would love to invite you and your colleagues to submit your work on test flakiness and non-determinism in testing. Like last year, we are co-located with ICSE and we are offering two submission formats:
- Extended abstract (max. 2 pages including references): New ideas, problems and challenges, viewpoints, work in progress.
- Short paper (max. 8 pages including references): Technical research, experience reports, empirical studies.
The submission deadline is November 11th, 2024. You can find further information about the submission process on the FTW website. Further details about the workshop’s agenda will follow soon. We look forward to your participation!
Organizing Committee
Owain Parry is an early-career researcher working on flaky tests. He has published multiple papers in this field, including an extensive systematic literature review on the topic, a developer survey, and a machine learning-driven technique for detecting flaky tests.
Martin Gruber is a Software Engineer at the BMW Group, where he works on software quality and testing in CI environments. He is also currently finalizing his dissertation on the understanding and mitigation of test flakiness.
August Shi is an Assistant Professor in the Chandra Family Department of Electrical and Computer Engineering at the University of Texas at Austin. August has worked extensively in the area of flaky test research, publishing work on detecting, debugging, and repairing flaky tests. For his work on flaky tests, August has been awarded a SIGSOFT Outstanding Doctoral Dissertation Award and an NSF CAREER Award. August received his PhD in Computer Science from the University of Illinois at Urbana-Champaign.
Wing Lam is an Assistant Professor in the Computer Science department at George Mason University. Dr. Lam works on several topics in software engineering, with a focus on software testing. His research improves software dependability by characterizing bugs and developing novel techniques to detect and tame them. Wing has published in top-tier conferences, such as ESEC/FSE, ICSE, ISSTA, OOPSLA, and TACAS. His techniques have helped detect and fix bugs in open-source projects and have impacted how Dragon Testing, Microsoft, and Tencent developers test their code. Wing has been the recipient of several awards, including an ACM SIGSOFT Outstanding Doctoral Dissertation Award and an NSF CAREER Award. More information is available on his web page.
Steering Committee
Tim A. D. Henderson is a Staff Software Engineer at Google working on Google’s proprietary CI platform, the Test Automation Platform (TAP). Dr. Henderson specializes in testing, fault localization, semantic code clone detection, program analysis, graph mining, and databases.
Gordon Fraser is a full professor in Computer Science at the University of Passau, Germany. He received a PhD in computer science from Graz University of Technology, Austria, in 2007, worked as a post-doc at Saarland University, and was a Senior Lecturer at the University of Sheffield, UK. The central theme of his research is improving software quality, and his recent research concerns the prevention, detection, and removal of defects in software. He has been facing flaky tests in particular since the creation of the automated test generation tool EvoSuite. He has chaired major software engineering conferences (e.g., ASE, ISSTA, ICST) and workshops (e.g., Mutation, A-MOST, AST, TAIC-PART, CSI-SE, CSTVA, Gamify).
Phil McMinn has been a Lecturer in the Computer Science department at the University of Sheffield since 2006. He was awarded his PhD in 2005, which was funded by DaimlerChrysler Research and Technology. He has published several papers in the field of search-based testing. His research interests cover software engineering, with a particular focus on software testing, program transformation, and agent-based systems and modelling. His research has been funded by the UK Engineering and Physical Sciences Research Council (EPSRC) to work on reducing the oracle costs of testing, testing techniques for agent-based systems, and the automatic reverse engineering of state machine descriptions from software.