ICSE 2011 Technical Briefings

Requirements Traceability in Software Intensive Systems

Software requirements traceability provides support for essential software and systems engineering activities, such as requirements validation, impact analysis, architectural preservation, compliance verification, and regression testing. Unfortunately, despite its conceptual simplicity and advances in both research and practice, traceability remains arduous, costly, and error-prone to implement in projects of non-trivial size.

In this presentation, Dr. Jane Cleland-Huang and Dr. Jane Huffman Hayes will describe the current state of traceability practice, including best practices and challenges, and will highlight current areas of traceability research that show significant promise for addressing these challenges. They will also describe several short- and long-term traceability research goals identified by members of the Center of Excellence for Software Traceability. These goals define a research agenda that is expected to propel traceability research and practice forward over the next 5-10 years.
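Much of the promising research in this space casts trace-link recovery as an information-retrieval problem: rank candidate links between requirements and code by textual similarity, then let an analyst vet the top results. A minimal sketch of this idea (not the presenters' tooling; all artifact names and texts below are invented) using TF-IDF vectors and cosine similarity:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of token lists."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(d).items()} for d in docs]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical artifacts: requirement texts and code-element descriptions.
reqs = {"R1": "user login password authentication",
        "R2": "export report pdf format"}
code = {"AuthService": "validate user password login session",
        "ReportWriter": "write report pdf export file"}

docs = [t.split() for t in list(reqs.values()) + list(code.values())]
vecs = tfidf_vectors(docs)
rvecs = dict(zip(reqs, vecs[:len(reqs)]))
cvecs = dict(zip(code, vecs[len(reqs):]))

# Rank candidate links per requirement; an analyst vets the top hits.
links = {r: max(cvecs, key=lambda c: cosine(rvecs[r], cvecs[c])) for r in reqs}
print(links)  # {'R1': 'AuthService', 'R2': 'ReportWriter'}
```

Real trace-recovery tools add thresholding, relevance feedback, and link maintenance on top of this basic ranking step.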

Finally, the presenters will demonstrate the alpha release of TraceLab 1.0, a novel tool for supporting comparative experiments and benchmarking in the traceability research community. TraceLab is funded under the US National Science Foundation Major Research Instrumentation Grant # CNS 0959924.

Jane Cleland-Huang is an associate professor in the School of Computing at DePaul University, where she serves as the director of the Systems and Requirements Engineering Center. She received her PhD degree in computer science from the University of Illinois at Chicago. Her research interests focus on software and systems traceability, with an emphasis on the application of machine learning and information retrieval methods to automate the creation and maintenance of traceability links. She currently serves as the North American Director of the International Center of Excellence for Software Traceability. Dr. Cleland-Huang has served as Principal Investigator on research grants funded at over $3.5 million, including the NSF CAREER award and a $2 million NSF Major Research Instrumentation award. She has engaged in technology transfer projects with Siemens Corporate Research, Microsoft, Lockheed Martin, and S2ERC (the Software and Security Engineering Research Consortium). Dr. Cleland-Huang currently serves as Associate Editor for IEEE Transactions on Software Engineering, and is on the editorial board of the Requirements Engineering journal and the steering committee of the International Requirements Engineering Conference. She is a member of the IEEE Computer Society and IEEE Women in Engineering.

Jane Huffman Hayes holds a Ph.D. in Information Technology and an M.S. in Computer Science and has over 16 years of experience performing and managing IV&V on mission- and safety-critical software programs as a Senior Member of the Technical Staff and Operation Manager for Science Applications International Corporation (SAIC). Dr. Hayes has been the PI, Co-PI, or Assistant PI on many research grants, including improving the state of the art in verification and validation of conventional and knowledge-based software for nuclear power plants (funded by the U.S. NRC and EPRI). Dr. Hayes was the requirements specifier and chief architect for several projects at SAIC. Dr. Hayes has publications on numerous topics, including fault-based analysis for requirements and requirements tracing. She serves on the editorial board of the Journal of Software Testing, Verification and Reliability and serves as a reviewer for numerous archival publications, including the IEEE Transactions on Software Engineering. She is on the advisory board of the Traceability in Emerging Forms of Software Engineering (TEFSE) workshop, and has served on the program committees of many conferences, including the IEEE International Conference on Requirements Engineering. She is one of the founding members, and the current elected Director, of the Center of Excellence for Software Traceability.

Studying Software Engineering as a Human Activity

Many researchers are interested in issues surrounding software developers’ work, like individual productivity and team communication. However, studying these issues often requires research techniques that are typically not taught in software engineering curricula but are more common in fields like human-computer interaction, psychology, and the social sciences. The goal of our technical briefing is to provide a beginner’s guide to studying software engineering as a human activity. We’ll provide a quick survey of relevant techniques, including discount usability methods, data mining, interviews and surveys, grounded theory, ethnographies, and controlled lab studies.

The bulk of the time will be dedicated to covering three of these techniques in depth, namely discount usability, interviews, and lab studies. Attendees will benefit in several ways. First, they will understand the surveyed techniques well enough to appreciate papers that use the techniques. The survey will include discussions of benefits and limitations, with pointers to exemplary papers that use the techniques. Second, for the techniques covered in detail, they will come away with best practices and have the opportunity to build their skills through interactive learning. Finally, there will be an open-ended questioning period where attendees can pick the brains of the presenters and other experienced attendees.

Robert DeLine is a principal researcher at Microsoft Research, working at the intersection of software engineering and human-computer interaction. His research group designs development tools in a user-centered fashion: they conduct studies of development teams to understand their work practice and prototype tools to improve that practice. He received his PhD from Carnegie Mellon University in 1999 and his BS/MS from the University of Virginia in 1993.

Emerson Murphy-Hill is an assistant professor at North Carolina State University. By conducting formative studies, building tools based on the findings, and then evaluating the effect that those tools have on software developers’ work, his research aims to bridge the gap between the capabilities of tools and how software developers actually use them. He received his PhD from Portland State University in 2009 and his BS from The Evergreen State College in 2001.

Optimising Software Testing

Techniques for optimising software testing have received a great deal of interest in the past ten years, leading to a coherent body of work with important achievements and exciting prospects. This technical briefing will provide an introduction to recent advances in optimisation techniques for software testing. The primary focus of the briefing will be on Search Based Software Engineering (SBSE) for Software Testing. The briefing will include an overview of the underlying technologies, making it self-contained. It will present recent advances, results, and directions for further work in Search Based Optimisation for several areas of Software Testing, including Automated Test Data Generation, Regression Testing, Mutation Testing, Configuration Testing, Integration Testing, and Temporal Testing.
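To give a flavour of the search-based approach (a toy sketch, not any specific tool): automated test data generation typically minimises a fitness function such as the branch distance, which measures how far an input is from taking a target branch. Here a simple hill climber searches for inputs covering a hypothetical branch `if x == 2 * y + 10`:

```python
def branch_distance(x, y):
    """Fitness for covering the branch `if x == 2 * y + 10`:
    0 when the branch is taken, larger the farther away we are."""
    return abs(x - (2 * y + 10))

def hill_climb(x=87, y=-34, steps=1000):
    """Greedy hill climbing over +/-1 neighbourhood moves."""
    best = branch_distance(x, y)
    for _ in range(steps):
        if best == 0:
            break  # target branch covered
        nbrs = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        nx, ny = min(nbrs, key=lambda p: branch_distance(*p))
        d = branch_distance(nx, ny)
        if d >= best:
            break  # local optimum (cannot happen here: distance is convex)
        x, y, best = nx, ny, d
    return (x, y), best

inputs, fitness = hill_climb()
print(inputs, fitness)  # fitness 0 means the target branch is covered
```

Real SBSE tools use richer representations and search algorithms (genetic algorithms, alternating variable method), but the fitness-guided search loop is the same.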

Mark Harman is Professor of Software Engineering in the Department of Computer Science at University College London, where he is the director of the CREST centre. He is widely known for work on source code analysis and testing, and he was instrumental in founding the field of Search Based Software Engineering (SBSE). He has given 18 invited keynote talks on SBSE, source code analysis, and testing, and is the author of over 170 refereed publications on these topics. He serves on the editorial boards of 7 international journals and has served or will serve on the programme committees of 110 conferences (including ISSTA, ICST, ICSE and FSE).

Software Engineering for Secure Systems

New security threats to, and vulnerabilities in, software systems emerge almost daily. As security-critical systems have become commonplace, the last decade has seen the emergence of a wide range of software engineering approaches targeted at addressing these threats and vulnerabilities. In this technical briefing, we provide an overview of developments in software engineering targeted particularly at secure software-intensive systems. We discuss the current state of the art and practice in terms of the foundations, techniques, tool support, and industrial applications of secure software engineering, then consider promising research developments. A particular emphasis in our outlook will be on security requirements engineering and on model-based security. We discuss these in the context of secure system evolution, as software systems become increasingly long-lived and undergo changes throughout their lifetime. We also present some more recent software engineering research challenges for achieving adaptive security, when systems and their environment change rapidly and in unexpected ways.

Jan Jürjens is Professor for Software Engineering at Technical University Dortmund (Germany), Scientific Coordinator "Enterprise Engineering" at the Fraunhofer Institute for Software and Systems Engineering ISST (Dortmund), and Senior Member of Robinson College (Univ. Cambridge, UK). He is Scientific Director of an Integrated Project financed by the EU. He has been PI of various projects, often in cooperation with industry (e.g. Microsoft Research (Cambridge)). Previous positions include a Senior Lectureship at the Open University (UK), a Royal Society Industrial Fellowship at Microsoft Research Cambridge, and a non-stipendiary Research Fellowship at Robinson College (Univ. Cambridge). Jan holds a Doctor of Philosophy in Computing from the University of Oxford and is the author of "Secure Systems Development with UML" (Springer, 2005; Chinese translation 2009) and various publications mostly on software engineering and IT security, totaling over 2000 citations.

Bashar Nuseibeh is Professor of Software Engineering and Chief Scientist at Lero - the Irish Software Engineering Research Centre, and Professor of Computing and former Director of Research in Computing at The Open University (OU). He is a Visiting Professor at Imperial College London and the National Institute of Informatics, Japan. Earlier in his career he was a Reader at Imperial College and Head of its Software Engineering Laboratory. His research interests are in software requirements and design, security and privacy, process modelling and technology, and technology transfer. He has published over 150 refereed papers and consulted widely with industry, working with organisations such as the UK National Air Traffic Services (NATS), Texas Instruments, Praxis Critical Systems, Philips Research Labs, and NASA. He has also served as Principal or Co-Investigator on a number of research projects on software engineering, security engineering, and learning technologies.

Mining Software Engineering Data

Software engineering data (such as code bases, execution traces, historical code changes, mailing lists, and bug databases) contains a wealth of information about a project’s status, progress, and evolution, as well as knowledge about a software system’s usage in the field. Using well-established data mining techniques, practitioners and researchers can explore the potential of this valuable data in order to better manage their projects and to produce higher quality software systems that are delivered on time and within budget. This briefing builds on the popularity of the MSR field and will help increase awareness of this important and promising field.

The main goals of the briefing:

  1. Present Success Stories
    We will present success stories of the use of data mining techniques to solve software engineering problems. We will highlight areas which have gained industrial adoption and will present our thoughts on possible future success stories. Attendees will gain a valuable appreciation of the potential of techniques for mining software engineering data to improve and support research results throughout software engineering.

  2. Highlight Current Challenges
    We will highlight current challenges and present solutions in mining software engineering data, including challenges associated with data quality and with linking data across repositories. Attendees will get a sampler of the common solutions and techniques used in the field of mining software engineering data.

  3. Outline Future Research Directions
    We will outline open challenges in the field of mining software engineering data. Attendees will get pointers to possible research topics that they can explore and possible ways to start working in this exciting field.

Our briefing extends and builds on the first part of our prior ICSE 2007-2010 tutorials. The slides are online at: http://research.cs.queensu.ca/~ahmed/home/teaching/CISC880/F10/slides/dmse-icse10-tutorial.pdf. The briefing will not discuss the peculiarities of applying data mining techniques to software engineering data; instead, we will primarily highlight the latest results and success stories while describing the challenges in repository data extraction and linking. Dr. Tim Menzies is proposing a briefing that tackles those peculiarities.
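As a small taste of the kind of analysis the briefing covers (a toy sketch over an invented commit history, not the presenters' tooling), logical coupling between files can be mined by counting how often they change together in the version history:

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit history: each commit lists the files it touched.
commits = [
    {"parser.c", "lexer.c"},
    {"parser.c", "lexer.c", "ast.c"},
    {"ui.c"},
    {"parser.c", "lexer.c"},
    {"ast.c", "ui.c"},
]

pair_counts = Counter()  # how often a pair of files co-changed
file_counts = Counter()  # how often each file changed at all
for files in commits:
    file_counts.update(files)
    pair_counts.update(frozenset(p) for p in combinations(sorted(files), 2))

def confidence(a, b):
    """Fraction of changes to `a` that were accompanied by a change to `b`."""
    return pair_counts[frozenset((a, b))] / file_counts[a]

print(confidence("lexer.c", "parser.c"))  # 1.0: they always change together
print(confidence("ui.c", "ast.c"))        # 0.5
```

High-confidence pairs flag hidden dependencies: a developer changing one file without the other may be introducing a bug.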

Ahmed E. Hassan is the NSERC RIM Industrial Research Chair in Software Engineering of Ultra Large Scale Systems. He is currently at the School of Computing at Queen’s University in Canada. He received both the Ph.D. and MMath degrees from the School of Computer Science at the University of Waterloo in Canada. His research interests include mining software engineering data, performance engineering, and distributed fault-tolerant systems. Dr. Hassan spent the early part of his career (5 years) helping architect the BlackBerry wireless platform at Research In Motion (RIM). He contributed to the development of protocols, simulation tools, and software to ensure the scalability and reliability of RIM’s global infrastructure. Dr. Hassan spearheaded the organization and creation of the Mining Software Repositories (MSR) workshop series (http://msrconf.org/) at ICSE and its associated research community. He recently co-edited a special issue of the IEEE Transactions on Software Engineering (TSE) on MSR. He is currently the chair of the steering committee for the MSR working conference, the largest co-located event with ICSE over the past seven years. He served as the program chair for WCRE 2008, and presented 3-hour tutorials on Mining Software Engineering Data, with Dr. Tao Xie, at ICSE 2007-2010.

Tao Xie received his Ph.D. in Computer Science from the University of Washington at Seattle in 2005, and has been an Assistant Professor in the Department of Computer Science at North Carolina State University since 2005. His research interests are in automated software testing and mining software engineering data. He has served as ACM SIGSOFT history liaison on the SIGSOFT Executive Committee. He received a National Science Foundation Faculty Early Career Development (CAREER) Award in 2009, 2008 and 2009 IBM Faculty Awards, and a 2008 IBM Jazz Innovation Award. He was Program Co-Chair of the 2009 IEEE International Conference on Software Maintenance (ICSM), as well as Student Papers Track Program Co-Chair of ICST 2008 and Tutorial Co-Chair of ASE 2009. He has served on the program committees of various conferences and workshops, including ICSE, ASE, ISSTA, and WWW. He co-organized WebTest 2009 and TAV-WEB 2008, a Dagstuhl Seminar on Mining Programs and Processes in 2007, and a Dagstuhl Seminar on Practical Software Testing: Tool Automation and Human Factors.

He co-presented tutorials (with Ahmed E. Hassan) on Mining Software Engineering Data, 3 hours, at ICSE 2007, 2008, and 2009; a tutorial (with Nikolai Tillmann and Jonathan ‘Peli’ de Halleux) on Parameterized Unit Testing, 3 hours, at ICSE 2009; a tutorial (with Chao Liu and Jiawei Han) on Mining for Software Reliability, 5 hours, at ICDM 2007; and a tutorial (with Jian Pei) on Data Mining for Software Engineering, 3 hours, at KDD 2006.

Towards Industrialization of Business Application Development Using a Model-driven Approach

We discuss our experience in using model-driven techniques to build large business applications on a variety of architectures and technology platforms. Our foray into model-driven techniques began in the mid-90s, when our organization decided to offer custom offerings in the banking domain that were to be capable of being delivered on multiple technology platforms and of easily keeping pace with technological advances. We began by developing a set of modeling notations to specify different architectural layers of the application and a set of code generators that transform these models into an implementation. Separating business functionality from technological concerns, together with model-based code generation, resulted in significant productivity and quality gains. Modeling of workspaces and a role-based process enabled a large team to coordinate application development effectively, leading to a significant reduction in integration effort. Encouraged by these benefits, many large development projects also readily adopted the model-driven approach despite the high initial investment in learning how to model. Enthusiastic, and somewhat unexpected, acceptance of our approach led to the ironic situation of the productivity toolset team becoming a bottleneck. We overcame this problem through the use of product line techniques, modeling the code generators as a family and automatically generating a purpose-specific implementation therefrom. During the past few years, we have extended the family concept to the generated applications themselves, thus enabling easy configurability and extensibility. We are currently experimenting with suitable adaptations of agile methodologies for the managed evolution of large application families.

The talk will cover the above journey, experiences, lessons learnt, and a way forward for model-oriented software engineering as we see it.
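As a much-simplified illustration of the core idea behind the journey above (not TCS's actual toolset; the model format here is invented), model-based code generation transforms a declarative, platform-independent model into executable code:

```python
# Hypothetical declarative model of one business entity.
model = {
    "entity": "Account",
    "fields": [("owner", "str"), ("balance", "float")],
}

def generate_class(m):
    """Transform the platform-independent model into Python source.
    A second generator targeting, say, Java would reuse the same model."""
    lines = [f"class {m['entity']}:"]
    args = ", ".join(f"{n}: {t}" for n, t in m["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in m["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

source = generate_class(model)
print(source)

namespace = {}
exec(source, namespace)                      # "deploy" the generated code
acct = namespace["Account"]("alice", 10.0)
print(acct.owner, acct.balance)
```

Because business functionality lives in the model, retargeting to a new platform means writing a new generator, not rewriting the application.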

Vinay Kulkarni is a Principal Scientist at Tata Consultancy Services. His research interests include model-driven software engineering, software product lines, and business process management. His work in model-driven software engineering has led to a toolset that has been used to deliver several large business-critical IT systems over the past 15 years. Vinay has several patents to his credit and has authored several papers in scholarly journals and conferences worldwide. He also led the standardization effort of one of the key OMG MDD standards. He holds a Masters degree in Electrical Engineering from the Indian Institute of Technology, Madras. Vinay serves on the program committees of MoDELS, SEKE, ISEC, ECMFA, and others.

Symbolic Execution and Software Testing

Symbolic execution is a program analysis technique that has become increasingly popular in recent years, due to algorithmic advances and availability of computational power and constraint solving technology.

We review different flavors of symbolic execution, ranging from generalized symbolic execution to dynamic symbolic execution or concolic testing. We also identify challenges to symbolic execution, such as dealing with multi-threading, input data structures, and complex mathematical constraints, as well as scalability challenges due to the path explosion problem. We discuss techniques and tools that address these challenges. Finally we discuss the application of symbolic execution to software testing.
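As a toy illustration of the core idea (not any of the tools discussed): symbolic execution collects a path condition, i.e. a conjunction of branch predicates, for each program path, then asks a constraint solver for concrete inputs satisfying it. Here the "solver" is just a brute-force search over a small integer domain:

```python
from itertools import product

# Path conditions for a toy function:
#   def f(x, y):
#       if x > y:
#           if x + y == 10: error()
# Each path is the list of branch predicates taken along it.
paths = [
    [lambda x, y: x > y, lambda x, y: x + y == 10],   # reaches error()
    [lambda x, y: x > y, lambda x, y: x + y != 10],
    [lambda x, y: not (x > y)],
]

def solve(constraints, domain=range(-20, 21)):
    """A stand-in for a real constraint solver (e.g. an SMT solver):
    brute-force a small domain for a satisfying assignment."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return x, y
    return None  # path condition unsatisfiable over this domain

# One concrete test input per feasible path.
tests = [solve(p) for p in paths]
print(tests)
```

The path explosion problem mentioned above is visible even here: the number of path conditions can grow exponentially with the number of branches, which is why the techniques in this briefing invest so heavily in pruning and merging paths.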

Corina Pasareanu, PhD, is a senior researcher at NASA Ames Research Center, in the Robust Software Engineering Group. She is affiliated with Carnegie Mellon University's Silicon Valley campus. At Ames, she is investigating the use of abstraction and symbolic execution in the context of the Java PathFinder (JPF) model checking tool-set, with applications in test-case generation and error detection. She is also working on automating assume-guarantee compositional verification, using automata learning techniques. Together with her colleagues, she has developed Symbolic PathFinder, a symbolic execution tool for Java bytecode that is built on top of JPF. Symbolic PathFinder has been used at NASA, in academia, and in industry. Corina is an Associate Editor for the ACM TOSEM Journal and is the co-chair of the 26th International Conference on Automated Software Engineering (2011). Corina has published numerous articles in the areas of software engineering and formal methods and has served on program committees for conferences such as ICSE, FSE, ISSTA, CAV, and ASE.

Empirical Software Engineering, Version 2.0

The rapid pace of software development innovation challenges empirical software research to keep up, if it is to deliver actionable and useful results to practitioners. The empirical software engineering research field has not always been able to deliver this. Recently, it has become increasingly apparent that rigorous data collection and analysis can be so expensive and time-consuming that empirical software engineering studies, which seek to understand the costs and benefits of software development solutions in practice, greatly lag the pace of innovation in the field. In too many cases, a trusted body of empirical results can only be built up after the innovative solutions that they are studying are already well on their way to obsolescence or standard practice. However, we argue that recent advances put a sustainable and increased research pace within our reach.

A suitably scaled-up and nimble empirical research approach must be based upon:

  • The “crowdsourcing” of tough empirical problems. Ben Shneiderman advocates Science 2.0: a vast space of web-based data which everyone can analyze, and where anyone might find important new insights.

The growth of the World Wide Web ... continues to reorder whole disciplines and industries. ... It is time for researchers in science to take network collaboration to the next phase and reap the potential intellectual and societal payoffs. [1]

In Science 2.0, the pace of discovery and communication is increased by orders of magnitude over current practice [2]. A Science 2.0 approach to empirical software engineering addresses fundamental weaknesses in contemporary software engineering research.

  • Automated or computer-assisted approaches to data synthesis, analysis, and interpretation.
  • The ability to connect technical issues, data, and results back to the business drivers that affect an organization’s resource availability.
  • Low-cost, non-intrusive ways for:
    • Getting results to practitioners;
    • Allowing practitioners to comment upon and refine the results;
    • Suggesting what practitioners should do with this information.

In this talk, we discuss each of these four areas and the technologies that make each possible, using real results from practice to illustrate the points. We furthermore suggest how these approaches can be used to better share and leverage results across the community of empirical researchers, which is necessary to enable scaling up to the tougher questions already appearing on the horizon.



  1. B. Shneiderman. Science 2.0. Science, 319(7):1349–1350, March 2008.
  2. Andreas Zeller, keynote, MSR’07, see http://msr.uwaterloo.ca/msr2007/Empirical-SE-2.0-Zeller.pdf
  3. http://promisedata.org/data
  4. Victor Basili, Roseanne Tesoriero, Patricia Costa, Mikael Lindvall, Ioana Rus, Forrest Shull, and Marvin Zelkowitz. Building an experience base for software engineering: A report on the first CeBASE eWorkshop. In PROFES (Product Focused Software Process Improvement), pages 110–125, 2001.

Tim Menzies (PhD, UNSW) is an Associate Professor in CSEE at WVU and the author of over 200 refereed publications. At WVU, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA's Office of Safety and Mission Assurance, as well as SBIRs and STTRs with private companies. He teaches data mining and artificial intelligence. Tim is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering. In 2012, he will be the co-chair of the program committee for the IEEE Automated Software Engineering conference.

Forrest Shull (PhD, Maryland) is a senior scientist at the Fraunhofer Center for Experimental Software Engineering in Maryland (FC-MD), a nonprofit research and tech transfer organization, where he leads the Measurement and Knowledge Management Division. At FC-MD, he has been a lead researcher on projects for NASA's Office of Safety and Mission Assurance, the NASA Safety Center, the U.S. Department of Defense, the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and companies such as Motorola and Fujitsu Labs of America.

As an associate adjunct professor at the University of Maryland College Park, he teaches software engineering in the Professional Master of Engineering program. Forrest has also developed and delivered several courses on software measurement and inspections for NASA engineers. Forrest is Editor in Chief of IEEE Software, which delivers reliable, useful, leading-edge software development information to keep engineers and managers abreast of rapid technology change.

REST: The Emerging Architectural Style for Service Oriented Computing

Recent technology trends in Web services indicate that a solution eliminating the perceived complexity of the WS-* standard technology stack may be in sight: advocates of Representational State Transfer (REST) have come to believe that their ideas explaining why the World Wide Web works are just as applicable to solve enterprise application integration problems and to radically simplify the plumbing required to implement a Service-Oriented Architecture (SOA). In this technical briefing we give an update on how the REST architectural style has been recently rediscovered to become the foundation for so-called RESTful Web services. Our goal is to show that the notions of open systems, dynamic discovery, interoperability, reuse, loose coupling, and statelessness usually associated with service oriented computing can be naturally expressed within the set of constraints given by the REST architectural style. We will answer questions such as: Why does the vast majority of existing public Web service APIs declare themselves to be RESTful (even if they are not)? Why is REST perceived to be simple (as opposed to WS-*)? Is there anything missing from today’s RESTful Web services so that they can be fully adopted within enterprise architectures? We will use the ensuing discussion to highlight some open research problems related to this emerging and important area of software service engineering.
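A minimal sketch of REST's uniform-interface and statelessness constraints (an in-memory stand-in, not a real HTTP stack): every resource is named by a URI, a small fixed set of verbs manipulates representations, and each request carries all the state the server needs:

```python
# In-memory resource store; a real service would speak HTTP.
resources = {}

def handle(verb, uri, representation=None):
    """Stateless dispatch: no per-client session, every request is
    self-contained, and the verb vocabulary is fixed for all resources."""
    if verb == "PUT":
        resources[uri] = representation
        return 201, representation           # Created
    if verb == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if verb == "DELETE":
        return 204, resources.pop(uri, None)  # No Content
    return 405, None                          # verb outside the uniform interface

print(handle("PUT", "/orders/42", {"status": "open"}))
print(handle("GET", "/orders/42"))
print(handle("DELETE", "/orders/42"))
print(handle("GET", "/orders/42"))            # gone: (404, None)
```

Contrast this with WS-*, where each service defines its own operation vocabulary: here the interface is identical for every resource, which is what enables generic intermediaries such as caches and proxies.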

Cesare Pautasso is an assistant professor in the Faculty of Informatics at the University of Lugano, Switzerland. Previously he was a researcher at the IBM Zurich Research Lab and a senior researcher at ETH Zurich, Switzerland. His research group focuses on building experimental systems to explore the intersection of model-driven software composition techniques with business process modeling languages. He is the lead architect of JOpera, a RESTful Web service composition tool for Eclipse. His university teaching, industry training, and consulting activities cover advanced topics related to Service Oriented Architectures, Web development, and emerging Web services and middleware technologies. He has co-authored a book titled “SOA with REST” published by Prentice Hall and is co-editor of the book “REST: From Research to Practice” published by Springer. He has co-chaired the 1st and 2nd International Workshops on RESTful Design (WS-REST) at the WWW conference and is general chair of the 9th European Conference on Web Services (ECOWS 2011).

Patents and Software Engineering

In this technical briefing we will explain what a patent is and what a software patent is, including how the rules differ across jurisdictions. We will give a number of examples of patents from different software engineering disciplines. We will discuss different options for getting value out of patents, and the new markets that are now being established for ideas. Finally, we will discuss "good" and "bad" patents, the debate about the contribution of patents to the economy, and the relevant ethical issues.

For any software engineers who are not employees of a large company, and even for some who are, it is important to understand patenting. University professors are now expected to patent, both for the value to themselves and their university and because patents are increasingly recognized alongside publications. Patents are essential for start-ups and independent innovators to protect the IP they have created. Even in large corporations, patents are now considered one of the criteria for advancement. This short session will show people how patents could contribute to their career, and possibly to their bank account as well.

Dr. Shmuel Ur was a research scientist at the IBM research lab in Haifa, Israel, for 16 years, where he held the title of IBM Master Inventor, before becoming an independent inventor (working with Intellectual Ventures and start-ups). He works in the field of software testing, concentrating on coverage and the testing of multithreaded programs. Shmuel has taught software testing at the Technion and Haifa University.

Shmuel received his PhD in Algorithms, Combinatorics, and Optimization in 1994 from Carnegie Mellon University under Michael Trick and Nobel laureate Herbert Simon. He received his BSc and MSc from the Technion in Israel. Shmuel has published in the fields of hardware testing, artificial intelligence, algorithms, software testing, and the testing of multi-threaded programs. He started and chaired PADTAD, a workshop on testing multi-threaded applications, as well as the Haifa Verification Conference, and is on the program committees of many conferences. Shmuel has more than 60 publications and more than 25 granted patents.

Software Visualization - Principles and Practice

Software visualization is defined as “the use of the crafts of typography, graphic design, animation, and cinematography with modern human-computer interaction and computer graphics technology to facilitate both the human understanding and effective use of computer software” [SDBP98]. It is a specialization of information visualization [CMS99] (“the use of computer-supported, interactive, visual representations of abstract data to amplify cognition”).

Software visualization deals with software, both in terms of run-time behavior (dynamic visualization) and structure (static visualization). It has been widely used by the reverse engineering and program comprehension research communities, providing ways to uncover and navigate information about software systems. Oddly, in contrast to information visualization, very little software visualization research has made it through to practice by having an impact on integrated development environments.

Indeed, despite modern IDEs such as Eclipse, which support the manipulation of source code at a higher level of abstraction, in the eyes of many a developer programming is, at the end of the day, equivalent to “writing source code”. I will try to dispel this notion by presenting the principles of software visualization and illustrating the research being performed in the domain. Not only can software visualization lead to beautiful pictures of software systems, it may also be the basis for next-generation IDEs.


[CMS99] Stuart K. Card, Jock D. Mackinlay, and Ben Shneiderman, editors. Readings in Information Visualization
— Using Vision to Think. Morgan Kaufmann, 1999.

[SDBP98] John T. Stasko, John Domingue, Marc H. Brown, and Blaine A. Price, editors. Software Visualization —
Programming as a Multimedia Experience. The MIT Press, 1998.

Michele Lanza is an associate professor at the Faculty of Informatics of the University of Lugano, which he co-founded in 2004. His doctoral dissertation, completed in 2003 at the University of Bern, received the European Ernst Denert award for the best thesis in software engineering of 2003. Prof. Lanza received the Credit Suisse Award for best teaching in 2007 and 2009.

At the University of Lugano Prof. Lanza leads the REVEAL research group, working in the areas of software visualization, evolution, and reverse engineering. He authored more than 100 technical papers and the book “Object-Oriented Metrics in Practice”. Prof. Lanza is involved in a number of scientific communities, and has served on more than 60 program committees.

Context-bounded Verification of Concurrent Software

The hypothesis behind context-bounding is that common errors in multithreaded software often manifest themselves in executions with a small number of context-switches. This hypothesis has both theoretical and practical implications towards the goal of efficient testing and verification of multithreaded software. This talk will review the current body of knowledge and the practical tools that have been built around this idea.
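As a toy illustration of the idea (not any specific tool): enumerate only those interleavings of two non-atomic increments that use at most a bounded number of preemptive context switches. The classic lost-update bug is invisible at bound 0 but already appears at bound 1:

```python
def schedules(lens, bound):
    """Yield interleavings of threads (given as per-thread step counts)
    with at most `bound` preemptive context switches. Switching away
    from a finished thread is forced and therefore not counted."""
    def rec(pcs, cur, switches, sched):
        if all(pcs[t] == lens[t] for t in range(len(lens))):
            yield sched
            return
        for t in range(len(lens)):
            if pcs[t] == lens[t]:
                continue
            s = switches + (cur is not None and t != cur
                            and pcs[cur] < lens[cur])
            if s > bound:
                continue
            nxt = list(pcs)
            nxt[t] += 1
            yield from rec(nxt, t, s, sched + [t])
    yield from rec([0] * len(lens), None, 0, [])

def run(sched):
    """Two threads each perform a non-atomic increment of a shared
    counter: step 0 reads x into a register, step 1 writes it back + 1."""
    x = 0
    regs = [0, 0]
    pcs = [0, 0]
    for t in sched:
        if pcs[t] == 0:
            regs[t] = x          # read
        else:
            x = regs[t] + 1      # write (possibly a stale value)
        pcs[t] += 1
    return x

# Bound 0: each increment runs atomically, so the result is always 2.
assert all(run(s) == 2 for s in schedules((2, 2), bound=0))
# Bound 1: one preemption suffices to expose the lost update.
buggy = [s for s in schedules((2, 2), bound=1) if run(s) != 2]
print(buggy)   # e.g. [0, 1, 1, 0]: thread 0 writes back a stale read
```

The payoff of context-bounding is exactly this: the bounded search space is dramatically smaller than the full set of interleavings, yet it still contains the schedules that trigger most real concurrency bugs.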

Shaz Qadeer is a Senior Researcher in the RiSE group at Microsoft Research Redmond. A goal of his current work is to make concurrent programming mainstream by eliminating the mystique behind it. Towards this end, he is developing simple and practical methods for reasoning about concurrent programs.
