Software testing expert Mary Lou Soffa delivered the Mary Jean Harrold Memorial Distinguished Lecture on Nov. 8. Soffa was Harrold's advisor, and in her talk, "Software Testing: And the Challenges (and Opportunities) Keep Coming!," she discussed the advances the two made in the field.
Now in its fifth year, the annual event honors former School of Computer Science (SCS) Professor Mary Jean Harrold by inviting prominent women in computer science to share their work and research philosophies.
“Mary Jean was the third most prolific software engineer, but she also just contributed with her heart and soul to diversity and service of every aspect of the academic community,” SCS and Georgia Tech ADVANCE Professor Dana Randall said in her introduction.
For Soffa, the lecture was personal and a chance to pay tribute to someone whose career ran in tandem with hers.
“Mary Jean was my Ph.D. student, and she was also my dearest friend for a number of years,” Soffa said.
As the Owen R. Cheatham Professor of Sciences in the Computer Science Department at the University of Virginia, Soffa has made her career in software testing, program analysis, warehouse-scale computing, software systems for multi-core architectures, and compiler optimization.
For her talk, Soffa traced the evolution of testing. The idea of separating debugging from testing started in the 1970s. Testing only became a critical part of software development in the 1980s and 1990s, according to Soffa.
“The software industry uses testing as a primary way of ensuring software behaves the way we would like it to behave and has the quality we want it to have,” Soffa said.
Her lecture broke down the software testing field through four concepts:
- coverage criteria, determining how much of the code has been exercised by the test suite
- regression testing, making sure changes don’t make the program behave incorrectly or adversely impact its code or structure
- input generation and test case prioritization and minimization, selecting the minimum number of test cases needed to meet all requirements
- testing oracles, checking the correctness of test results
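To make the last of these concepts concrete, here is a minimal sketch of a testing oracle in Python. The function names (`my_sort`, `sort_oracle`) and the tiny test suite are hypothetical illustrations, not code from the lecture: the idea is that an oracle checks properties of an output rather than hard-coding an expected answer for every input.

```python
def my_sort(xs):
    # The implementation under test (here, simply Python's built-in sort).
    return sorted(xs)

def sort_oracle(inputs, outputs):
    """Return True if outputs is a correctly ordered permutation of inputs."""
    in_order = all(a <= b for a, b in zip(outputs, outputs[1:]))
    same_elements = sorted(inputs) == sorted(outputs)
    return in_order and same_elements

# A tiny test suite: each case is just an input; the oracle supplies the check.
test_suite = [[], [1], [3, 1, 2], [5, 5, 4]]
results = [sort_oracle(case, my_sort(case)) for case in test_suite]
print(all(results))  # True when every test case passes
```

Because the oracle encodes what "correct" means in general, the same check works for any generated input, which is what makes automated input generation useful in the first place.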
All these advances have made it easier for software testing to work with new technologies, such as the cloud or machine learning (ML), Soffa noted. Yet there are still challenges.
“If you make a change in the cloud environment, how do you retest your application?” she said. “If you’re running an app, you have to make sure there’s no change and expand accuracy with low testing costs.”
ML is a completely new environment because it’s based on data, not code, so software engineers must develop a new systematic way of testing that ensures coverage. This is vital as innovations like autonomous vehicles get closer to market.
Soffa underscored the importance of the field by describing the recent Boeing 737 Max crashes as a failure of software testing. When the plane’s engines were made larger for fuel efficiency, the change fundamentally altered the overall design of the plane and its center of gravity. To compensate, Boeing relied on new software. Yet this software wasn’t consistently tested, and the testing was often done by people underqualified to handle such a complex design.
“Is this what we would expect in testing a critical system?” she said. “Perhaps using inexperienced software developers and testers is not a good idea.”