Meet the researchers: Jens Dietrich - Bugging out

Jens Dietrich and Li Sui look at a whiteboard with a drawing of circles and arrows

Software problems do more than slow your computer down. They can stop your car from starting, and for those who can remember back to Y2K, there’s the general fear that everything could suddenly screech to a halt.

And it’s not a thing of the past. Just ask Boeing, whose profit dropped 21 percent in the first three months of 2019 after software issues grounded its fleet of 737 Max jets.

The people stopping all that from happening are software engineers, who use program analysis methods to detect software bugs so they can be fixed before they cause these problems.

SfTI Seed researcher Jens Dietrich, an Associate Professor at Victoria University of Wellington, is a software engineer focusing on bridging the gap between the yin and yang of program analysis – static and dynamic program analysis.

. . . bridging the gap between the yin and yang of program analysis . . .

“When using dynamic program analysis to test, you execute the program and observe whether it behaves as expected. The problem with that is no matter how much you test, you’re facing a mathematical explosion of possibilities, so you’re really just testing an approximation of how your software is going to behave.
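The path explosion Jens describes shows up even in a tiny program. The Java sketch below is hypothetical (the class, method and numbers are invented for illustration): each boolean input doubles the number of execution paths, so a test suite that exercises three of the four input combinations still misses the one path where the bug lives.

```java
// Hypothetical example: two boolean inputs give four execution paths;
// a test suite that covers three of them still misses the bug.
public class DiscountCalculator {

    // Intended rule: total discount is capped at 25 percent. But the two
    // discounts compound, so member + sale actually gives 28 percent off.
    static int priceCents(int baseCents, boolean member, boolean sale) {
        int p = baseCents;
        if (member) p = p * 80 / 100;  // 20% member discount
        if (sale)   p = p * 90 / 100;  // 10% sale discount
        return p;                      // bug: no cap applied on the combined path
    }

    public static void main(String[] args) {
        // Dynamic analysis only observes the inputs we actually run:
        System.out.println(priceCents(10_000, false, false)); // 10000
        System.out.println(priceCents(10_000, true,  false)); // 8000
        System.out.println(priceCents(10_000, false, true));  // 9000
        // The fourth path (true, true) is never exercised, so the
        // missing discount cap goes unnoticed.
    }
}
```

With two booleans there are already four paths; every additional branch doubles the count again, which is the explosion of possibilities Jens refers to.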

“In static program analysis you’re building a model of your software and describing it in mathematical terms so you can reason about how it will behave. The problem with that is you get a lot of false positives or ‘crying wolf syndrome’, where the analysis tells you something’s bad when it really isn’t,” Jens says.
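A common source of such false positives is an analysis that reasons about each branch in isolation. The Java sketch below is hypothetical (names invented for illustration): `name` is assigned on every real execution, but a conservative null-checker that does not connect the two conditions will still warn that the final line may dereference null.

```java
// Hypothetical example of a static-analysis false positive:
// the analyser over-approximates the set of possible paths and
// warns about a null dereference that can never actually occur.
public class Lookup {

    static String label(int code) {
        String name = null;
        if (code >= 0) name = "code-" + code;
        if (code < 0)  name = "negative";
        // Every int is either >= 0 or < 0, so name is always assigned.
        // An analyser that treats the two if-statements independently
        // still sees a path where both are skipped, and reports
        // "name may be null here" - a false alarm.
        return name.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(label(7));   // CODE-7
        System.out.println(label(-1));  // NEGATIVE
    }
}
```

The warning is spurious, but a developer who sees many of these starts ignoring the tool, which is exactly the ‘crying wolf syndrome’ Jens describes.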

Translated into the real world, software false positives can have deadly consequences. Aviation incident reporting statistics show that pilots often ignore warning sounds made by aircraft because they so frequently turn out to be false alarms.

In 2005, a 737 crashed into a mountain north of Athens, killing 121 passengers and crew, after a cabin pressure warning was ignored, leading to a loss of oxygen in the cockpit.

. . . a cabin pressure warning was ignored leading to loss of oxygen in the cockpit.

“But despite these limitations, static program analysis does provide excellent value for money, and tech companies like Facebook, Google and Uber have started using it on a large scale," Jens says.

A lesser-known problem is that static analyses can also miss bugs.

“A lot of modern software is highly modular, with bits being used in various ways at the same time. Your phone might be a health tracker, a camera and a music device as well as an actual phone. Developers are increasingly using magic code to write software that can be used as a module within many completely different applications, and this is exactly where static analysis struggles to find bugs. Understanding this magic code and the impact it has on our ability to find bugs is the focus of this project.”
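The article does not name the mechanism, but in Java, the language this project studies, “magic code” typically includes dynamic features such as reflection. The sketch below is hypothetical (class and method names invented): the method to call is assembled from a string at run time, so a static analyser that only follows ordinary call edges cannot tell which method actually runs, and any bug inside the reflectively called code sits in its blind spot.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of "magic code": the call target is chosen from a
// string at run time, so it is invisible to an analyser that only
// follows ordinary, statically declared method calls.
public class Plugin {

    public static String asCamera() { return "camera"; }
    public static String asPhone()  { return "phone"; }

    static String activate(String feature) throws Exception {
        // The method name depends on runtime data (config, user input, ...),
        // so the static call graph contains no edge to asCamera or asPhone.
        Method m = Plugin.class.getMethod("as" + feature);
        return (String) m.invoke(null); // null receiver: static method
    }

    public static void main(String[] args) throws Exception {
        System.out.println(activate("Camera")); // camera
        System.out.println(activate("Phone"));  // phone
    }
}
```

This is what makes modular, plug-in style software hard to analyse: the same `activate` code serves completely different features depending on data that only exists when the program runs.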

Jens (left) and team member Li Sui. Their research builds on groundwork done with funding from US-based tech giant Oracle.

Jens’ research uses a benchmark data set of programs written in the commonly used Java language, collated by New Zealand researchers in 2011. He and team members Amjed Tahir, Li Sui and Michael Emery, all from Massey University, are blending the two types of program analysis to find inconsistencies, and then precisely mapping out the analysis blind spots.

When that work is compared with regular static program analysis, there’s a pretty big difference in effectiveness.

“Our work has shown that mainstream static analysis on its own misses about 10 percent. While this doesn’t sound like much, the impact can be tremendous as that 10 percent might be where the most critical system bugs are hiding out. The magic code not covered by current analyses also tends to be the part of the program that has vulnerabilities that can be exploited by hackers,” Jens says.

“Even so, the preliminary results are that by using this hybrid approach to program analysis, you’re going to find a lot more bugs and vulnerabilities.”

Another surprising finding of the experiments has been the human vs robot challenge.

“You’d probably expect that artificial intelligence programs would be better at finding bugs than humans, but our study has revealed that’s not yet the case. Analysis programs can turn up high numbers of issues, but when you look at what they find, it’s pretty shallow compared to people, who tend to find the things that are important.

“So if you don’t want everything to stop working, software engineers are critical,” Jens says.


" . . . software engineers are critical."

“The tools are still way behind.”

Date posted: 20/06/2019
