Speaker

Mesut Durukal

Mesut holds BSc and MSc degrees in Electrical & Electronics Engineering from
Bogazici University. He worked in the defense industry for 7 years, managing the
testing activities of a multinational project.
He then worked for Siemens AG for 4 years, holding the technical lead position in
the Istanbul QA office, where he managed 18 people within the global organization.
Currently, he works for Rapyuta Robotics in Tokyo. In the robotics domain, his
expertise lies in test automation and maintenance of the whole CI/CD pipeline.
He is proficient in CMMI and PMP, with experience under his belt in:
• Planning, Scheduling, Monitoring and Reporting, Audits and Reviews, RCA
• Process Improvement, Requirement Analysis, Stakeholder & Risk Management

Title: Do Bugs Speak?

Abstract:

Do bugs speak? 

Yes, they do. People speak different languages: English, German, French, Chinese, and so on. But is communication with bugs possible? It is important to understand them, because they really do tell us something. There is valuable information underlying the defects of a software product, and mining that information promises improvements in terms of quality, time, effort, and cost.

Problem Definition 

A comprehensive analysis of all created defects can provide precious insights about the product. For instance, if we notice that a bunch of defects cluster around one feature, we can conclude that the feature should be investigated and cured. Or we can make observations about the severity or assignees of similar defects. In short, there are potential patterns to be discovered underneath defects.

My Experiences 

1) The first step of the bug management process applied in my study was to build a bug life cycle. After realizing that there were some inefficiencies in the workflow, we customized the state transition flow and added some extra states, such as reopening a bug or final customization. (Visualization will be added in the report.)
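
As an illustration, such a customized life cycle can be encoded as a simple state-transition map and validated in code. This is only a sketch: the state names below are assumptions for demonstration, not our actual workflow.

    # Sketch of a customized bug life cycle as a state-transition map.
    # State names are assumptions for illustration, not the real workflow.
    ALLOWED_TRANSITIONS = {
        "Open": {"In Progress", "Rejected"},
        "In Progress": {"Resolved"},
        "Resolved": {"Verified", "Reopened"},  # extra state: reopening a bug
        "Reopened": {"In Progress"},
        "Verified": {"Closed"},
        "Closed": set(),
    }

    def is_valid_transition(current: str, target: str) -> bool:
        """Return True if the workflow allows moving a bug from current to target."""
        return target in ALLOWED_TRANSITIONS.get(current, set())

    assert is_valid_transition("Resolved", "Reopened")
    assert not is_valid_transition("Closed", "In Progress")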

2) To reduce the manual effort of monitoring bugs, I implemented code to automatically query the APIs of issue tracking systems. In this way, bug resolution durations, bug ages, and open/closed/resolved counts can be pulled. (Details will be added in the report.)
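
As a minimal sketch, assuming a Jira-style REST API (the instance URL, token, and project key below are placeholders), such a query can look like this:

    # Pull bug counts from an issue tracker's REST API (Jira-style /search endpoint).
    # BASE_URL, the token, and the project key are hypothetical placeholders.
    import requests

    BASE_URL = "https://example.atlassian.net"
    HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

    def count_bugs(jql: str) -> int:
        """Run a JQL query and return the total number of matching issues."""
        resp = requests.get(
            f"{BASE_URL}/rest/api/2/search",
            headers=HEADERS,
            params={"jql": jql, "maxResults": 0},  # only the total is needed
        )
        resp.raise_for_status()
        return resp.json()["total"]

    for status in ("Open", "Resolved", "Closed"):
        print(status, count_bugs(f'project = PROJ AND issuetype = Bug AND status = "{status}"'))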

3) To concentrate on the highest-priority bugs first, I constructed a dashboard showing the bug distribution across severity levels. In each sprint, we monitored the share of high-priority bugs among the others. (A sketch of this kind of grouping follows item 4.)

4) We added further distributions of bugs across various parameters, such as component or testing type. For instance, if most of the bugs heap up on uploading features, we would check the health of the relevant component's deployment. Or if, in a sprint, most of the bugs are related to documentation testing, we could get insights about the documentation process or about whether YAML generation is broken.
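
Items 3 and 4 both come down to grouping bugs by a categorical attribute. A minimal sketch with pandas, where the field names and sample records are assumptions:

    # Compute bug distributions for a dashboard; column names are assumptions.
    import pandas as pd

    bugs = pd.DataFrame([
        {"key": "B-1", "severity": "Critical", "component": "upload", "test_type": "functional"},
        {"key": "B-2", "severity": "Minor", "component": "upload", "test_type": "documentation"},
        {"key": "B-3", "severity": "Major", "component": "auth", "test_type": "functional"},
    ])

    # One line per dashboard widget: counts per severity, component, or testing type.
    for field in ("severity", "component", "test_type"):
        print(bugs[field].value_counts(), "\n")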

5) We concentrated on escaped bugs, working to keep defects from reaching production.

6) I tried to adapt Machine Learning to bug management processes. (Applied techniques and accuracy results will be shared.)
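
As an illustrative sketch of the kind of model involved (the training data below is invented; the actual techniques and accuracy figures are what the talk reports), bug summaries can be classified with a TF-IDF + SVM pipeline:

    # Classify bug summaries into components with TF-IDF features and a linear SVM.
    # The training samples and labels are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    summaries = [
        "upload fails for files larger than 2GB",
        "login page throws 500 after password reset",
        "yaml generation produces invalid indentation",
        "upload progress bar stuck at 99 percent",
    ]
    components = ["upload", "auth", "docs", "upload"]

    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(summaries, components)
    print(model.predict(["cannot upload attachments from mobile"]))  # e.g. ['upload']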

Usage of the Outputs

* After constructing bug-age distribution dashboards, the CCB (Change Control Board) started to monitor them each sprint and did not let bugs stay open for a long time.

* Similarly, by checking tables of bugs across priorities, critical bugs were not allowed to go to production.

* When the gap between opened and resolved bugs was detected to be widening, some sprints were dedicated solely to bug resolution (POs did not define new features in such situations). (A sketch of this gap check follows this list.)

* BAD METRICS! We avoided using the number of found bugs as a performance criterion for QA teams after we observed that people started to raise mostly cosmetic bugs.
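
A minimal sketch of the open-vs-resolved gap check mentioned above; the per-sprint counts and the threshold are invented:

    # Flag a widening gap between opened and resolved bugs across sprints.
    # Counts and threshold are assumptions for illustration.
    opened_per_sprint = [12, 15, 14, 18, 20]
    resolved_per_sprint = [11, 12, 10, 11, 10]

    gap = 0
    for sprint, (opened, resolved) in enumerate(zip(opened_per_sprint, resolved_per_sprint), 1):
        gap += opened - resolved
        print(f"Sprint {sprint}: cumulative gap = {gap}")
        if gap > 15:  # threshold is a team-specific choice
            print("  -> consider dedicating the next sprint to bug resolution")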

Wrap-up

Defect analysis is very important for QA people, and especially for QA managers. We use many different views to get an idea about the product itself or about our procedures. For instance, while monitoring the defect distribution across testing types, we will discuss how to judge the quality of our testing approach, i.e., whether we are applying all types (functional, performance, documentation, etc.) in a balanced way. Over another graph, in which we track the gap between open and resolved defects, we will discuss which action items we can take when the gap widens. Finally, with ML assistance, we will see how we can reduce manual effort and cost.

Results & Conclusion 

In this session, we discuss data mining from bugs and the usage of ML in defect management. The objectives of the study are:

  • (a) To present the ways in which defects can be analyzed
  • (b) To present how ML can be used to make observations over defects
  • (c) To provide empirical information supporting (b)

Lessons Learned 

After all the experience I have gathered building a successful bug management process, I gained insights into the most critical parts of building such a lifecycle:

  • What kind of environment we should build to be able to extract hidden patterns and valuable information from defects.
  • What kinds of monitoring can be used to make the status as visible as possible. (Various pie charts, bar graphs, and tables will be shown to demonstrate the distribution of defects across different aspects.)
  • What the valuable information in each monitoring activity can be.
  • How we can classify/cluster defects using NLP techniques with various algorithms (including SVM, Decision Trees, and Ensemble Methods), with benchmarking and results (accuracy rates). (A minimal clustering sketch follows this list.)
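
Classification was sketched earlier; as a companion, a minimal clustering sketch, where the sample texts and the number of clusters are assumptions:

    # Cluster bug descriptions with TF-IDF + k-means; bugs sharing a label
    # likely concern the same area. Sample texts and k are assumptions.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    descriptions = [
        "upload fails for large files",
        "upload progress bar freezes",
        "login session expires immediately",
        "password reset email never arrives",
    ]

    X = TfidfVectorizer().fit_transform(descriptions)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)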

The presentation also aims to help attendees to:

  • Mine valuable data from defects 
  • Get insights about test cycles 
  • Reduce defect assignment errors 
  • Perform correct defect triage