Paul Gerrard
Gerrard Consulting (UK)

Biography

Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, occasionally winning awards for them. He has won several international awards for his contribution to the industry and has programme-chaired several prestigious conferences.

What Testers Need from Machine Learning and Artificial Intelligence

Machine Learning (ML) and Artificial Intelligence (AI) are all the rage now – but how much of it is hype? Of course, great strides are being made in business applications in every domain. But what about testing? Most vendors are including ML/AI in their marketing collateral, and a few are actually building intelligent features into their tools. The problem for the practitioner is that everyone is doing it differently. There is no common definition of ML/AI, no common definition of ML/AI applications in testing, and no common terminology. So it is hard to separate the hype from the hard facts. In this talk, Paul sets out a workable definition of ML/AI and identifies the tool features that a) would be most valuable to testers and b) are amenable to ML/AI support. There are some constraints on what ML/AI can do – for example, it needs:

  •   Data from production logging, test definition and test execution
  •   Models of system functionality mapped to data and to the tests that cover it
  •   Imagination

Right now, the low-hanging fruit is in the packaged applications domain, but the future is bright if we can match ML/AI and data to our testing thought processes to build intelligent test assistants.
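
As a hint of what that matching might look like, the sketch below mines a toy production log for the most frequent user journey and suggests it as a candidate regression test. It is a deliberately simple, hypothetical example: the journeys data, the first-order transition model and the most_likely_journey helper are assumptions made for illustration, not features of any particular tool.

    from collections import Counter, defaultdict

    # Hypothetical production log: each entry is one user's sequence of actions.
    journeys = [
        ["home", "search", "product", "basket", "checkout"],
        ["home", "search", "product", "basket", "checkout"],
        ["home", "product", "basket", "checkout"],
        ["home", "search", "search", "product"],
        ["home", "account", "orders"],
    ]

    # Build a simple first-order usage model: how often each action follows another.
    transitions = defaultdict(Counter)
    for journey in journeys:
        for current, following in zip(journey, journey[1:]):
            transitions[current][following] += 1

    def most_likely_journey(start, max_steps=10):
        """Follow the most frequent transition at each step to suggest a test path."""
        path = [start]
        visited = {start}
        for _ in range(max_steps):
            options = transitions.get(path[-1])
            if not options:
                break
            following, _count = options.most_common(1)[0]
            if following in visited:   # avoid trivial loops in this toy model
                break
            path.append(following)
            visited.add(following)
        return path

    # The highest-frequency journey becomes a candidate priority for regression testing.
    print(most_likely_journey("home"))   # ['home', 'search', 'product', 'basket', 'checkout']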

Takeaways:

  •   What is a useful definition of ML/AI in the context of testing?
  •   What features do testers need that can be supported by ML/AI?
  •   What is the future for ML/AI in testing?

About the Half-Day Tutorial

    Problem Solving for Testers

    In some organisations, it is perfectly fine for testers to report failures as they experience them. Capturing the details of the behaviour that does not meet expectations, how to reproduce the problem, and an assessment of severity and/or priority might provide enough information to allow developers to diagnose and debug the problem.

    But in many situations, this simply does not work. For example, in a company that builds hardware and writes its own firmware and application software, diagnosing the source of a problem can be a difficult task. Where a device has many, many configurations or connects to a range of other hardware, firmware or software applications, it might be impossible to reproduce the problem outside the test lab.

    In these situations – and they are increasingly common – the task of the tester is to look beyond the visible signs of failure and to investigate further: to narrow down the possibilities; to identify and ignore misleading symptoms; and to get to the bottom of the problem.

    In this tutorial, Paul explores how we can be deceived by evidence and how we can improve our thinking to be more certain of our conclusions. You will learn more about the design of experiments, recognise what you can and cannot control, learn how to diagnose the causes of failure systematically, and work as a team to solve problems more effectively.
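
    One familiar tactic for narrowing down possibilities is bisection: repeatedly halving the set of recent changes or configuration differences until the one that triggers the failure is isolated. The sketch below is an illustration only, not material from the tutorial; the changes list and the system_fails check are hypothetical stand-ins for whatever changes and failure test apply in a real investigation.

        # A minimal sketch of one systematic diagnosis tactic: bisecting an ordered
        # list of changes to find the first one that makes the system fail.
        # "changes" and "system_fails" are hypothetical stand-ins for illustration.

        def first_bad_change(changes, system_fails):
            """Return the index of the earliest change whose inclusion causes failure.

            Assumes the failure is deterministic and, once introduced, persists when
            later changes are added (the same assumption behind tools like git bisect).
            """
            low, high = 0, len(changes) - 1
            while low < high:
                mid = (low + high) // 2
                if system_fails(changes[: mid + 1]):
                    high = mid        # the failure is already present in the first half
                else:
                    low = mid + 1     # the failure is introduced somewhere later
            return low

        # Toy example: the failure appears once change "C" is included.
        changes = ["A", "B", "C", "D", "E"]
        print(first_bad_change(changes, lambda applied: "C" in applied))   # prints 2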