Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them. Educated at the University of Oxford and Imperial College London, he is a Principal of Gerrard Consulting Limited and the host of the Technology Leadership Forum. He was the Programme Chair for the 2014 EuroSTAR conference in Dublin and for several other conferences in the last decade. In 2010 he won the EuroSTAR Testing Excellence Award, in 2013 he won the inaugural TESTA Lifetime Achievement Award, and in 2018 he won the ISTQB Testing Excellence Award. He is currently working with an Irish government agency to create a future skills framework for software testers.
What Testers Need from ML/AI
Machine Learning (ML) and Artificial Intelligence (AI) are all the rage now – but how much of it is hype? Of course, great strides are being made in business applications in every domain. But what about testing? Most vendors are including ML/AI in their marketing collateral, and a few are actually building intelligent features into their tools. The problem for the practitioner is that everyone is doing it differently. There is no common definition of ML/AI, no common definition of ML/AI applications in testing, and no common terminology. So it is hard to separate the hype from the hard facts. In this talk, Paul sets out a workable definition of ML/AI and identifies the tool features that would a) be most valuable to testers and b) be amenable to ML/AI support. There are some constraints on what ML/AI can do – for example:
ML/AI tools need data from production logging, test definition and test execution
They need models of system functionality, mapped to the data and the tests that cover it
Right now, the low-hanging fruit is in the packaged applications domain, but the future is bright if we can match ML/AI and data to our testing thought processes to build intelligent test assistants.
What is a useful definition of ML/AI in the context of testing?
What features do testers need that can be supported by ML/AI?
What is the future for ML/AI in testing?