
Simon Stewart (UK)
About the talk
Takeaways
- It’s a truism that test results take too long to come back to us. We want them faster!
- By re-ordering tests and running only the ones we need, we can surface the failure case more quickly.
- Ways to filter tests down to just the ones we need: using AI, using statistics, and querying the build graph.
- AI seems attractive, and it’s an excellent pattern spotter, but it hallucinates, so we end up not running tests we should have.
- Statistical analysis of past test runs allows us to run the tests most likely to fail first, but we still need to run them all to find out if a change is safe.
- Querying the build graph allows us to identify the exact subset of tests that should be run.
- The downside? Getting a fine-grained build graph is hard with traditional test tools.
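
The statistical approach above can be sketched in a few lines: rank tests by their historical failure rate so the likeliest failures run first. This is only an illustrative sketch; the test names and history data are invented, and a real system would also weight recency and flakiness.

```python
from collections import Counter

def prioritise(history):
    """Order test names by historical failure rate, highest first.

    history: list of (test_name, passed) records from past runs.
    """
    runs = Counter(name for name, _ in history)
    fails = Counter(name for name, passed in history if not passed)
    return sorted(runs, key=lambda t: fails[t] / runs[t], reverse=True)

# Illustrative history: broken_test always fails, flaky_test half the time.
history = [
    ("stable_test", True), ("stable_test", True),
    ("flaky_test", False), ("flaky_test", True),
    ("broken_test", False), ("broken_test", False),
]
order = prioritise(history)
# → ["broken_test", "flaky_test", "stable_test"]
```

Note that, as the takeaway says, this only changes the order: every test still has to run before we know the change is safe.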
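
The build-graph approach amounts to a reverse-dependency query: walk from the changed targets to every test that transitively depends on them. A minimal sketch, assuming the graph is already available as a plain mapping (with Bazel, a query like `bazel query "kind(test, rdeps(//..., //some:target))"` answers the same question):

```python
def affected_tests(deps, tests, changed):
    """Return the tests that transitively depend on any changed target.

    deps: mapping of target -> set of targets it depends on.
    tests: the set of targets that are tests.
    changed: the set of targets touched by this change.
    """
    # Invert the graph so we can walk from changed targets to dependents.
    rdeps = {}
    for target, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(target)
    seen, stack = set(changed), list(changed)
    while stack:
        for dependent in rdeps.get(stack.pop(), ()):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return tests & seen

# Illustrative graph: only test_a depends on the changed lib_a.
deps = {
    "test_a": {"lib_a"},
    "test_b": {"lib_b"},
    "lib_a": set(),
    "lib_b": set(),
}
selected = affected_tests(deps, {"test_a", "test_b"}, {"lib_a"})
# → {"test_a"}
```

Unlike the statistical ranking, this identifies the exact subset of tests to run, but it is only as good as the granularity of the graph.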
Biography
Simon Stewart has been a professional software developer since before the millennium began. He was the lead of the Selenium project for over a decade and is the co-editor of the W3C WebDriver and WebDriver BiDi specs.
As well as browser automation, Simon is also interested in monorepos, blazing fast byte-for-byte reproducible builds, and scaling software development efficiently. He draws on his experience working in open source and at ThoughtWorks, Google, and Facebook. He was the tech lead of Facebook’s build tool team, and is currently working on projects using Bazel, for which he’s the maintainer of several rulesets.
Simon lives in London with his family and dog.