KEYNOTE SPEECH: Observability: What, why and how (on a shoestring budget)
Observability is a mix of art and science. There is a science to how you shape, collect, and visualise telemetry from your complex software systems. A few characteristics of this telemetry, or more simply this data, can have an outsized impact on the experience of the people who support those systems and interrogate that data. At the same time, there is an art to collecting data in a way that protects user privacy, keeps budgets under control, and still supports product and engineering needs.
We talk about observability because building any software requires feedback loops to understand the success or failure of your features. Building software that enables its users to in turn build their own bespoke software adds a second layer of observability, with new requirements of its own.
In this talk, Abby will dive deep into the whys behind the characteristics of telemetry that best support observable systems, before taking everyone along on her journey building Kratix, a Free and Open Source (FOSS) framework for building bespoke internal platforms. Instead of painting the rosy (and unrealistic) picture of an ideally instrumented code base with perfect data visualisations, Abby will share the story of her journey at a seed-stage startup and how a test-conscious team balanced the costs and benefits of building in quality attributes early.
HALF-DAY TUTORIAL: Introduction to TraceTest: Automated assertions against telemetry data
Automated testing hasn’t changed much in the last decade. Selenium was first popularised in 2011. Docker became a household name just two years later, in 2013. Of course, we have improved our code quality and the speed of our test runs, which in turn enables more testing within the same CI time commitment. But we have continued to refine the same basic options of unit, integration, API, and UI testing.
This workshop introduces a new type of testing: trace-based testing. Tracing is a type of telemetry that enables powerful production debugging, built on the growing quality of auto-instrumentation. Beyond auto-instrumentation, creators can also include key domain-specific data that provides deeper insights. Traces are like logs in that they are produced every time your application is exercised. This means that whether you are running an automated suite, an exploratory session, or a performance test, or customers are simply using your application in production, you are generating trace data. Since every scenario produces this data, any assertions you write against trace data to validate the behaviour of your application can be reused across all of them. This means any of these diverse scenarios can provide feedback, all the way from your local machine through to a distributed and scaled production environment.
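To make the idea concrete, here is a minimal sketch of asserting against trace data, using the OpenTelemetry Python SDK with an in-memory exporter. The span name, the order.total attribute, and the place_order function are illustrative assumptions, not the workshop's exact setup; the point is only that the same telemetry the application emits in production can also be the subject of a test assertion.

```python
# Sketch: add domain-specific data to a span, then assert against the
# spans the application emitted. Assumes the opentelemetry-sdk package;
# names such as "place-order" and "order.total" are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

# Wire up a tracer that keeps finished spans in memory so a test can read them.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")


def place_order(order_total: float) -> None:
    """Application code: creates a span and enriches it with domain data."""
    with tracer.start_as_current_span("place-order") as span:
        span.set_attribute("order.total", order_total)  # domain-specific attribute


def test_order_span_carries_total() -> None:
    """Trace-based assertion: validate behaviour via the emitted telemetry."""
    place_order(42.50)
    spans = exporter.get_finished_spans()
    order_spans = [s for s in spans if s.name == "place-order"]
    assert order_spans, "expected a 'place-order' span to be recorded"
    assert order_spans[0].attributes["order.total"] == 42.50


if __name__ == "__main__":
    test_order_span_carries_total()
    print("trace-based assertion passed")
```

In the tutorial itself, such assertions would typically be expressed through TraceTest against traces collected from a running system, rather than in-process as above; the sketch only illustrates the underlying idea that telemetry produced by exercising the application can double as test output.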