During this demo, you’ll learn how observability empowers organizations to gain actionable insights into their systems and processes, improving resilience, optimizing costs, and enabling faster, data-driven decisions across the software development lifecycle.
Delivery metrics are more than tracking tools—they drive continuous improvement, risk reduction, and operational efficiency. By integrating observability through OpenTelemetry and DORA metrics, teams can proactively identify bottlenecks, improve workflows, and deliver software faster without compromising quality.
Delivery metrics allow teams to measure performance, optimize processes, and collaborate effectively across departments. Metrics like deployment frequency and lead time for changes help organizations assess efficiency and identify improvement opportunities. DORA metrics, including change failure rate and mean time to recovery, are essential for reducing business risks by minimizing disruptions and speeding up recovery times.
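The four DORA metrics described above reduce to simple arithmetic over deployment and incident records. The sketch below is a minimal, illustrative calculation, not part of any Liatrio tooling; the record fields and function names are assumptions, and real data would come from CI/CD and incident-management systems.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Illustrative records; real data would come from CI/CD and incident tooling.
@dataclass
class Deployment:
    committed_at: datetime  # when the change was committed
    deployed_at: datetime   # when it reached production
    failed: bool            # did it cause a production failure?

@dataclass
class Incident:
    opened_at: datetime
    resolved_at: datetime

def dora_metrics(deployments: list[Deployment], incidents: list[Incident], days: int) -> dict:
    """Compute the four DORA metrics over an observation window of `days` days."""
    lead_time = mean(
        (d.deployed_at - d.committed_at).total_seconds() for d in deployments
    ) / 3600  # hours from commit to production
    frequency = len(deployments) / days  # deployments per day
    failure_rate = sum(d.failed for d in deployments) / len(deployments)
    mttr = mean(
        (i.resolved_at - i.opened_at).total_seconds() for i in incidents
    ) / 3600  # hours to restore service
    return {
        "lead_time_hours": lead_time,
        "deploys_per_day": frequency,
        "change_failure_rate": failure_rate,
        "mttr_hours": mttr,
    }
```

Feeding the function a window of recent deployments yields the numbers a dashboard would plot, which makes it easy to sanity-check what a visualization claims.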
These insights enable teams to make data-driven decisions that align development efforts with organizational goals. With continuous iteration, teams can deliver value faster while maintaining the reliability and security of their systems.
At Liatrio, we prioritize actionable observability, where metrics aren’t just collected—they drive continuous improvements. We emphasize fast feedback loops, small-batch delivery, and transparent collaboration. Data without action is just noise, which is why teams should use telemetry to proactively address inefficiencies before they impact end users.
Transformation begins by establishing baseline metrics and gradually adopting advanced practices like continuous delivery and automated security checks. Observability ensures that every step is tracked and prioritized for optimal results.
OpenTelemetry, a vendor-neutral observability framework, integrates seamlessly into monitoring setups like Grafana dashboards. This allows teams to track leading indicators, such as branch count and age, contributor counts, and open pull request age, alongside lagging DORA indicators, such as deployment frequency, lead time for changes, change failure rate, and mean time to recovery.
By correlating leading and lagging indicators, teams can proactively resolve issues and prevent major production disruptions.
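One concrete way to act on leading indicators is to flag stale branches and long-open pull requests before they show up in the lagging DORA numbers. The sketch below is a hedged illustration; the thresholds and tuple-based inputs are assumptions to keep the example self-contained.

```python
from datetime import datetime

# Illustrative thresholds; tune these per team.
STALE_BRANCH_DAYS = 14
STALE_PR_DAYS = 5

def review_bottleneck_signals(
    branches: list[tuple[str, datetime]],
    pull_requests: list[tuple[str, datetime]],
    now: datetime,
) -> list[str]:
    """Flag leading-indicator signals that often precede delivery slowdowns.

    `branches` and `pull_requests` are (name, created_at) pairs.
    """
    stale_branches = [name for name, created in branches
                      if (now - created).days >= STALE_BRANCH_DAYS]
    stale_prs = [name for name, created in pull_requests
                 if (now - created).days >= STALE_PR_DAYS]
    signals = []
    if stale_branches:
        signals.append(f"{len(stale_branches)} branch(es) older than {STALE_BRANCH_DAYS} days")
    if stale_prs:
        signals.append(f"{len(stale_prs)} pull request(s) open longer than {STALE_PR_DAYS} days")
    return signals
```

A report like this surfaces the correlation described above (long branch age plus long PR age suggests a review bottleneck) while the fix, such as pair programming or better review processes, remains a team decision.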
Automated deployments are critical to achieving faster feedback and streamlined delivery. GitHub Actions, OIDC authentication, and tools like Terraform create secure deployment pipelines. Renovate Bot further enhances automation by managing dependency updates, minimizing manual intervention, and ensuring continuous delivery.
In the event of deployment issues, automated workflows create incidents and trigger GitHub-based resolutions. This process reduces mean time to recovery and ensures minimal downtime, keeping development cycles on track.
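The incident flow described above can be reduced to a small piece of decision logic: a failed deployment opens an incident record, and closing it yields the time-to-restore measurement. This is a simplified sketch under stated assumptions; a real workflow would create a GitHub issue through the API, which is omitted here.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentRecord:
    title: str
    opened_at: datetime
    resolved_at: Optional[datetime] = None

    def time_to_restore_hours(self) -> Optional[float]:
        """Hours between opening and resolution; None while still open."""
        if self.resolved_at is None:
            return None
        return (self.resolved_at - self.opened_at).total_seconds() / 3600

def handle_deployment_result(succeeded: bool, sha: str, at: datetime) -> Optional[IncidentRecord]:
    """On failure, open an incident; a real workflow would file a GitHub issue."""
    if succeeded:
        return None
    return IncidentRecord(title=f"Deployment failure at {sha}", opened_at=at)
```

Keeping the open/resolve timestamps on the incident record is what lets mean time to recovery be computed automatically rather than estimated after the fact.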
Observability isn’t about pointing fingers—it’s about creating safe systems where teams can learn, improve, and innovate. By tracking meaningful metrics, teams build trust, reduce delivery risks, and maintain a culture of continuous improvement.
Ready to optimize your delivery pipelines with actionable observability and data-driven decision-making? Contact Liatrio today to see how we can help transform your organization’s software delivery practices.
Hey, I'm Adriel. I'm a technical lead here at Liatrio, and today I'm going to demo how we observe delivery. We're going to talk about a few different things. First, we'll start with some slides giving the context of why observing delivery is important. We'll talk about some of our first principles at Liatrio, engineering defaults, and transformation. From there, we’ll jump into the demo, covering topics like OpenTelemetry, leading and lagging indicators (also known as DORA metrics), and extras such as OIDC authentication and automated dependency management. Let’s get started.

First, let’s discuss why delivery metrics matter. Delivery metrics allow teams to quantify performance. This means teams can track their progress, set benchmarks, and identify areas for improvement. Continuous improvement becomes possible as they identify bottlenecks and inefficiencies, refine processes, optimize workflows, and enhance collaboration to accelerate delivery. Observing delivery also helps mitigate risks, especially with metrics like change failure rate and mean time to recovery (both part of the DORA metrics). These metrics help minimize business risks associated with service disruptions and downtime. Teams can make informed, data-driven decisions and align with business goals while improving collaboration across cross-functional teams, breaking down silos, and fostering a holistic approach to software delivery.

This ties into some of Liatrio’s first principles, which our engineering defaults align with. We value small-batch delivery and fast feedback, foster a culture of experimentation and empowerment, and emphasize transparent communication (often overly so). We focus on flow and action because observability without action is just monitoring. To illustrate, let me share a quick story. If you’ve read The Phoenix Project, you might remember Brent—a rockstar employee but a bottleneck. In my career, I’ve been a "Brent" at times, causing bottlenecks.
Observing and making data-driven decisions helped me personally identify areas for improvement and ensure I wasn’t slowing down teams I worked with.

Let’s talk about transformation. Ideally, transformations follow a general timeline: first, identify where you are using baseline data. As the journey progresses, teams observe themselves and see what matters most to them. Over time, they drive positive change, such as increased deployment frequency and reduced mean time to restore. Regular reflections allow teams to evaluate their progress, identify strengths, and find areas for improvement. These reflections align with the engineering defaults. Teams may reorder priorities based on how they’re performing and the most pressing business problems at hand. Generally, teams start with small tasks, pair programming, and ownership of quality. As they improve, they may focus on trunk-based development, test-driven development, and fast automated builds. Eventually, they work toward continuous delivery, automated deployments, built-in security, and production readiness.

To measure these defaults, we rely on key metrics like DORA. These include lead time for changes, deployment frequency, change failure rate, and mean time to restore. DORA metrics are lagging indicators, meaning they show what has already happened. On the other hand, leading indicators help predict future performance and include metrics like work cycle time, branch and pull request data, contributor counts, code coverage, and code quality. DevX (developer experience) metrics are also essential, covering things like onboarding time, adoption rates, and contributions. Additionally, we track interpersonal metrics, such as the joy index or business OKRs, to ensure teams are aligned with engineering defaults.

Now, let’s dive into the demo, starting with OpenTelemetry. OpenTelemetry is a framework and set of tools for observability. It’s the second-most actively contributed project in the CNCF, after Kubernetes.
OpenTelemetry is vendor-agnostic, allowing you to switch observability providers without rewriting your instrumentation. At Liatrio, we heavily support OpenTelemetry and leverage it to display both leading and lagging indicators. One of the key components of this demo involves Grafana dashboards, where we can visualize our delivery metrics.

In the delivery metrics dashboard, we track leading indicators like branch count and age, contributor counts, and open pull request age. For example, a high branch count or long branch age could indicate cognitive overhead, lack of cleanup, or technical debt. If you correlate branch age with open pull request age, you might uncover bottlenecks, which could be solved with pair programming or better review processes. The DORA demo dashboard displays lagging indicators, such as deployment frequency, lead time, and change failure rate. In this demo, limited data means not all metrics are fully populated, but real repositories would provide more insights.

For this demo, we’re using a Terragrunt repository with GitHub Actions for deployments. The repo handles a Terraform Lambda module deployment to AWS, using OIDC authentication to securely access AWS without storing credentials in the repository. We’ll perform a demo deployment, where we downgrade the version and commit the change. The workflows will run, and if the deployment fails, an incident issue will be automatically created in the GitHub project. If we succeed, the change will deploy, and we’ll see the updated version reflected in the application.

For example, we downgraded from Go to Python in this scenario. If it causes an issue, we’d create an incident in the repository and use automated tools like Renovate Bot to update dependencies and correct the version. Once the issue is resolved, we’d see changes reflected in the DORA dashboard, including updated deployment frequency and time-to-restore metrics.
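The deployment events that feed these dashboard updates could be shaped roughly like the sketch below. The JSON field names, repository name, and structure are illustrative only, not the exact OTLP schema; a real pipeline would emit log records through an OpenTelemetry SDK and ship them to a collector endpoint.

```python
import json
from datetime import datetime, timezone

def deployment_event(repo: str, sha: str, succeeded: bool, at: datetime) -> str:
    """Build an illustrative JSON event describing one deployment.

    A real pipeline would emit an OTLP log record via an OpenTelemetry SDK
    and send it to a collector; this sketch only shows the kind of
    attributes a DORA dashboard needs.
    """
    body = {
        "timestamp": at.astimezone(timezone.utc).isoformat(),
        "attributes": {
            "repository": repo,
            "commit.sha": sha,
            "deployment.status": "success" if succeeded else "failure",
        },
    }
    return json.dumps(body)
```

The status and timestamp attributes are what let the dashboard derive deployment frequency and, when paired with incident events, time to restore.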
Under the hood, this process leverages GitHub workflows and OpenTelemetry for collecting event logs and sending them to an OpenTelemetry collector. We also have resources, documentation, and a reference architecture available to help organizations implement observability practices.

In summary, this demo shows how to observe leading and lagging indicators, leverage tools like Renovate Bot and OpenTelemetry, and use data to drive continuous improvement. We’ve also emphasized the importance of safely observing human systems to foster trust and safety among teams. If you have any questions, feel free to reach out. Thanks for joining me, and we’ll catch you in the next session.