
Apache Iceberg is a mature open table format that’s been battle-tested in the broader analytics world for years. Now it’s time to apply the benefits of an open and scalable standard to an observability field that badly needs to break out of its siloed heritage.
It isn’t that observability has entirely resisted standards. OpenTelemetry is a well-adopted model for collecting metrics, logs, and traces. But once that data lands, most stacks still fragment it into silos. Joining observability with business data typically means exporting, duplicating, or downsampling. It’s a costly and error-prone process that turns simple questions, such as “Which customers were affected by an outage?” or “What was the revenue impact?”, into bespoke data projects.
Iceberg standardizes how large analytical data sets are stored and evolved on object storage, with ACID transactions, snapshot isolation, time travel, and schema evolution. It’s a neutral table layer that any compatible compute engine can use, including Spark, Flink, Trino/Presto, Dremio, and the major cloud data platforms. That turns telemetry into first-class data that lives alongside customer, finance, and product tables without endless copy pipelines.
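To make that concrete, here is a minimal sketch of what “telemetry as first-class data” could look like in Spark SQL over Iceberg tables. The table and column names (`otel.spans`, `sales.customers`, `status_code`, and so on) are hypothetical, assumed for illustration; the join answers the “which customers were affected by an outage?” question directly, without exporting telemetry out of the lakehouse.

```sql
-- Hypothetical Iceberg tables in one catalog:
--   otel.spans       OpenTelemetry trace spans landed as an Iceberg table
--   sales.customers  an ordinary business table in the same lakehouse
SELECT c.customer_id,
       c.account_tier,
       count(*) AS failed_requests
FROM otel.spans s
JOIN sales.customers c
  ON s.customer_id = c.customer_id
WHERE s.status_code = 'ERROR'
  AND s.start_time BETWEEN TIMESTAMP '2024-06-01 14:00:00'
                       AND TIMESTAMP '2024-06-01 15:30:00'
GROUP BY c.customer_id, c.account_tier
ORDER BY failed_requests DESC;

-- Iceberg's time travel (Spark SQL syntax) lets you re-read the same
-- table exactly as it looked at a past moment, e.g. mid-incident:
SELECT count(*) FROM otel.spans TIMESTAMP AS OF '2024-06-01 14:30:00';
```

Because Iceberg is a neutral table layer, the same query could run unchanged on Trino or another compatible engine; only the session configuration pointing at the catalog would differ.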