Over the past few years, almost all data processing has moved from batch to stream processing. This shift isn’t driven simply by a desire for lower latency, but by a fundamental recognition that streams are a more effective primitive for data processing, offering a better impedance match to the varied downstream systems and services they feed. Splunk, like many others, has been evolving its core data infrastructure to provide a simpler and more consistent programming model, improve data correctness and latency, and enable a more open integration model for our data platform. Throughout this process, we’ve come to view Apache Flink as a critical backbone of our core data infrastructure. Join us to learn how our data infrastructure - and how we think about it - has fundamentally changed.