
Using Ingest Pipelines to Enhance Elastic Observability Data


This article focuses on how to put ingest pipelines to real use: enhancing observability data on the Elastic Stack.
In a previous article, I wrote about distributed tracing and how it can be implemented easily on the Elastic Stack. I have used many observability platforms, including New Relic, Splunk, and Datadog. All of them are very powerful platforms with everything you need to implement full-stack observability for your applications.
Elastic is generally used for fast and powerful content search and for log aggregation and analysis, but it has recently gained popularity for full-stack observability as well. It offers pretty much every feature you would want in an observability platform: support for applications built on modern tech stacks, tracing/logging/metrics, powerful out-of-the-box agents, visualizations and dashboards, alerting, AI-based anomaly detection, and more.
In my opinion, Elastic observability has several key advantages over these platforms. In this article, we will focus on one of its features: ingest pipelines.
Ingest pipelines are pre-indexing hooks provided by Elastic to perform transformations on your incoming documents. Once you create a pipeline and configure it for incoming documents, every document passes through the pipeline before it is indexed. Typical transformations include adding, removing, or renaming fields; parsing unstructured log lines with the grok processor; converting field data types; and enriching documents with information such as GeoIP data.
For the full list of processors that can be used in ingest pipelines, refer to the Elastic processor reference.
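As a minimal illustration (the pipeline name and field values below are hypothetical), the following request creates a pipeline with two common processors: set, which adds a constant field, and lowercase, which normalizes a string field:

```
PUT _ingest/pipeline/my-example-pipeline
{
  "description": "Adds an environment tag and normalizes the service name",
  "processors": [
    {
      "set": {
        "field": "environment",
        "value": "production"
      }
    },
    {
      "lowercase": {
        "field": "service.name",
        "ignore_missing": true
      }
    }
  ]
}
```

Any document indexed through this pipeline comes out with an environment field set to production and its service.name lowercased.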
Let’s put ingest pipelines to some real use. We will create an ingest pipeline to enhance distributed trace data by calculating and adding an Apdex score for all transactions.
Pipelines can be created from the Kibana UI or through an API call.
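As a sketch of the API route (the pipeline name, the 500 ms threshold, and the apdex_zone/apdex_score field names are my own assumptions, not fixed conventions), the Apdex pipeline could be created with a script processor that classifies each APM transaction by its duration, which Elastic APM records in transaction.duration.us:

```
PUT _ingest/pipeline/apdex-score
{
  "description": "Classify each APM transaction into an Apdex zone and score it",
  "processors": [
    {
      "script": {
        "if": "ctx.transaction?.duration?.us != null",
        "lang": "painless",
        "source": """
          // Apdex threshold T in microseconds (assumption: 500 ms)
          long t = 500000L;
          def d = ctx.transaction.duration.us;
          if (d <= t) {
            // satisfied: response time within T
            ctx.apdex_zone = 'satisfied';
            ctx.apdex_score = 1.0;
          } else if (d <= 4 * t) {
            // tolerating: response time between T and 4T
            ctx.apdex_zone = 'tolerating';
            ctx.apdex_score = 0.5;
          } else {
            // frustrated: response time above 4T
            ctx.apdex_zone = 'frustrated';
            ctx.apdex_score = 0.0;
          }
        """
      }
    }
  ]
}
```

Scoring each document 1, 0.5, or 0 means the aggregate Apdex ratio is simply the average of apdex_score over any set of transactions. Before wiring the pipeline up (for example, as the default_pipeline index setting of the target data stream), you can dry-run it against a sample document with POST _ingest/pipeline/apdex-score/_simulate.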
