📈 Monitoring a Camel application with Prometheus
Introduction
Why monitoring?
In software engineering, monitoring plays an important role in fault detection, performance improvement, application health, resource planning, and, based on all of the above, the continuous improvement of the application. When an application runs in a live environment, observability is essential to understand how it behaves and to be able to react quickly.
How can we do it?
We have several ways and tools at our disposal that we can use to monitor an application:
- Logging
- Collect metrics from the application
- Alerts
- Visualization of overall health and performance of the application
There are tools for all tastes and styles covering all the topics above. In this article I’ll focus on the last three, and we’re going to use Prometheus and Grafana to accomplish this.
What is Prometheus?
Prometheus is an open-source tool written in Go and created at SoundCloud to improve the reliability of their services, which brings us back to the same topic: observability. Prometheus has a built-in time series database, a powerful query language (PromQL), alerts and an alert manager, and a pull-based metric collection mechanism.
If you are interested in the story of Prometheus, you can check Prometheus: The Documentary.
What is Grafana?
Grafana is an open-source analytics and visualization platform designed for monitoring and observability. Users can query and visualize metrics and create dashboards and alerts. It connects to various data sources, but it’s a perfect match for Prometheus.
What is Camel?
Camel is an open-source Java integration framework that enables you to easily integrate various systems consuming or producing data. I’m not going deep into Camel, as I believe that if you are reading this you already know Apache Camel. In this article, Camel runs inside a Spring Boot application to take advantage of all the benefits of developing with Spring Boot.
Hands-on
Spring Boot Actuator is a set of features provided to monitor and manage applications. It includes built-in endpoints that allow you to monitor and interact with the application at runtime. It collects and exposes various metrics about the application’s performance and behavior, such as memory usage, CPU usage, garbage collection statistics, HTTP request metrics, etc.
Metrics endpoint
To use Spring Boot Actuator, add the following to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
or to your build.gradle:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
To have the HTTP endpoints active you also need to add to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
or to your build.gradle:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
}
By default, Spring Boot Actuator will have the following endpoints active under /actuator:
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/actuator",
      "templated": false
    },
    "health": {
      "href": "http://localhost:8080/actuator/health",
      "templated": false
    },
    "health-path": {
      "href": "http://localhost:8080/actuator/health/{*path}",
      "templated": true
    }
  }
}
If you want to expose other endpoints you need to declare them in your application.properties, e.g.:
management.endpoints.web.exposure.include=health,metrics
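Calling /actuator again, the listing will now also include the metrics endpoints: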
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/actuator",
      "templated": false
    },
    "health": {
      "href": "http://localhost:8080/actuator/health",
      "templated": false
    },
    "health-path": {
      "href": "http://localhost:8080/actuator/health/{*path}",
      "templated": true
    },
    "metrics-requiredMetricName": {
      "href": "http://localhost:8080/actuator/metrics/{requiredMetricName}",
      "templated": true
    },
    "metrics": {
      "href": "http://localhost:8080/actuator/metrics",
      "templated": false
    }
  }
}
You can read more about it in the Spring Boot Actuator documentation.
At /actuator/metrics you can see all the metrics available, and you can inspect each one at /actuator/metrics/{requiredMetricName}, e.g. for /actuator/metrics/jvm.classes.loaded we’ll have a response similar to this:
{
  "name": "jvm.classes.loaded",
  "description": "The number of classes that are currently loaded in the Java virtual machine",
  "baseUnit": "classes",
  "measurements": [
    {
      "statistic": "VALUE",
      "value": 9611
    }
  ],
  "availableTags": []
}
You can also customize the base path; instead of /actuator, I prefer /monitoring:
management.endpoints.web.base-path=/monitoring
But Prometheus doesn’t use this format to collect metrics, so we need to add the micrometer-registry-prometheus dependency to have an endpoint compatible with Prometheus.
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
and also add to your application.properties:
management.endpoints.web.exposure.include=health,metrics,prometheus
So, calling http://localhost:8080/monitoring/prometheus in our browser, we’ll get:
# HELP http_server_requests_active_seconds_max
# TYPE http_server_requests_active_seconds_max gauge
http_server_requests_active_seconds_max{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.027983583
# HELP http_server_requests_active_seconds
# TYPE http_server_requests_active_seconds summary
http_server_requests_active_seconds_active_count{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 1.0
http_server_requests_active_seconds_duration_sum{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.02796725
Right now our application is ready to integrate with Prometheus, so let’s do it.
Prometheus Configuration
So let’s create the Prometheus configuration file, the prometheus.yml.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9090']
  - job_name: 'my-app'
    metrics_path: '/monitoring/prometheus'
    static_configs:
      - targets: ['my-app:8080']
We have defined the interval at which Prometheus will collect metrics, in the example above every 15 seconds. Then we have defined 2 jobs:
- one for Prometheus itself;
- one for our application, where we’ve defined the endpoint to collect the metrics (/monitoring/prometheus). Note that we’ve set the metrics_path only for my-app and not for prometheus, because prometheus uses the default endpoint /metrics.
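To run the application and Prometheus together, we can describe both services in a docker-compose.yml: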
version: '3.8'
networks:
  metrics:
    name: metrics
services:
  my-app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: my-app
    ports:
      - "8080:8080"
    networks:
      - metrics
    volumes:
      - ./files:/usr/app/files
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/config:/etc/prometheus
      - ./prometheus/database:/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
    networks:
      - metrics
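With this in place, docker compose up starts both containers; the Prometheus UI should then be reachable at http://localhost:9090, where you can confirm that the my-app target is being scraped (Status > Targets).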
Camel Metrics
To collect Camel metrics we’ll need to tweak the CamelContext, so we’ll provide:
- a route policy factory that creates Micrometer route policies for monitoring Camel routes;
- a message history factory that creates Micrometer message history for monitoring message flows in Camel routes.
As an example:
import org.apache.camel.CamelContext;
import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory;
import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory;
import org.apache.camel.spring.boot.CamelContextConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CamelConfiguration {

    @Bean
    public CamelContextConfiguration camelContextConfiguration() {
        return new CamelContextConfiguration() {
            @Override
            public void beforeApplicationStart(CamelContext camelContext) {
                // register the Micrometer factories before the context starts
                camelContext.addRoutePolicyFactory(new MicrometerRoutePolicyFactory());
                camelContext.setMessageHistoryFactory(new MicrometerMessageHistoryFactory());
            }

            @Override
            public void afterApplicationStart(CamelContext camelContext) {
                // nothing to do after start
            }
        };
    }
}
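Note that MicrometerRoutePolicyFactory and MicrometerMessageHistoryFactory come from Camel’s Micrometer component; if it isn’t already on your classpath, you’ll likely also need its Spring Boot starter (coordinates below assume the Camel Spring Boot starters are in use):
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-micrometer-starter</artifactId>
</dependency>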
Here are some of the metrics available by default:
{
  "names": [
    "camel.exchanges.external.redeliveries",
    "camel.exchanges.failed",
    "camel.exchanges.failures.handled",
    "camel.exchanges.succeeded",
    "camel.exchanges.total",
    "camel.routes.added",
    "camel.routes.reloaded",
    "camel.routes.running",
    ...
  ]
}
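Once Prometheus is scraping the application, these metrics can be queried with PromQL. As a rough example (assuming Micrometer’s default Prometheus naming, where dots become underscores and counters get a _total suffix), the rate of successfully processed exchanges over the last five minutes would look something like:
rate(camel_exchanges_succeeded_total[5m])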
Metrics
Prometheus has 4 types of metrics:
- Counter
- Gauge
- Histogram
- Summary
To understand more about these types of metrics, please refer to the Prometheus documentation.
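As a rough illustration of how these Prometheus types map to Micrometer meters in a Spring Boot application, here is a minimal sketch (the meter names and the queue object are made up for the example; Micrometer’s Timer and DistributionSummary are typically published to Prometheus as summaries unless percentile histograms are enabled):
import java.util.concurrent.ConcurrentLinkedQueue;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Component;

@Component
public class MyMeters {

    public MyMeters(MeterRegistry registry) {
        // Counter: a value that only goes up, e.g. number of processed files
        Counter processedFiles = Counter.builder("app.files.processed")
                .register(registry);
        processedFiles.increment();

        // Gauge: a value that can go up and down, e.g. current queue size
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        Gauge.builder("app.queue.size", queue, ConcurrentLinkedQueue::size)
                .register(registry);

        // Timer: durations of an operation
        Timer timer = Timer.builder("app.file.processing.time")
                .register(registry);
        timer.record(() -> { /* do some work */ });

        // DistributionSummary: distribution of arbitrary values, e.g. file sizes
        DistributionSummary sizes = DistributionSummary.builder("app.file.size")
                .register(registry);
        sizes.record(1024);
    }
}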
Create custom metrics
You can create your own metrics, either directly in the Camel route or in code.
from("file://somecrazydirectory")
.process(processor)
.to("micrometer:counter:app.crazy.metric?increment=1")
.to("mock:result");
@Autowired
private MeterRegistry meterRegistry;
...
var counter = Counter.builder("app.crazy.metric")
        .description("My Crazy Metric")
        .register(meterRegistry);
counter.increment();
...
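Either way, the counter ends up in the same Micrometer registry, so it should show up on the /monitoring/prometheus endpoint (following the naming convention mentioned earlier, as app_crazy_metric_total) and can be queried and graphed just like the built-in metrics.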