Phoenix and Telemetry Integration

As companies started using Phoenix in production, there was a growing need to get data out of their Phoenix applications. Although the Erlang VM provides a huge amount of insight into the running system, developers had to reimplement the metrics-collecting infrastructure over and over again, as well as the integrations with their preferred metrics platforms.

Other teams would choose Application Performance Monitoring tools, such as AppSignal,[43] Scout App,[44] New Relic,[45] and others. However, when it came to tracking app-specific data, each of those tools has its own API that you would need to learn.

The Elixir community decided to tackle this challenge by implementing the Telemetry toolset. With Telemetry, developers have a unified API for dispatching metrics and instrumentation.[46] Telemetry also provides a mechanism for collecting built-in VM metrics[47] and a shared vocabulary for consuming and reporting those metrics.[48]

You might wonder what all of this means for Phoenix developers. In future Phoenix versions, we will probably have a new lib/rumbl_web/telemetry.ex file that outlines all the metrics you may want to extract from your system and how they should be reported. At the moment, we don't have all the details in place, but the file may look like this:

 1: defmodule RumblWeb.Telemetry do
      use Supervisor
      import Telemetry.Metrics

 5:   def start_link(arg) do
        Supervisor.start_link(__MODULE__, arg, name: __MODULE__)
      end

      def init(_arg) do
10:     children = [
          {:telemetry_poller,
           measurements: periodic_measurements(),
           period: 10_000},
          {Telemetry.StatsD, metrics: metrics()}
15:     ]

        Supervisor.init(children, strategy: :one_for_one)
      end

20:   defp metrics do
        [
          # VM Metrics
          last_value("vm.memory.total", unit: :byte),
          last_value("vm.total_run_queue_lengths.total"),
25:       last_value("vm.total_run_queue_lengths.cpu"),
          last_value("vm.total_run_queue_lengths.io"),
          last_value("rumbl.worker.memory", unit: :byte),
          last_value("rumbl.worker.message_queue_len"),

30:       # Database Time Metrics
          summary("rumbl.repo.query.total_time", unit: {:native, :millisecond}),
          summary("rumbl.repo.query.decode_time", unit: {:native, :millisecond}),
          summary("rumbl.repo.query.query_time", unit: {:native, :millisecond}),
          summary("rumbl.repo.query.queue_time", unit: {:native, :millisecond}),

          # Phoenix Time Metrics
          summary("phoenix.endpoint.stop.duration",
            unit: {:native, :millisecond}),
          summary(
40:         "phoenix.route_dispatch.stop.duration",
            unit: {:native, :millisecond},
            tags: [:plug]
          )
        ]
45:   end

      defp periodic_measurements do
        [
          {:process_info,
50:        event: [:rumbl, :worker],
           name: Rumbl.Worker,
           keys: [:message_queue_len, :memory]}
        ]
      end
55: end

The new file starts by defining a supervisor with two children. The first is a :telemetry_poller child that executes a list of measurements every 10 seconds. The second is a StatsD[49] reporter on line 14. Here StatsD is just one example of a metric aggregation tool; you may add other reporters or replace it altogether. As the community grows and more companies rely on Telemetry, we expect integrations with many other reporters to appear.

You can also see two private functions in the module; the supervisor invokes them from init when building its children. In the metrics function, we list all of the metrics we want the reporter to publish. These include VM metrics, such as total memory usage and the run queue lengths, which show how busy the machine is with CPU and I/O work. Then we list time measurements for database queries and Phoenix operations.
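For a sense of where these measurements come from, the VM metrics map onto functions the Erlang runtime already exposes, and the `{:native, :millisecond}` unit tells the reporter to convert raw native-unit durations into milliseconds. A minimal sketch using only the standard library, with no Telemetry dependency involved:

```elixir
# Raw sources of the VM metrics listed above, straight from the runtime.
total_memory = :erlang.memory(:total)
# Run queue length: how many processes are waiting to be scheduled.
run_queue_total = :erlang.statistics(:total_run_queue_lengths)

# Query and endpoint durations arrive in native time units;
# {:native, :millisecond} asks the reporter to convert them like this:
t0 = System.monotonic_time()
Enum.sum(1..1_000_000)
duration_native = System.monotonic_time() - t0
duration_ms = System.convert_time_unit(duration_native, :native, :millisecond)

IO.inspect({total_memory, run_queue_total, duration_ms})
```

The reporters take care of this plumbing for you; the sketch only shows the raw data the metric names refer to.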

Finally, in the periodic_measurements function, we define the measurements the poller takes on every interval. For example, you can collect information such as memory usage and message queue length from any named process in the system. Other custom measurements, such as ets_info, may be available in the future. This could be useful for tracking memory usage per table, such as the ETS table used by the InfoSys system.
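Under the hood, a :process_info measurement like the one above boils down to calling Process.info/2 on the named process with the requested keys. A stdlib-only sketch of that measurement, using a throwaway process in place of Rumbl.Worker:

```elixir
# Stand-in for Rumbl.Worker: a named, idle process we can measure.
pid = spawn(fn -> Process.sleep(:infinity) end)
Process.register(pid, :demo_worker)

# Process.info/2 returns a keyword list with the requested keys,
# in the order they were asked for.
[message_queue_len: len, memory: mem] =
  :demo_worker
  |> Process.whereis()
  |> Process.info([:message_queue_len, :memory])

IO.inspect({len, mem})
```

The poller does exactly this on each period and then emits the results as a Telemetry event under the configured event name.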

Though the final API may change a little or a lot, the goal of integrating Phoenix and Telemetry remains clear: help developers, teams, and companies extract the maximum amount of insight possible from their production systems.
