Performance metrics with Apollo Engine

When your application is live and heavily used, you can't manually check the status of every feature; the workload would be impossible. Apollo Engine can tell you how your GraphQL API is performing by collecting statistics with each request it receives. You always have an overview of your application's general usage: the number of requests it receives, the request latency, the time taken to process each operation and operation type, and even the time spent resolving each returned field. Apollo Server can provide such precise analytics because each field is backed by a resolver function; the time taken to resolve each field is collected and stored in Apollo Engine.
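As a minimal sketch of how this reporting is switched on, the following assumes an Apollo Server 2 setup and an Engine API key stored in an environment variable (the variable name `ENGINE_API_KEY` and the tiny schema are illustrative assumptions, not taken from this chapter's application):

```javascript
// Sketch: enabling Apollo Engine metrics reporting in Apollo Server 2.
// The schema and the ENGINE_API_KEY variable name are assumptions for
// illustration; substitute your own schema and service API key.
const { ApolloServer, gql } = require('apollo-server');

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Passing an API key makes the server report per-request and
  // per-resolver timings to Apollo Engine.
  engine: {
    apiKey: process.env.ENGINE_API_KEY,
  },
});

server.listen().then(({ url }) => console.log(`Server ready at ${url}`));
```

With the key in place, every incoming operation is traced and the field-level timings appear in the Engine dashboard described next.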

At the top of the Metrics page, you have four tabs. The first tab will look as follows:

If your GraphQL API has been running for more than a day, you'll see an overview like the one here. The left-hand graph shows the request rate over the last day. The middle graph shows the service time, which sums the processing time of all requests. The right-hand graph shows the number of errors, along with the queries that caused them.

Under the overview, you'll find details about the current day, including the requests per minute, the request latency over time, and the request latency distribution:

  • Requests Per Minute (rpm) is useful when your API is used very often. It indicates which requests are sent more often than others.
  • The latency over time is useful when the requests to your API take too long to process. You can use this information to look for a correlation between the number of requests and increasing latency.
  • The request-latency distribution shows the processing time against the number of requests, letting you compare how many requests are slow versus fast.
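To make these charts concrete, here is a small sketch (plain Node.js, using hypothetical sample data; Engine computes these figures for you server-side) that derives requests per minute and a latency histogram from raw request timings:

```javascript
// Sketch: deriving per-day chart data from raw request timings.
// The sample data below is hypothetical, purely for illustration.
const requests = [
  { timestamp: 0, latencyMs: 40 },   // timestamps in seconds
  { timestamp: 20, latencyMs: 55 },
  { timestamp: 70, latencyMs: 210 },
  { timestamp: 130, latencyMs: 90 },
];

// Requests per minute: bucket timestamps into 60-second windows.
const rpm = {};
for (const { timestamp } of requests) {
  const minute = Math.floor(timestamp / 60);
  rpm[minute] = (rpm[minute] || 0) + 1;
}

// Latency distribution: count requests per 100 ms bucket.
const distribution = {};
for (const { latencyMs } of requests) {
  const bucket = Math.floor(latencyMs / 100) * 100;
  distribution[bucket] = (distribution[bucket] || 0) + 1;
}

console.log(rpm);          // e.g. { '0': 2, '1': 1, '2': 1 }
console.log(distribution); // e.g. { '0': 3, '200': 1 }
```

The one request in the 200 ms bucket is exactly the kind of outlier the latency-distribution chart makes visible at a glance.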

In the right-hand panel of Apollo Engine, under Metrics, you'll see all your GraphQL operations. If you select one of these, you can get even more detailed statistics.

Now, switch to the Traces tab at the top. The first chart on this page looks as follows:

The latency distribution chart shows all the different latencies for the currently selected operation, including the number of requests sent with each latency. In the preceding example, I used the postsFeed query.

Each request latency has its own execution timetable. You can see it by clicking on any column in the preceding chart. The table should look like the following screenshot:

The execution timetable is a large, foldable tree. It starts at the top with the root query, postsFeed in this case, and also shows the overall time taken to process the operation. Each resolver function has its own latency, which can include, for example, the time taken to query each post and user from the database. All the times within the tree add up to a total of about 90 milliseconds.
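As a rough sketch of how such a timetable could be totaled, consider the walk below. The tree shape, names, and numbers are made up for illustration; Apollo Engine's real traces use their own wire format:

```javascript
// Sketch: summing resolver durations in a hypothetical trace tree.
// Node names and durations are invented; they only illustrate the
// idea of walking the foldable tree and adding up resolver times.
const trace = {
  name: 'postsFeed',
  durationMs: 5,
  children: [
    {
      name: 'posts',
      durationMs: 60,
      children: [
        { name: 'posts.0.user', durationMs: 12, children: [] },
        { name: 'posts.1.user', durationMs: 13, children: [] },
      ],
    },
  ],
};

// Recursively add each node's own duration to its children's totals.
function totalTime(node) {
  return node.durationMs + node.children.reduce(
    (sum, child) => sum + totalTime(child), 0);
}

console.log(totalTime(trace)); // 90
```

In this made-up example the resolver times sum to 90 ms, mirroring the total shown in the screenshot's timetable.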

You should regularly check all operations and their latencies to identify performance bottlenecks; your users should always have responsive access to your API. Apollo Engine makes this easy to monitor.

Next, we'll see how Apollo Engine implements error tracking.
