14

Google Analytics and Advanced Cloud Ops

You have designed, developed, and deployed a world-class web application; however, that is only the beginning of your app's story. The web is an ever-evolving, living, breathing environment that demands ongoing attention if your app is to keep succeeding as a business. In Chapter 13, Highly Available Cloud Infrastructure on AWS, we went over the basic concepts and costs of owning a cloud infrastructure.

In this chapter, we will dig deeper into understanding how users actually use our application with Google Analytics. We will then use that information to create realistic load tests that simulate actual user behavior, so we can understand the true capacity of a single instance of our server. Knowing the capacity of a single server, we can fine-tune how our infrastructure scales out to reduce waste, and we will discuss the implications of various scaling strategies.

Finally, we will go over advanced analytics concepts, such as custom events, to gain more granular tracking and understanding of user behavior.

In this chapter, you will learn about the following topics:

  • Collecting analytics
  • Budgeting and scaling
  • Advanced load testing to predict capacity
  • Reliable cloud scaling
  • Measuring actual use with custom analytics events

Throughout the chapter, you will be setting up Google Analytics, Google Tag Manager, and OctoPerf accounts.

The most up-to-date versions of the sample code for this book can be found on GitHub at the following repository link. The repository contains the final and completed state of the code. You can verify your progress at the end of this chapter by looking for the end-of-chapter snapshot of code under the projects folder.

In the case of Chapter 14:

  1. Clone the repo: https://github.com/duluca/lemon-mart.

    Execute npm install on the root folder to install dependencies.

  2. The code sample for this chapter can be found under the following subfolder:
    projects/ch14
    
  3. To run the Angular app for this chapter, execute the following command:
    npx ng serve ch14
    
  4. To run Angular unit tests for this chapter, execute the following command:
    npx ng test ch14 --watch=false
    
  5. To run Angular e2e tests for this chapter, execute the following command:
    npx ng e2e ch14
    
  6. To build a production-ready Angular app for this chapter, execute the following command:
    npx ng build ch14 --prod
    

    Note that the dist/ch14 folder at the root of the repository will contain the compiled result.

    Beware that the source code in the book or on GitHub may not always match the code generated by the Angular CLI. There may also be slight differences in implementation between the code in the book and what's on GitHub because the ecosystem is ever-evolving. It is natural for the sample code to change over time. Also, on GitHub, expect to find corrections, fixes to support newer versions of libraries, or side-by-side implementations of multiple techniques for the reader to observe. The reader is only expected to implement the ideal solution recommended in the book. If you find errors or have questions, please create an issue or submit a pull request on GitHub for the benefit of all readers.

    You can read more about updating Angular in Appendix C, Keeping Angular and Tools Evergreen. You can find this appendix online from https://static.packt-cdn.com/downloads/9781838648800_Appendix_C_Keeping_Angular_and_Tools_Evergreen.pdf or at https://expertlysimple.io/stay-evergreen.

Let's begin by covering the basics of web analytics.

Collecting analytics

Now that our site is up and running, we need to start collecting metrics to understand how it is being used. Metrics are key to operating a web application.

Google Analytics has many facets. The main three are as follows:

  1. Acquisition, which measures how visitors arrive at your website
  2. Behavior, which measures how visitors interact with your website
  3. Conversions, which measure how visitors completed various goals on your website

Here's a look at the BEHAVIOR | Overview page from my website https://thejavascriptpromise.com/:

Figure 14.1: Google Analytics behavior overview

https://thejavascriptpromise.com/ is a simple single-page HTML site, hence the metrics are quite simple. Let's go over the various metrics on the screen:

  1. Pageviews shows the total number of times pages on the site were viewed.
  2. Unique Pageviews shows the number of views by distinct visitors, which approximates the number of unique visitors.
  3. Avg. Time on Page shows the average amount of time each user spent on the page.
  4. Bounce Rate shows the percentage of users who left the site without navigating to a subpage or interacting with it in any manner, such as clicking on a link or a button with a custom event.
  5. % Exit indicates how often users leave the site after viewing a particular page or set of pages.

At a high level, in 2017, the site had about 1,090 unique visitors and, on average, each visitor spent about 2.5 minutes (157 seconds) on the site. Given that this is just a single-page site, the bounce rate and % exit metrics do not apply in any meaningful manner. Later, we will use the number of unique visitors to calculate the per-user cost.

As a point of comparison, the LemonMart app from the book served 162,396 lemons between April 2018 and April 2020:

Figure 14.2: LemonMart behavior overview

In addition to page views, Google Analytics can also capture specific events, such as clicking on a button that triggers a server request. These events can then be viewed on the Events | Overview page, as shown:

Figure 14.3: Google Analytics Events Overview

It is possible to capture metrics on the server side as well, but this will give requests-over-time statistics. You will require additional code and state management to track the behavior of a particular user so that you can calculate users-over-time statistics. By implementing such tracking on the client side with Google Analytics, you gain a far more detailed understanding as to where the user came from, what they did, whether they succeeded or not, and when they left your app without adding unnecessary code complexity and infrastructure load to your backend.

Adding Google Tag Manager to your Angular app

Let's start capturing analytics in your Angular app. Google is in the process of phasing out the legacy ga.js and analytics.js products that shipped with Google Analytics, replacing them with the new, more flexible global site tag, gtag.js, which ships with Google Tag Manager. This is by no means an end to Google Analytics; instead, it represents a shift toward an analytics tool that is easier to configure and manage. The global site tag can be configured and managed remotely via Google Tag Manager. Tags are snippets of JavaScript tracking code that are delivered to the client, and they can enable tracking of new metrics and integration with multiple analytics tools without having to change code that has already been deployed. You can still continue to use Google Analytics to analyze and view your analytics data. Another major advantage of Google Tag Manager is that it is version controlled, meaning that you can experiment with different kinds of tags triggered under various conditions without fear of doing irreversible damage to your analytics configuration.

Setting up Google Tag Manager

Let's begin by setting up a Google Tag Manager account for your application:

  1. Sign in to Google Tag Manager at https://tagmanager.google.com/.
  2. Add a new account with a Web container, as follows:

    Figure 14.4: Google Tag Manager

  3. Paste the generated scripts at or near the top of the <head> and <body> sections of your index.html file, as instructed on the website:
    src/index.html
    <head>
    <!-- Google Tag Manager -->
    <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
    })(window,document,'script','dataLayer','GTM-56D4F6K');</script>
    <!-- End Google Tag Manager -->
    ...
    </head>
    <body>
    <!-- Google Tag Manager (noscript) -->
    <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-56D4F6K" height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
    <!-- End Google Tag Manager (noscript) -->
    <app-root></app-root>
    </body>
    

    Note that the <noscript> tag will only execute if the user has disabled JavaScript execution in their browser. This way, we can collect metrics from such users, rather than being blind to their presence.

  4. Submit and publish your tag manager container.
  5. You should see the initial setup of your tag manager completed, as shown in the following screenshot:

    Figure 14.5: Published tag

  6. Verify that your Angular app runs without any errors.

Note that if you don't publish your tag manager container, you will see a 404 error when loading gtm.js in the dev console or the Network tab.

Setting up Google Analytics

Now, let's generate a tracking ID through Google Analytics. This is a universally unique identifier for your app to correlate your analytics data:

  1. Log in to Google Analytics at https://analytics.google.com.
  2. Open the Admin console, using the gear icon in the bottom-left corner of the screen, as shown in the following screenshot:

    Figure 14.6: Google Analytics admin console

  3. Create a new analytics account.
  4. Using the steps from the preceding screenshot as a guide, perform the following steps:
    1. Add a new Property, LemonMart.
    2. Configure the property to your preferences.
    3. Click on Tracking Code.
    4. Copy the Tracking ID that starts with UA-xxxxxxxxxx-1.
    5. Ignore the gtag.js code provided.

With your tracking ID on hand, we can configure Google Tag Manager so that it can collect analytics.

Configuring Google Analytics Tag in Tag Manager

Now, let's connect our Google Analytics ID to Google Tag Manager:

  1. At https://tagmanager.google.com, open the Workspace tab.
  2. Click on Add a new tag.
  3. Name it Google Analytics.
  4. Click on Tag Configuration and select Universal Analytics.
  5. Under Google Analytics Settings, add a new variable.
  6. Paste the tracking ID you copied in the previous section.
  7. Click on Triggers and add the All Pages trigger.
  8. Click on Save, as shown in the following screenshot:

    Figure 14.7: Creating a Google Analytics tag

  9. Submit and publish your changes, and observe the version summary with one tag, as shown:

    Figure 14.8: Version Summary showing one tag

  10. Now refresh your Angular app, where you'll be on the /home route.
  11. In a private window, open a new instance of your Angular app and navigate to the /manager/home route.
  12. At https://analytics.google.com/, open the REAL-TIME | Overview pane, as shown in the following screenshot:

    Figure 14.9: Google Analytics REAL-TIME Overview

  13. Note that the two active users are being tracked.
  14. Under Top Active Pages, you should see the pages that the users are on.

By leveraging Google Tag Manager and Google Analytics together, we have been able to accomplish page tracking without changing any code inside our Angular app.

Search engine optimization (SEO) is an important part of analytics. To gain a better understanding of how crawlers perceive your Angular site, use the Google Search Console dashboard, found at https://www.google.com/webmasters/tools, to identify optimizations. In addition, consider using Angular Universal to render certain dynamic content on the server-side so that crawlers can index your dynamic data sources and drive more traffic to your site.

Budgeting and scaling

In the AWS Billing section of Chapter 13, Highly Available Cloud Infrastructure on AWS, we covered the monthly costs of operating a web server, ranging from $5/month to $45/month, from a single-server instance scenario to a highly available infrastructure. For most needs, budgeting discussions will begin and end with this monthly number. You can execute load tests, as suggested in the Advanced load testing section, to predict your per-server user capacity and get a general idea of how many servers you may require. In a dynamically scaling cloud environment with dozens of servers running 24/7, this is an overly simplistic way to calculate a budget.

If you operate a web property of any significant scale, things invariably get complicated. You will be operating multiple servers on different tech stacks, serving different purposes. It can be difficult to gauge or justify how much budget to set aside for seemingly excess capacity or unnecessarily high-performance servers. You need to be able to communicate the efficiency of your infrastructure given the number of users you serve, and ensure that your infrastructure is fine-tuned so that you don't lose users to an unresponsive application or overpay by provisioning more capacity than you require.

For this reason, we will take a user-centered approach and translate our IT infrastructure costs to a per-user cost metric that the business and the marketing side of your organization can make sense of.

In the next section, we will investigate what it means to calculate the per-user cost of your infrastructure and how these calculations change when cloud scaling comes into play, using one of my websites as an example.

Calculating the per-user cost

We will be leveraging behavior metrics from Google Analytics with the goal of calculating the per-user cost over a given period of time.

Per-user cost is calculated as follows:

perUserCost = totalInfrastructureCost / numberOfUniqueUsers (with both values covering the same period of time)

Using the https://thejavascriptpromise.com/ data from earlier, let's plug the data into the formula to calculate the per-user cost.

This website is deployed on an Ubuntu server on DigitalOcean, so the monthly infrastructure cost, including weekly backups, is $6 a month. From Google Analytics, we know there were 1,090 unique visitors in 2017:

perUserCost = ($6/month × 12 months) / 1,090 unique visitors ≈ $0.07/user

In 2017, I paid about 7 cents per user. Money well spent? At $6/month, I don't mind it. In 2017, https://thejavascriptpromise.com/ was deployed on a traditional server setup as a static site that doesn't scale out or in. These conditions make it very straightforward to use the unique visitor metric and find the per-user cost. The very same simplicity that allows for an easy calculation also leads to a suboptimal infrastructure. If I were to serve 1,000,000 users on the same infrastructure, my costs would add up to roughly $70,000 a year. If I were to earn $100 for every 1,000 users through Google ads, my site would make $100,000 per year. After taxes, development expenses, and such unreasonable hosting expenses, the operation would likely lose money.
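
To make the arithmetic concrete, here is a minimal TypeScript sketch of the same calculation; the function and its inputs are illustrative only and are not part of the LemonMart code base:

    // Per-user cost for a fixed, non-scaling infrastructure (illustrative only)
    function perUserCostPerYear(
      monthlyCost: number, // infrastructure cost in dollars per month
      uniqueVisitorsPerYear: number // unique visitors from Google Analytics
    ): number {
      return (monthlyCost * 12) / uniqueVisitorsPerYear
    }

    // $6/month and 1,090 unique visitors in 2017 works out to about 7 cents per user
    console.log(perUserCostPerYear(6, 1090).toFixed(2)) // '0.07'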

If you were to take advantage of cloud scaling, where instances can scale out or in dynamically based on current user demand, the preceding formula becomes useless pretty quickly because you must take provisioning time and target server utilization into account.

Provisioning time is the amount of time it takes your cloud provider to start a new server from scratch. Target server utilization is the maximum usage metric of a given server, where a scale-out alert must be sent out so that a new server is ready before your current servers max out their capacity. In order to calculate these variables, we must execute a series of load tests against our servers.

Page views are an overly simplistic way to determine user behavior in single-page applications (SPAs) such as Angular apps, where a page view does not necessarily correlate to a request, or vice versa. If we execute load tests based simply on page views, we won't get a realistic simulation of how the platform may perform under load.

User behavior, or how users actually use your app, can drastically impact your performance forecasts and wildly fluctuate budget numbers. You can use Google Analytics custom events to capture complicated sets of actions that result in various types of requests served by your platform. Later in this chapter, we will explore how you can measure actual use in the Measuring actual use section.

Initially, you won't have any of the aforementioned metrics, and any metrics you may have will be invalidated any time you make a meaningful change to your software or hardware stack. Therefore, it is imperative to execute load tests on a regular basis to simulate realistic user loads.

Advanced load testing

In order to be able to predict capacity, we need to run load tests. In Chapter 13, Highly Available Cloud Infrastructure on AWS, I discussed a simple load testing technique of just sending a bunch of web requests to a server. In a relative comparison scenario, this works fine for testing raw power. However, actual users generate dozens of requests at varying intervals while they navigate your website, resulting in a wide variety of API calls to your backend server.

We must be able to model virtual users and unleash a whole bunch of them on our servers to find their breaking point. OctoPerf, located at https://octoperf.com, is an easy-to-use service for executing such load tests. OctoPerf offers a free tier that allows for 50 concurrent users per test over unlimited test runs with two load generators.

OctoPerf is an ideal tool for getting started quickly with advanced load testing. Let's create an account and see what it can do for us:

  1. Create an OctoPerf account.
  2. Log in and add a new project for LemonMart, as shown:

    Figure 14.10: Adding a project in OctoPerf

    OctoPerf allows you to create multiple virtual users with different usage characteristics. Since it is a URL-based setup, any click-based user action can also be simulated by directly calling the application server URL with test parameters.

  3. Create two virtual users, one as a Manager who navigates to manager-based pages, and a second as a POS user who sticks to POS functions.
  4. Click on Create scenario:

    Figure 14.11: POS User scenario

  5. Name the scenario Evening Rush.
  6. You can add a mixture of Manager and POS User types, as shown:

    Figure 14.12: Evening Rush scenario

  7. Click on the Launch 50 VUs button to start the load test.

    You can observe the number of users and hits/sec being achieved in real-time, as shown in the following screenshot:

    Figure 14.13: Evening Rush load test underway

  8. ECS service metrics also give us a high-level idea of real-time utilization, as shown in the following screenshot:

    Figure 14.14: ECS real-time metrics

  9. Analyze the load test results.

You can get more accurate results from ECS by clicking on the CPUUtilization link from ECS Service Metrics or by navigating to the CloudWatch | Metrics section, as follows:

Figure 14.15: AWS CloudWatch Metrics

As you can see in the preceding graph, CPU utilization was fairly consistent, at around 1.3%, given a sustained user load of 50 over a period of 10 minutes. During this period, there were no request errors, as shown in the statistics summary from OctoPerf:

Figure 14.16: OctoPerf Statistics summary

Ideally, we would measure the maximum users/second right up to the moment errors start being generated. However, given only 50 virtual users and the information we already have, we can predict how many users could be handled at 100% utilization:

userCapacityPerInstance = 50 users / 1.3% CPU utilization ≈ 3,846 users

Our load test results reveal that a single instance of our infrastructure can handle roughly 3,846 users/second. Given this information, we can calculate the cost per user in a scalable environment in the next section. However, performance and reliability go hand in hand. How you choose to architect your infrastructure also factors into your budget, because the level of reliability you need dictates the minimum number of instances you must keep running at all times.

Reliable cloud scaling

Reliability can be expressed in terms of your organization’s recovery point objective (RPO) and recovery time objective (RTO). RPO represents how much data you’re willing to lose, while RTO represents how fast you can rebuild your infrastructure in the event of a failure.

Let’s suppose that you run an e-commerce site. Around noon every weekday, you reach peak sales. Every time a user adds an item to their shopping cart, you store the items in a server-side cache so that users can resume their shopping spree later at home. In addition, you process hundreds of transactions per minute. Business is good, your infrastructure is scaling out beautifully, and everything is going smoothly. Meanwhile, a hungry rat or an overly charged lightning cloud decides to strike your data center. Initially, a seemingly harmless power unit goes down, but it’s fine, because nearby power units can pick up the slack. However, this is the lunch rush; other websites in the data center are also facing a high traffic volume. As a result, several power units overheat and fail. There aren’t enough power units to pick up the slack, so in quick succession, power units overheat one by one and start failing, triggering a cascade of failures that end up taking down the entire data center. Meanwhile, some of your users just clicked on Add to cart, others on the Pay button, and some others are just about to arrive on your site. If your RPO is 1 hour, meaning you persisted your shopping cart cache every hour, then you can say goodbye to valuable data and potential sales from those night-time shoppers. If your RTO is 1 hour, it will take you up to 1 hour to get your site back up and running again, and you can rest assured that most of those customers who just clicked on the Pay button or arrived at an unresponsive site won’t be making a purchase on your site that day.

A well-thought-out RPO and RTO are critical business needs, but they must also be paired with the right infrastructure to implement your objectives in a cost-effective manner. AWS is made up of more than two dozen regions around the world, each region containing at least two availability zones (AZs). Each AZ is a physically separate piece of infrastructure that is not affected by a failure in another AZ.

A highly available configuration on AWS means that your application is up and running on at least two AZs, so if a server instance fails, or even if the entire data center fails, you have another instance already live in a physically separate data center that is able to pick up incoming requests seamlessly.

A fault-tolerant architecture means that your application is deployed across multiple regions. Even if an entire region goes down due to a natural disaster, a distributed denial-of-service (DDoS) attack, or a bad software update, your infrastructure remains standing and is able to respond to user requests. Your data is protected by layer upon layer of security and via staggered backups of backups.

AWS offers great services, including Shield to protect against DDoS attacks targeting your website, a Pilot Light service to keep a minimal infrastructure waiting dormant in another region that can scale to full capacity if needed, while keeping operational costs down, and a Glacier service to store large amounts of data for long periods of time in an affordable manner.

A highly available configuration will require a minimum of two instances in a multi-AZ setup at all times. For a fault-tolerant setup, you require two highly available configurations in at least two regions. Most AWS cloud services, such as DynamoDB for data storage or Redis for caching, are highly available by default, including serverless technologies such as Lambda. Lambda charges on a per-use basis and can scale to match any need you can throw at it in a cost-effective manner. If you can move heavy compute tasks to Lambda, you can reduce your server utilization and your scaling needs dramatically in the process. When planning your infrastructure, you should consider all these variables to set up the right scalable environment for your needs.

Cost per user in a scalable environment

In a scalable environment, you cannot plan on 100% utilization. It takes time to provision a new server. A server that is at 100% utilization can’t process additional incoming requests in a timely manner, which results in dropped or erroneous requests from the users’ perspective. So, the server in question must send a trigger well before it reaches 100% utilization so that no requests are dropped. Earlier in the chapter, I suggested a 60-80% target utilization before scaling. The exact number will largely depend on your specific choice of software and hardware stack.

Given your custom utilization target, we can calculate the number of users your infrastructure is expected to serve, on average, per instance. Using this information, you can calculate a more accurate cost per user, which should allow you to size your IT budget correctly for your specific needs. It is equally as bad to underspend as it is to overspend: by underspending, you may be forgoing more growth, security, data, reliability, and resilience than is acceptable.

In the next section, we will walk through the calculation of an optimal target server utilization metric so that you can calculate a more accurate per-user cost. Then, we will explore scaling that can occur during preplanned time frames and software deployments.

Calculating target server utilization

First, calculate your custom server utilization target: the utilization level at which a server experiencing increased traffic should trigger a new server to be provisioned, with enough lead time that the original server never reaches 100% utilization and starts dropping requests. Consider this formula:

targetServerUtilization = (userCapacityPerInstance − (provisioningTimeInSeconds × userGrowthRatePerSecond)) / userCapacityPerInstance

Let’s demonstrate how the formula works with the help of a concrete example:

  1. Load test your instances to establish user capacity per instance:

    Load test results: 3,846 users/second.

    Requests/sec and users/sec are not the same, since a user makes multiple requests to complete an action and may execute multiple requests/sec. Advanced load testing tools such as OctoPerf are necessary to execute realistic and varied workloads and measure user capacity over request capacity.

  2. Measure instance provisioning speed, from creation/cold boot to the first request fulfilled:

    Measured instance provisioning speed: 60 seconds.

    In order to measure this speed, you can put the stopwatch away. Depending on your exact setup, AWS provides event and application logs in the ECS Service Events tab, CloudWatch, and CloudTrail to correlate enough information to figure out when a new instance was requested and how long it took for the instance to be ready to fulfill requests.

    For example, in the ECS Service Events tab, take the target registration event as the beginning time. Once the task has been initiated, click on the task ID to see the creation time. Using the task ID, check the task’s logs in CloudWatch to see the time at which the task served its first web request as the end time and then calculate the duration.

  3. Measure the 95th percentile user growth rate, excluding known capacity increases:

    95th percentile user growth rate: 10 users/second.

    The 95th percentile is a common metric to calculate overall network usage. It means that 95% of the time, usage will be below the stated amount, making it a good number to use for planning, as explained by Barb Dijker in her article entitled What the heck is this 95th Percentile number?, available at http://www2.arnes.si/~gljsentvid10/pct.html.

    If you don’t have prior metrics, initially defining user growth rate will be an educated guess at best. However, once you start collecting data, you can update your assumptions. In addition, it is impossible to operate an infrastructure that can respond to any imaginable outlier without dropping a request in a cost-effective manner. Given your metrics, a business decision should be consciously made as to what percentile of outliers should be ignored as an acceptable business risk.

  4. Let’s plug the numbers into the formula:

    targetServerUtilization = (3,846 − (60 × 10)) / 3,846 = 3,246 / 3,846 ≈ 0.844

The custom target utilization rate, rounded down, is 84%. Setting your scale-out trigger at 84% will keep instances from being overprovisioned, while preventing user requests from being dropped.
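
As a quick sanity check, the formula can be expressed as a small TypeScript function; the names below are illustrative and not part of the sample project:

    // Target server utilization at which a scale-out trigger should fire (illustrative only)
    function targetServerUtilization(
      userCapacityPerInstance: number, // from load testing, e.g. 3,846
      provisioningTimeInSeconds: number, // e.g. 60
      userGrowthRatePerSecond: number // 95th percentile growth, e.g. 10
    ): number {
      const usersArrivingWhileProvisioning =
        provisioningTimeInSeconds * userGrowthRatePerSecond
      return (
        (userCapacityPerInstance - usersArrivingWhileProvisioning) /
        userCapacityPerInstance
      )
    }

    console.log(targetServerUtilization(3846, 60, 10)) // ≈ 0.844, rounded down to 84%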

With this custom target utilization in hand, let’s update the per-user cost formula to account for scaling:

perUserCost = monthlyInfrastructureCost / (numberOfUsers × targetServerUtilization)

So, if our infrastructure cost $100 per month serving 150 users, at 100% utilization you would calculate the per-user cost to be $0.67/user/month. If you take scaling into account, the cost would be as follows:

perUserCost = $100 / (150 × 0.84) ≈ $0.79/user/month

Scaling without dropping requests raises the cost from the original $0.67 to about $0.79 per user per month. However, it is important to keep in mind that your infrastructure won’t always be so efficient. At lower utilization targets, or when scaling triggers are misconfigured, costs can easily double, triple, or quadruple. The ultimate goal here is to find the sweet spot, meaning that you will be paying the right amount per user.
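
The scaled per-user cost can be checked the same way; again, this is only an illustrative sketch using the numbers from this section:

    // Per-user cost once instances are only loaded up to the target utilization (illustrative only)
    function perUserCostWithScaling(
      monthlyInfrastructureCost: number,
      monthlyUsers: number,
      targetUtilization: number // 1 means 100%
    ): number {
      return monthlyInfrastructureCost / (monthlyUsers * targetUtilization)
    }

    console.log(perUserCostWithScaling(100, 150, 1).toFixed(2)) // '0.67' at 100% utilization
    console.log(perUserCostWithScaling(100, 150, 0.84).toFixed(2)) // '0.79' with an 84% target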

There’s no prescriptive per-user cost you should be targeting. However, if you run a service where you charge users $5 per month and, after all other operational costs and profit margins are accounted for, you are still left with a surplus budget while your users complain about poor performance, then you’re underspending. Conversely, if you’re eating into your profit margins or, even worse, only breaking even, then you may be overspending, or you may need to reconsider your business model.

There are several other factors that can impact your per-user cost, including blue/green deployments, which we’ll cover in a moment. You can also increase the efficiency of your scaling by leveraging prescheduled provisioning.

Prescheduled provisioning

Dynamic scaling out and then back in is what defines cloud computing. However, the algorithms currently available still require some planning if you know that certain days, weeks, or months of the year will require uncharacteristically higher resource capacity. Given a sudden deluge of new traffic, your infrastructure will attempt to dynamically scale out, but if the increase in traffic is steep enough, even an optimized server utilization target won’t help. Servers will frequently reach and operate at 100% utilization, resulting in dropped or erroneous requests. To prevent this from happening, you should proactively provision additional capacity during such predictable periods of high demand.

Blue/green deployments

In Chapter 13, Highly Available Cloud Infrastructure on AWS, you configured no-downtime blue/green deployments. Blue/green deployments are reliable code deployments that ensure continuous uptime of your site, while minimizing the risk of bad deployments.

Let’s presume that you have a highly available deployment, meaning you have two instances active at any given time. During a blue/green deployment, two additional instances would be provisioned. Once these additional instances are ready to fulfill requests, their health is determined using your predefined health metric.

If your new instances are found to be healthy, this means they’re in working order. There will then be a period of time, say 5 minutes, during which connections to the original instances are drained and rerouted to the new instances. Once the connections are drained, the original instances are deprovisioned.

If the new instances are found to be unhealthy, then they will be deprovisioned, resulting in a failed deployment. However, your service will remain available without interruption, because the original instances remain intact and keep serving users during the entire process.

Revising estimates with metrics

Load testing and predicting user growth rates give you an idea of how your system may behave in production. Collecting more granular metrics and data is critical in revising your estimates and nailing down a more accurate IT budget.

Measuring actual use

As we discussed earlier, keeping track of page views alone isn’t reflective of the number of requests that a user sends to the server. With Google Tag Manager and Google Analytics, you can keep track of more than just page views with ease.

As of the time of publication, here are some of the default events you can configure across various categories. This list will grow over time:

  • Page View: Used to track whether a user is sticking around as page resources load and the page is fully rendered:
    • Page View; fired at the first opportunity
    • DOM Ready; when the DOM structure is loaded
    • Window Loaded; when all elements are finished loading
  • Click: Used to track a user’s click interactions with the page:
    • All Elements
    • Just Links
  • User Engagement: Tracks user behavior:
    • Element Visibility; whether elements have been shown
    • Form Submission; whether a form was submitted
    • Scroll Depth; how far they scrolled down the page
    • YouTube Video; if they played an embedded YouTube Video
  • Other event tracking:
    • Custom Event; defined by a programmer to track a single or multistep event, such as a user going through the steps of a checkout process
    • History Change; whether the user navigates back in the browser’s history
    • JavaScript Error; whether JavaScript errors have been generated
    • Timer; to trigger or delay time-based analytics events

Most of these events don’t require any extra coding to implement, so we will implement a custom event to demonstrate how you can capture any single event or series of events you want with custom coding. Capturing workflows with a series of events can reveal where you should be focusing your development efforts.

For more information on Google Tag Manager events, triggers, or tips and tricks, I recommend that you check out the blog by Simo Ahava at https://www.simoahava.com/.

Creating a custom event

For this example, we will capture the event for when a customer is successfully checked out and a sale is completed. We will implement two events, one for checkout initiation, and the other for when the transaction has been completed successfully:

  1. Log on to your Google Tag Manager workspace at https://tagmanager.google.com.
  2. Under the Triggers menu, click on NEW, as indicated here:

    Figure 14.17: Tag Manager workspace

  3. Name your trigger.
  4. Click on the empty trigger card to select the event type.
  5. Select Custom Event.
  6. Create a custom event named checkoutCompleted, as illustrated:

    Figure 14.18: Custom Checkout event

    By selecting the Some Custom Events option, you can limit or control the collection of a particular event, that is, only when on a particular page or a domain, such as on lemonmart.com. In the following screenshot, you can see a custom rule that would filter out any checkout event that didn’t happen on lemonmart.com to weed out development or test data:

    Figure 14.19: Some Custom Events

  7. Save your new event.
  8. Repeat the process for an event named Checkout Initiated.
  9. Add two new Google Analytics event tags, as highlighted here:

    Figure 14.20: New custom event tags

  10. Configure the event and attach the relevant trigger you created to it, as shown in the following screenshot:

    Figure 14.21: Trigger setup

  11. Submit and publish your workspace.

We are now ready to receive custom events in our analytics environment.

Adding custom events in Angular

Now, let’s edit the Angular code to trigger the events:

  1. Consider the POS template with a checkout button:
    src/app/pos/pos/pos.component.html
    <p>
      <img
        src="https://user-images.githubusercontent.com/822159/36186684-9f05fef8-110e-11e8-991f-fae6ca60fe5d.png" />
    </p>
    <p>
      <button mat-icon-button (click)="checkout(currentTransaction)">
        <mat-icon>shopping_cart</mat-icon> Checkout Customer
      </button>
    </p>
    

    The circular checkout button is indicated in the bottom-left corner of the following diagram:

    Figure 14.22: POS page with checkout button

    Optionally, you can add an onclick event handler directly in the template, such as onclick="dataLayer.push({'event': 'checkoutInitiated'})" on the checkout button. This pushes the checkoutInitiated event to the dataLayer object, made available by gtm.js.

  2. Define an ITransaction interface:
    src/app/pos/transaction/transaction.ts
    ...
    export interface ITransaction {
      paymentType: TransactionType
      paymentAmount: number
      transactionId?: string
    }
    ...
    
  3. Define a TransactionType enum:
    src/app/pos/transaction/transaction.enum.ts
    ...
    export enum TransactionType {
      Cash,
      Credit,
      LemonCoin,
    }
    ...
    
  4. Implement a TransactionService that has a processTransaction function:
    src/app/pos/transaction/transaction.service.ts
    ...
    @Injectable({
      providedIn: 'root',
    })
    export class TransactionService {
      constructor() {}
      processTransaction(transaction: ITransaction)
          : Observable<string> {
        return new
          BehaviorSubject<string>('5a6352c6810c19729de860ea')
          .asObservable()
      }
    }
    ...
    

    '5a6352c6810c19729de860ea' is a random string that represents a transaction ID.

    In PosComponent, declare an interface for dataLayer events that you intend to push:

    src/app/pos/pos/pos.component.ts
    ...
    interface IEvent {
      event: 'checkoutCompleted' | 'checkoutInitiated'
    }
    declare let dataLayer: IEvent[]
    ...
    

    Import dependencies and initialize currentTransaction:

    src/app/pos/pos/pos.component.ts
    ...
    export class PosComponent implements OnInit, OnDestroy {
      private subs = new SubSink()
      currentTransaction: ITransaction
      constructor(
        private transactionService: TransactionService,
        private uiService: UiService
      ) {}
       ngOnInit() {
        this.currentTransaction = {
          paymentAmount: 25.78,
          paymentType: TransactionType.Credit,
        } as ITransaction
      }
      ngOnDestroy() {
        this.subs.unsubscribe()
      }
    ...
    

    Create the checkout function to call checkoutInitiated before a service call is made.

    Simulate a fake transaction using setTimeout and call the checkoutCompleted event when the timeout ends:

    src/app/pos/pos/pos.component.ts
    export class PosComponent implements OnInit, OnDestroy {
    ...
      checkout(transaction: ITransaction) {
        this.uiService.showToast('Checkout initiated')
        dataLayer.push({
          event: 'checkoutInitiated',
        })
        this.subs.sink = this.transactionService
          .processTransaction(transaction)
          .pipe(
            filter((transactionId) => transactionId != null),
            tap((transactionId) => {
              this.uiService.showToast('Checkout completed')
              dataLayer.push({
                event: 'checkoutCompleted',
              })
            })
          )
          .subscribe()
      }
    }
    

To prevent any data loss in your analytics collection, consider covering failure cases as well, such as adding checkoutFailed events for the various ways a checkout can go wrong, as sketched below.
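
The following is a minimal sketch of what such a failure event might look like. It assumes you extend the IEvent union with a 'checkoutFailed' member, create a matching trigger and tag in Tag Manager, and import catchError and EMPTY from RxJS; it is not part of the chapter's sample code:

    // Sketch only: assumes IEvent now includes 'checkoutFailed' and that
    // catchError and EMPTY are imported from 'rxjs/operators' and 'rxjs'.
    checkout(transaction: ITransaction) {
      dataLayer.push({ event: 'checkoutInitiated' })
      this.subs.sink = this.transactionService
        .processTransaction(transaction)
        .pipe(
          filter((transactionId) => transactionId != null),
          tap(() => dataLayer.push({ event: 'checkoutCompleted' })),
          catchError(() => {
            // Record the failure so conversion rates reflect reality
            dataLayer.push({ event: 'checkoutFailed' })
            this.uiService.showToast('Checkout failed')
            return EMPTY
          })
        )
        .subscribe()
    }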

Now, we are ready to see the analytics in action:

  1. Run your app.
  2. On the POS page, click on the Checkout button.
  3. In Google Analytics, observe the REAL-TIME | Events tab to see events as they occur.
  4. After 5-10 minutes, the events will also show up under the BEHAVIOR | Events tab, as shown:

Figure 14.23: Google Analytics Top Events

Using custom events, you can keep track of various nuanced user behaviors happening on your site. By collecting checkoutInitiated and checkoutCompleted events, you can calculate a conversion rate of how many initiated checkouts are taken to completion. In the case of a point-of-sale system, that rate should be very high; otherwise, it means you may have systematic issues in place.

Advanced analytics events

It is possible to collect additional metadata along with each event, such as the payment amount or type when a checkout is initiated, or the transactionId when a checkout is completed.

To work with these more advanced features, I recommend that you check out angulartics2, which can be found at https://www.npmjs.com/package/angulartics2. angulartics2 is a vendor-agnostic analytics library for Angular that can fulfill unique and granular event tracking needs using popular vendors, such as Google Tag Manager, Google Analytics, Adobe, Facebook, Baidu, and more, as highlighted on the tool’s home page, shown here:

Figure 14.24: Angulartics2

angulartics2 integrates with the Angular router and the UI router, with the ability to implement custom rules and exceptions on a route-per-route basis. The library makes it easy to implement custom events and enables metadata tracking with data binding.

Check out the following example:

<div angulartics2On="click" angularticsEvent="DownloadClick"
  angularticsCategory="{{ song.name }}"
  [angularticsProperties]="{label: 'Fall Campaign'}">
</div>

We can keep track of a click event named DownloadClick, which would have a category and a label attached to it for rich event tracking within Google Analytics.
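
Events can also be fired from component code. The sketch below is based on the eventTrack API described in the angulartics2 documentation at the time of writing; the AnalyticsService wrapper and its method names are illustrative only, so verify the current API against the project's README before relying on it:

    import { Injectable } from '@angular/core'
    import { Angulartics2 } from 'angulartics2'

    // Illustrative wrapper service, not part of the LemonMart sample code
    @Injectable({ providedIn: 'root' })
    export class AnalyticsService {
      constructor(private angulartics2: Angulartics2) {}

      trackCheckoutCompleted(transactionId: string, paymentAmount: number) {
        // eventTrack forwards a named event and its metadata to the configured provider
        this.angulartics2.eventTrack.next({
          action: 'checkoutCompleted',
          properties: {
            category: 'pos',
            label: transactionId,
            value: paymentAmount,
          },
        })
      }
    }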

With advanced analytics under your belt, you can use actual usage data to inform how you improve or host your app. This topic concludes a journey that started by creating pencil-drawn mockups at the beginning of this book, covering a wide variety of tools, techniques, and technologies that a full-stack web developer must be familiar with in today’s web in order to succeed. We dove deep into Angular, Angular Material, Docker, and automation in general to make you the most productive developer you can be, delivering the highest quality web app, while juggling a lot of complexity along the way. Good luck out there!

Summary

In this chapter, you have rounded out your knowledge of developing web apps. You learned how to work with Google Tag Manager and Google Analytics to capture page views of your Angular application. Using high-level metrics, we went over how you can calculate the cost of your infrastructure per user. We then investigated the nuances of the effect that high availability and scaling can have on your budget. We covered the load testing of complex user workflows to estimate how many users any given server can host concurrently. Using this information, we calculated a target server utilization to fine-tune your scaling settings.

All of our pre-release calculations were mostly estimates and educated guesses. We went over the kinds of metrics and custom events you can use to measure the actual use of your application. When your application goes live and you start gathering these metrics, you can update your calculations to gain a better understanding of the viability and the affordability of your infrastructure.

Congratulations! You have completed your journey. I hope you enjoyed it! Feel free to use this book as a reference, including the appendices.

Follow me on Twitter @duluca and stay tuned to https://expertlysimple.io for updates.

Questions

Answer the following questions as best as you can to ensure that you've understood the key concepts from this chapter without Googling. Do you need help answering the questions? See Appendix D, Self-Assessment Answers online at https://static.packt-cdn.com/downloads/9781838648800_Appendix_D_Self-Assessment_Answers.pdf or visit https://expertlysimple.io/angular-self-assessment.

  1. What are the benefits of load testing?
  2. What are some of the considerations as regards reliable cloud scaling?
  3. What is the value of measuring user behavior?