Managing Digital Experience Using Synthetic Monitoring

IT monitoring and management have traditionally focused on an enterprise’s IT backbone: its data centers, servers, networks, and so on. However, with more and more employees working from home, and customers and partners scattered around the world, organizations have found it critical to monitor and manage the extended network to ensure a superior digital experience for their employees, customers, and partners.

Synthetic monitoring is an approach that a growing number of organizations use to proactively monitor the digital experience of websites, web services, and applications by simulating user requests to validate availability and performance.

In this third article in our “Taming IT Chaos” blog series, we’ll introduce synthetic monitoring technology and how machine learning analytics can augment it.

Synthetic monitoring at different granularities

Synthetic monitoring can be applied at different levels of granularity – from the site level to the application level to the individual user level.

For organizations distributed across multiple locations, one option for synthetic monitoring is to deploy agent-based software in each location. By leveraging simple techniques – like ping tests or SSH checks – organizations can start collecting metrics such as server connectivity, network latency, website response time, and more. These metrics give organizations an overview of how their network is performing as a whole and what the general user experience looks like across different locations.
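As a minimal sketch of what such an agent check might look like, the snippet below measures TCP connect latency to a set of endpoints. It is a lightweight stand-in for an ICMP ping (which requires raw-socket privileges); the endpoint names are hypothetical examples.

```python
import socket
import time


def check_latency(host, port, timeout=3.0):
    """Measure TCP connect latency to host:port in milliseconds.

    Returns None if the endpoint is unreachable within the timeout.
    """
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None


# Hypothetical endpoints an agent might probe on a schedule.
endpoints = [("example.com", 443), ("intranet.example", 22)]
metrics = {f"{h}:{p}": check_latency(h, p, timeout=1.0) for h, p in endpoints}
```

An agent would run such checks on a schedule and ship the resulting metrics to a central collector for aggregation.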

With more advanced setups – such as running headless Chromium – organizations can test the entire transaction flow of a website or web service. Remote agents can also be used to monitor the performance of designated application access points. This information can be useful in assessing how well a given application serves users in different geographic locations.
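A transaction-flow check essentially times a scripted sequence of steps. The sketch below shows such a timing harness; for self-containment the browser actions (which would normally be Selenium or Playwright calls driving headless Chromium) are replaced by hypothetical stub callables.

```python
import time


def run_transaction(steps):
    """Execute named transaction steps in order, timing each in milliseconds.

    `steps` maps step names to zero-argument callables; in a real setup each
    callable would perform a headless-browser page action.
    """
    timings = {}
    for name, step in steps.items():
        start = time.perf_counter()
        step()  # e.g. a Selenium/Playwright page interaction
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings


# Stub steps stand in for real headless-Chromium page actions.
timings = run_transaction({
    "load_home": lambda: time.sleep(0.01),
    "login":     lambda: time.sleep(0.02),
    "dashboard": lambda: time.sleep(0.01),
})
```

Per-step timings collected this way make it easy to see which phase of a transaction degrades at a given location.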

Collecting individual-user-level metrics normally requires bundling a monitoring module into the application package. The data collected from individual users gives the organization the most detailed view of how a user interacts with the application, and even makes it possible to reconstruct the user’s behavior for further analysis.
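A minimal sketch of such a bundled monitoring module is shown below: it records timestamped user actions so they can later be shipped to a collector and replayed. The class name, user id, and event fields are illustrative assumptions, not a specific product’s API.

```python
import json
import time


class UserActivityRecorder:
    """Minimal sketch of a monitoring module an application could bundle
    to capture individual user actions for later replay or analysis."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.events = []

    def record(self, action, **details):
        self.events.append({
            "user": self.user_id,
            "action": action,
            "ts": time.time(),
            "details": details,
        })

    def export(self):
        # Serialized events can be shipped to a collector for behavior replay.
        return json.dumps(self.events)


rec = UserActivityRecorder("user-42")  # hypothetical user id
rec.record("click", element="checkout_button")
rec.record("page_view", path="/cart")
```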

Digital experience management using predictive analysis

With the data collected from synthetic monitoring, many analyses become possible, ranging from topology-based performance analysis to user engagement simulation.

Synthetic monitoring data always carries basic geographic information, and that geographic data can be used to generate a network latency heatmap, as seen in the example below, highlighting areas with higher latency than usual.

[Figure: network latency heatmap]
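The aggregation behind such a heatmap can be sketched in a few lines: group latency samples by location, average them, and flag hot spots. The sample data and the 100 ms threshold are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical samples: (location, latency_ms) tuples from synthetic probes.
samples = [
    ("us-east", 23.0), ("us-east", 31.0),
    ("eu-west", 48.0), ("eu-west", 52.0),
    ("ap-south", 110.0),
]

by_location = defaultdict(list)
for location, latency in samples:
    by_location[location].append(latency)

# Average latency per location is the value a heatmap would color by.
heatmap_values = {loc: mean(vals) for loc, vals in by_location.items()}
hot_spots = [loc for loc, avg in heatmap_values.items() if avg > 100.0]
```

A plotting library would then render `heatmap_values` over a map; the aggregation step is the same either way.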

User engagement simulation simulates a user’s journey through a service and flags risks that may impact the user’s experience. For example, an organization can add scheduled ping checks to its Microsoft 365 subscription. Based on the data collected, the organization can identify locations that have a high probability of experiencing high network latency at certain times of day. The company can then adjust its subscription by location, or allocate more network resources to support a high-traffic location. This can all be done proactively, without costing employees any time or effort.
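The time-of-day analysis described above can be sketched as grouping latency samples by (location, hour) and flagging windows whose average exceeds a threshold. The sample data and the 150 ms threshold are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ping results: (location, hour_of_day, latency_ms).
pings = [
    ("london", 9, 40.0), ("london", 9, 45.0),
    ("london", 14, 180.0), ("london", 14, 170.0),
    ("tokyo", 9, 60.0),
]

THRESHOLD_MS = 150.0

by_window = defaultdict(list)
for location, hour, latency in pings:
    by_window[(location, hour)].append(latency)

# Flag (location, hour) windows whose average latency exceeds the threshold,
# so capacity can be adjusted proactively for those times of day.
risky_windows = sorted(
    window for window, vals in by_window.items() if mean(vals) > THRESHOLD_MS
)
```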

Combined with other network monitoring data, many more ML- or AI-based analyses can be applied to derive deep insight into an organization’s digital health.

In the following example, we outline how an eCommerce company can leverage synthetic monitoring to improve its product experience. The company intends to monitor its whole transaction experience from the few locations where most of its users are concentrated.

Step 1: The company deploys agents in these locations and uses headless Chromium to execute a script of predefined transactions, from search and browse to add-to-cart and payment execution.

Step 2: Based on the simulation, the latency of each phase is collected at a set frequency.

Step 3: By applying anomaly detection and trend analysis, operational metrics such as page drop-off rate, shopping-cart abandonment rate, or payment failure rate can be mapped to the simulation data and predicted. If any metric moves beyond its healthy range, a warning is sent and further investigation or human intervention is warranted.
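As a minimal illustration of the anomaly detection in Step 3, the sketch below flags latency samples that deviate sharply from the series mean using a z-score, a simple stand-in for a production anomaly detector. The payment-latency samples and the threshold are hypothetical.

```python
from statistics import mean, stdev


def flag_anomalies(series, z_threshold=3.0):
    """Return indices of points more than z_threshold standard deviations
    from the series mean."""
    mu = mean(series)
    sigma = stdev(series)
    return [i for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]


# Hypothetical payment-step latency samples (ms); the spike should be flagged.
payment_latency = [210.0, 205.0, 198.0, 215.0, 202.0, 900.0, 208.0]
anomalies = flag_anomalies(payment_latency, z_threshold=2.0)
```

A flagged index would then trigger the warning described in Step 3, prompting further investigation.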

While this is a very simple example, it demonstrates how synthetic monitoring can help improve an organization’s digital experience. Additionally, it shows how, combined with other technologies, synthetic monitoring can help an organization build an early-warning mechanism to head off potential disruptions.

Ready to get started? Get in touch or schedule a demo.