Predicting the future is hard to do, especially when it comes to technology. People erroneously predicted the demise of the Internet and Apple, both of which are alive and well. Even so, predictions are everywhere around us on the web. Google predicts what we want to search for, predictive text helps us quickly compose text messages on our mobile phones, and Amazon makes predictions about items you might wish to purchase based on your previous purchases.
Technology has emerged that allows web applications to predict what content a user is likely to request next, and to use those predictions to speed up page delivery. HTTP/2 offers push capabilities to send resources to the browser before a request is issued. Browsers also support several resource hints that speed up the delivery of web pages:
- DNS-prefetch resolves the domains of anticipated third-party objects to be loaded.
- Prefetch and preload retrieve objects prior to the browser requesting them.
- Prerender fully renders a page in the background before a request has been issued.
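Each of the hints above is declared as a `<link>` element in the page's `<head>`. The URLs below are placeholders, but the `rel` values are the standard ones:

```html
<!-- Resolve a third-party domain early (DNS lookup only) -->
<link rel="dns-prefetch" href="//cdn.example.com">

<!-- Fetch a resource likely needed on a subsequent page, at low priority -->
<link rel="prefetch" href="/js/product-reviews.js">

<!-- Fetch a resource needed by the current page, at high priority -->
<link rel="preload" href="/fonts/brand.woff2" as="font" crossorigin>

<!-- Fully render an anticipated next page in the background -->
<link rel="prerender" href="/products/suggested.html">
```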
With these technologies, the cost of an incorrect prediction can be high, but the benefit of a correct one can be even greater. In a Velocity talk, Yoav Weiss summarized the cost/benefit of these various technologies as follows:
| Resource hint | Cost if wrong | Benefit if right |
| --- | --- | --- |
| Prefetch | Mid to high | High for next navigation |
| Prerender | Huge | Huge for next navigation |
Finding ways to make more accurate predictions can give users a near-instantaneous response when loading pages. While a user is viewing a page, the browser is idle. This is a prime opportunity to start fetching resources needed on subsequent pages.
Latency is an ongoing struggle, especially on mobile networks. With HTTP/2, the browser reduces the negative effects of latency by multiplexing requests. Even so, once the page is loaded the browser sits idle. If the browser can predict what resources will be needed next, it can mask latency by initiating the request before the user actually makes it.
Prefetch and preload are newer standards and, as a result, not all browsers support these technologies. Preload applies to resources in the current navigation, while prefetch can be used for resources in subsequent navigations.
Aside from browser support, the larger issue with prefetching or preloading content is that it requires the developer to identify which resources should be retrieved before a request is issued. Humans are bad at making predictions, and remember: prefetching unneeded content is expensive.
From any given web page there are a number of paths a user can take. For example, while viewing a product on an eCommerce site, they might click a link to read product reviews; they might add the product to the shopping cart; or they might view one of four similar suggested products. Each of these pages requires different resources, so loading content for all six pages is not efficient.
At this point, analytics become your friend. They can help you identify which action the majority of users take and prefetch those resources. But what if the analytics show that only 20% of users perform that action, while the remaining 80% choose a different path or leave the site? You have sent unneeded data to 80% of your users. This isn't efficient either.
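One simple way to reason about this trade-off is to prefetch only for next pages whose observed click-through probability clears a threshold. The sketch below is purely illustrative; the function name, counts, and the 50% threshold are assumptions, not anything from a real analytics product:

```python
def pages_worth_prefetching(transition_counts, total_views, threshold=0.5):
    """Return (page, probability) pairs for next pages whose observed
    click-through rate from the current page exceeds the threshold,
    making them candidates for prefetching.

    transition_counts: dict mapping next-page name -> click count
    total_views: total number of views of the current page
    """
    candidates = []
    for page, clicks in transition_counts.items():
        probability = clicks / total_views
        if probability >= threshold:
            candidates.append((page, probability))
    # Highest-probability pages first
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Hypothetical analytics for a product page viewed 1,000 times: only
# "reviews" is a strong enough signal to justify the bandwidth cost.
counts = {"reviews": 620, "cart": 200, "similar-1": 90, "similar-2": 60}
print(pages_worth_prefetching(counts, total_views=1000))
# [('reviews', 0.62)]
```

If no path dominates, the function returns an empty list, which matches the 20%/80% scenario above: with such a flat distribution, prefetching anything wastes bandwidth for most users.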
Multi-page Predictive Prefetching from Instart Logic
Enter Instart Logic’s Multi-page Predictive Prefetching. Because of our unique client-side component, the Nanovisor, we can collect and analyze information from real user interactions.
Multi-page Predictive Prefetching is a three-step process. First, the Nanovisor observes user actions and sends them back to the Instart Logic application delivery platform in the cloud. Next, machine learning algorithms analyze resource impact and page transitions to identify static resources common across user flows. Finally, the Nanovisor prepopulates the browser cache with the predicted objects.
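The actual pipeline is proprietary, but the analysis step can be pictured roughly as counting page-to-page transitions across user sessions and then intersecting the static resources of the most common next pages. The toy sketch below makes that idea concrete; every name and data structure here is illustrative, not Instart Logic's real implementation:

```python
from collections import Counter

def common_next_resources(sessions, page_resources, top_n=1):
    """Toy illustration of the analysis step: count page transitions
    across sessions, then return the static resources shared by the
    most common next pages -- candidates for cache prepopulation.

    sessions: list of page-visit sequences, e.g. [["home", "product"], ...]
    page_resources: dict mapping page -> set of static resource URLs
    """
    transitions = Counter()
    for visits in sessions:
        for current, nxt in zip(visits, visits[1:]):
            transitions[(current, nxt)] += 1
    # Most frequently observed transitions across all users
    top = [pair for pair, _ in transitions.most_common(top_n)]
    resources = [page_resources[nxt] for _, nxt in top]
    # Keep only resources common to every predicted next page
    return set.intersection(*resources) if resources else set()

sessions = [["home", "product"], ["home", "product"], ["home", "cart"]]
page_resources = {
    "product": {"/js/app.js", "/css/product.css"},
    "cart": {"/js/app.js", "/css/cart.css"},
}
print(common_next_resources(sessions, page_resources, top_n=2))
# {'/js/app.js'}
```

Intersecting resources across the top transitions keeps the prediction useful even when users split between several likely next pages: the shared objects pay off on whichever path is taken.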
To learn more about Multi-page Predictive Prefetching and to sign up for the beta, contact your account manager or support.