Today Instart Logic is pleased to announce a new suite of predictive optimization features, including User Prioritization and One Time Cache. By combining machine learning with application knowledge, we can help organizations predict which pages to serve from cache and which users to allow access to the site, improving the overall user experience and helping organizations acquire new customers, protect their brand, and increase revenue.
When acquiring new customers, organizations face a difficult trade-off between providing the fastest experience possible and providing a dynamic browsing experience, because performance is only one aspect of the user experience. News sites want to deliver the latest news, not information that is hours old; eCommerce sites want to highlight new or hot products that may change on an hourly, daily, or weekly basis. But serving dynamic content can take a toll on performance. We have all seen countless studies showing that improving performance can reduce bounce rates and increase conversions, but is that more important or less important than providing dynamic content? We say you no longer need to choose: you can have your cake and eat it too.
The problem with not optimizing HTML is that the browser sits idle while the backend servers generate the dynamic content. The first step in any online experience makes the strongest impression, so there is an increasing need to put your best foot forward and optimize the first resource a client receives: the HTML. With the first step of checkout being the #1 drop-off point, it is an important part of the user experience to zero in on. When done right, conversions can increase anywhere from 2-3% all the way up to 5%, according to KISS Metrics. While a 2-5% increase may not seem like a lot, for some retailers it can mean millions of dollars in additional revenue.
With HTML Streaming, Instart Logic has been able to optimize delivery of dynamic HTML content by streaming the non-unique content to the browser while the origin server generates the dynamic content. We now take this a step further with the introduction of One Time Cache, which allows for full page caching of Dynamic HTML. This innovative technology includes a predictive capability to learn and predict the need for future requests in advance of a user issuing a request.
Most websites these days are dynamic and highly personalized, which makes caching pages a challenge. I am guessing that my Twitter, Facebook, LinkedIn and Amazon home pages look very different than yours do. For example, my Amazon home page right now is telling me that I have 1 item in my shopping cart and is suggesting a variety of books and items based on my shopping history. But if I log out and delete my cookies, I get a very different version. Instead of personalized items based on my shopping history, there is a section on what other customers are looking at right now.
This section is dynamic and is constantly showing different items. Here is that same section a few minutes later:
Caching my personalized version of the home page doesn’t make sense, as that is information specific to me. Caching a page that is dynamic and is constantly displaying different content also doesn’t make sense — until today. One Time Cache makes it possible to deliver dynamic web pages from cache for a subset of customers.
How it Works
Previously, the request-response chain for dynamic HTML or non-cacheable content was sequential: a user requests HTML from the edge; the edge service forwards the request to the origin servers; the origin servers generate a response, which is sent back to the edge service; and finally the edge service forwards the response to the end user. As content flows through our service, we examine not only the content itself but also the access patterns around it. By examining request and response patterns, One Time Cache can predict access patterns and pipeline requests from the edge in parallel with client requests, supplementing traditional cache timeouts and TTLs. For example, if the service detects that the home page receives 10 requests per second during peak business hours, then during that period it can request a page in parallel with the client request, ensuring that fresh versions of the page are available on the service to be delivered as soon as the client request is received. These parallel requests enable a greater number of requests to be served from our distributed service, reducing the amount of time a browser sits idle while the origin servers generate the appropriate HTML.
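To make the mechanism concrete, here is a minimal sketch in Python of rate-driven prefetching with one-time semantics. This is a hypothetical toy model, not Instart Logic's actual implementation: when the observed request rate for a path crosses a threshold, the edge fetches a fresh copy from origin ahead of the next client request, and each prefetched copy is served exactly once so no client receives a stale response.

```python
import time
from collections import deque

class OneTimeCacheSim:
    """Toy model (hypothetical, not Instart Logic's implementation) of
    rate-driven prefetching: when the observed request rate for a path
    exceeds a threshold, fetch a fresh copy from origin in advance;
    each cached copy is served exactly once."""

    def __init__(self, origin_fetch, rate_threshold=5, window=1.0):
        self.origin_fetch = origin_fetch      # callable: path -> response body
        self.rate_threshold = rate_threshold  # requests/sec that triggers prefetch
        self.window = window                  # sliding observation window (seconds)
        self.arrivals = {}                    # path -> deque of request timestamps
        self.prefetched = {}                  # path -> deque of unused responses

    def _rate(self, path, now):
        # Requests per second for this path over the sliding window.
        dq = self.arrivals.setdefault(path, deque())
        while dq and now - dq[0] > self.window:
            dq.popleft()
        return len(dq) / self.window

    def handle_request(self, path, now=None):
        now = time.monotonic() if now is None else now
        self.arrivals.setdefault(path, deque()).append(now)
        store = self.prefetched.setdefault(path, deque())
        # Serve a prefetched copy if one is waiting; it is consumed (one-time).
        if store:
            response, served_from = store.popleft(), "edge"
        else:
            response, served_from = self.origin_fetch(path), "origin"
        # If this path is hot, pipeline a fresh copy for the next request.
        if self._rate(path, now) >= self.rate_threshold:
            store.append(self.origin_fetch(path))
        return response, served_from
```

In this sketch, the first few requests for a hot page still go to origin while the rate estimate warms up; once the threshold is crossed, subsequent requests are answered from the edge with copies fetched in parallel with earlier client requests.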
If a page is not eligible for One Time Cache based on its access patterns, it can still be optimized via our HTML Streaming functionality. The great thing is that our system learns and figures this all out automatically.
One Time Cache will have the biggest impact on time to first byte and start render times, as the delays typically encountered in generating the HTML are eliminated. A customer recently implemented Instart Logic performance optimizations, including One Time Cache, and saw impressive improvements across key performance indicators on both desktop and mobile devices. Aggregate improvement in time to first byte (TTFB) was 200%. (Please note: performance will vary on an application-by-application basis, as well as by which optimization features have been enabled.)
Moving beyond dynamic HTML to dynamic API requests
In addition to storing dynamic HTML, One Time Cache also applies to dynamic API traffic, allowing the service to accelerate modern single page apps and even native applications that make REST API calls. One Time Cache predicts the need for certain API responses, pre-fetches them, and stores them for future user requests, ensuring each user still gets a unique response from the edge.
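The "one-time" semantics matter most for API responses, since two users should never share a personalized payload. A minimal sketch of such a store (hypothetical, with made-up endpoint names, not Instart Logic's API) hands each prefetched response out at most once:

```python
class OneTimeStore:
    """Hypothetical sketch: each stored API response is served at most once,
    so a prefetched payload is never shared between two users."""

    def __init__(self):
        self._store = {}  # key -> list of unused prefetched responses

    def put(self, key, response):
        # Called by the prefetcher when it pipelines a request to origin.
        self._store.setdefault(key, []).append(response)

    def take(self, key):
        # Called on a client request; consumes one response or misses (None).
        bucket = self._store.get(key)
        return bucket.pop(0) if bucket else None
```

A miss (`None`) simply falls through to a normal origin fetch, so correctness never depends on a prediction being right.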
To see what One Time Cache can do for your website, contact your account manager today.