
FRONT END OPTIMIZATION: TRADE-OFFS YOU'RE FORCED TO MAKE


BY KYLE GENTRY

You can't have it all, or so the saying goes, and when it comes to application optimization, that is typically true. There are trade-offs to be made when deciding which optimizations to deploy. Three of the main aspects to consider when optimizing an application are:
1. What performance impact an optimization will have. Will the optimization help the majority of customers and the majority of content, and will it deliver a double-digit performance improvement?
2. How reliable the optimization is. Will implementing a new technique cause an application to break? Does the optimization make some pages load faster and other pages load slower, or do first-time visits improve while repeat visits become slower?
3. How much time and effort is required for implementation. Will you have to spend time creating a whitelist or blacklist to
ensure the site continues to function for all users? What happens when you change your site? Do you have to
reconfigure all of the optimizations?
When it comes to optimization techniques, the list of what can be done is constantly growing, and not every technique has the same ROI.

The grid above attempts to chart some of the most common FEO and server-based optimizations along three variables. The y-axis measures the performance impact, the x-axis measures the reliability of the implementation, and the size of the dot reflects the amount of time required for implementation (the larger the dot, the more time-consuming).

LOW PERFORMANCE, LOW RELIABILITY


These features have minimal performance impact, and implementing them can result in adverse consequences. The goal of in-lining and concatenating resources is to reduce the number of round trips required to load a page, as generally the more round trips required, the longer the page takes to load. But these techniques only apply to first-time visitors, or those with an empty cache. The bigger downside is that they can break caching for repeat visitors or for subsequent page views in a browsing session.
Concatenation is the process of combining multiple JavaScript (JS) or Cascading Style Sheets (CSS) files into a single file. With concatenation, each page may have a different combination of JavaScript files. This means that files that were already downloaded may need to be downloaded again and can't be served from cache. The other problem with concatenation is that combining all resources into a single file may not always improve performance. The example below shows how concatenating the JS and CSS actually made performance worse. Figure 1 shows the application prior to concatenation, and Figure 2 shows performance after concatenation.

Figure 1: Prior to concatenation

Figure 2: After concatenation

Concatenation reduced the number of round trips from 48 down to 21, a large reduction. The problem is that the JavaScript file is now 600 KB and blocks other objects from downloading, resulting in a higher average page load time and a longer time to start render. Having the content split across multiple connections results in a better experience for users.
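For reference, the mechanics of concatenation are simple: a build step just joins the files in order. A minimal Node.js sketch (the file names and output path below are only placeholders):

    // concat.ts: naive build-time concatenation (file names are placeholders)
    import { readFileSync, writeFileSync } from "fs";

    // Order matters: scripts that depend on others must come later in the list.
    const sources = ["jquery.js", "plugins.js", "app.js"];

    const bundle = sources
      .map((file) => readFileSync(file, "utf8"))
      .join(";\n"); // separator so a missing trailing semicolon can't break the next file

    writeFileSync("bundle.js", bundle);

The simplicity is the appeal; the cost, as shown above, is one large blocking file and a cache entry that changes whenever any of its parts change.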
Similarly, in-lining resources can also negatively impact caching. In-lining takes external files and embeds them in the HTML to eliminate the round trip. HTML is typically not cached by a browser and is retrieved for each request, so repeat visitors get worse performance after in-lining is implemented. Many people choose to in-line only smaller files, but this adds layers of complexity in determining the appropriate threshold and creating the whitelist.
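To make the trade-off concrete, here is a hedged sketch of what an in-lining build step might do, assuming a small external stylesheet (file names are placeholders):

    // inline-css.ts: embed a small stylesheet directly into the HTML (illustrative only)
    import { readFileSync, writeFileSync } from "fs";

    const css = readFileSync("critical.css", "utf8");
    const html = readFileSync("index.html", "utf8");

    // Swap the external reference for an inline <style> block. The round trip
    // disappears, but the CSS can no longer be cached separately from the HTML.
    const inlined = html.replace(
      '<link rel="stylesheet" href="critical.css">',
      `<style>${css}</style>`
    );

    writeFileSync("index.inlined.html", inlined);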
With increased adoption of HTTP/2, both of these techniques will be deprecated, as concurrency is a core feature of the HTTP/2 specification. Spending time to implement techniques that will not be needed in an HTTP/2 world does not seem worth the effort, especially given the low performance impact.

HIGH PERFORMANCE, LOW RELIABILITY


These optimizations have a higher performance impact but still have challenges in terms of reliability. While all of these
optimizations attempt to improve the performance of images, they all go about it in different ways. Improving the performance
of images often results in high performance gains, as they make up such a large percentage of page weight. The time
required for implementation may be worth the performance gains. One thing to remember with each of these is the ongoing
care and feeding that is required; any web site change means re-testing and potentially rewriting code to execute correctly.

Image sprites were one of the first optimizations on the scene. The goal was to reduce the number of round trips by combining all the images into a single file and using CSS pointers to tell the browser which part of the image to display. Having to create a new sprite and update the CSS every time the images change reduces the reliability of this technique on a site whose images change frequently.
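For illustration, the "CSS pointer" is just a background offset into the combined image. A small browser-side sketch, with a made-up sprite file and pixel offsets:

    // sprite.ts: show one 32x32 icon out of a combined sprite sheet (offsets are made up)
    const icon = document.getElementById("cart-icon") as HTMLElement;

    icon.style.width = "32px";
    icon.style.height = "32px";
    icon.style.backgroundImage = "url(/img/icons-sprite.png)"; // one request for every icon
    icon.style.backgroundPosition = "-64px -32px";             // where this icon sits in the sheet

    // If the sprite sheet is regenerated with a different layout, every offset
    // like the one above has to be updated, which is the maintenance cost described above.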
Prefetching images means anticipating what a user will click on next and populating the browser's cache with that content. As images are highly cacheable, they are frequently pre-fetched. Requesting images before a user needs them can speed up performance if the prediction was correct. But predicting human behavior is not easy; you may end up sending content to the user that wasn't needed, wasting precious bandwidth. It is difficult to reliably predict what a user will do next after viewing a web page.
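As a rough sketch, prefetching can be as simple as creating detached image objects for the guesses (the URLs below are hypothetical):

    // prefetch.ts: warm the browser cache with images we guess the user needs next
    const likelyNextImages = [
      "/img/product-2-large.jpg",   // hypothetical guess: the next item in a carousel
      "/img/product-3-large.jpg",
    ];

    for (const url of likelyNextImages) {
      const img = new Image();
      img.src = url; // the browser downloads and caches it; nothing is rendered
    }

If the guess is wrong, that bandwidth was spent for nothing, which is exactly the reliability concern.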
Lazy-loading of images is the opposite of pre-fetching content. Instead of loading all the images on the page, the browser defers loading of the resource until a later time. This technique is used to first populate only the content that appears above the fold; images that are outside of the viewport are not loaded immediately. If a user navigates away from a page prior to the image loading, this technique can also serve to reduce overall server load. Predicting what is in the viewport, though, is not always successful, which can result in images that should be visible never loading.
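One common way to implement this is with an IntersectionObserver; a hedged sketch, assuming images are marked up with a data-src placeholder instead of a real src:

    // lazy-images.ts: defer offscreen images until they approach the viewport
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? "";  // start the real download
        obs.unobserve(img);               // each image only needs to load once
      }
    }, { rootMargin: "200px" });          // start a little before the image scrolls into view

    document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
      observer.observe(img);
    });

Get the rootMargin or the placeholder markup wrong and you see the failure mode described above: an image the user is looking at that never loads.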

LOW PERFORMANCE, HIGH RELIABILITY


Optimizations in this category provide lower performance benefits but are generally considered safe. The performance benefits are lower because these optimizations apply only to a small subset of users or to a small number of resources.
Domain sharding was popular back in the day when browsers only opened two connections per domain; as browsers have evolved, they now support six or more connections per domain. For those still forced to use IE 6, domain sharding is still an excellent solution to performance issues. As with the other techniques, HTTP/2 will eliminate the effectiveness of domain sharding.
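For context, sharding is usually nothing more than rewriting asset URLs across a couple of hostnames; a sketch with hypothetical hostnames:

    // shard.ts: spread asset URLs across hostnames so an old browser can open
    // more parallel connections (hostnames are hypothetical)
    const shards = ["static1.example.com", "static2.example.com"];

    function shardUrl(path: string): string {
      // A stable hash keeps each asset on the same shard so it stays cacheable.
      let hash = 0;
      for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return `https://${shards[hash % shards.length]}${path}`;
    }

    // shardUrl("/img/logo.png") might return "https://static2.example.com/img/logo.png"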
Minification complements compression, making text files even smaller by eliminating whitespace and comments, as these aren't needed by the browser to render a page. Minification can provide small improvements for already-compressed content and greater savings for users who can't receive compressed content. With a number of tools out there to minify content, the time requirements are relatively small and the reliability is high. Implementing minification makes sense if you have already optimized everything else on your site.
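In practice this usually means wiring an off-the-shelf minifier into the build; a sketch assuming the open source terser package (file names are placeholders):

    // minify.ts: strip whitespace, comments, and long identifiers from a JS bundle
    // (assumes the "terser" package is installed)
    import { readFileSync, writeFileSync } from "fs";
    import { minify } from "terser";

    async function run(): Promise<void> {
      const source = readFileSync("bundle.js", "utf8");
      const result = await minify(source, { compress: true, mangle: true });
      writeFileSync("bundle.min.js", result.code ?? "");
    }

    run();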

Figure 3: After minification


Revisiting the earlier concatenation example, we can see the benefits that minification can provide. While concatenation made the performance worse, minifying the content reduces page load time by 12% from the origin and 16.8% from the concatenated version, as the 600 KB JS file is reduced to 428 KB.

HIGH PERFORMANCE, HIGH RELIABILITY


This last category is where you want to focus your attention, as you get the most bang for your buck: the greatest performance gains and the lowest chance of breaking your application. Caching and compression are two very easy ways to improve performance and can generally be configured easily at the server or application delivery controller (ADC). Caching can also be extended a step further by using a content delivery network (CDN) to cache and serve content from geographically distributed locations.
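As a rough illustration, both can often be switched on in a few lines at the origin; a sketch assuming a Node.js server built on the express and compression packages (paths and lifetimes are examples):

    // server.ts: enable gzip compression and long-lived caching for static assets
    import express from "express";
    import compression from "compression";

    const app = express();

    app.use(compression()); // gzip text responses (HTML, CSS, JS) on the fly

    // Serve versioned static assets with a long cache lifetime so repeat visitors
    // pull them from the browser cache or a CDN edge instead of the origin.
    app.use("/static", express.static("public", { maxAge: "30d", immutable: true }));

    app.listen(8080);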
Many FEO techniques focus on improving the performance of first-time visitors, as the hope is that on a repeat visit, content
will come from cache. The challenge is that not all caches are created equal.
Image optimization provides tremendous gains to web sites but requires a large time commitment to ensure that it is done
correctly. Image optimization includes both lossless and lossy optimizations, such as converting an image from one format to
another, stripping metadata, and reducing image quality. When done incorrectly, these can be highly unreliable and result in a
broken site. Taking the time to optimize images correctly is required to get the high performance impact and high reliability for
your application.
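A hedged sketch of the kind of lossy and lossless steps involved, assuming the Node.js sharp library (file names, dimensions, and quality settings are examples):

    // optimize-image.ts: resize, convert, and recompress an image (assumes "sharp")
    import sharp from "sharp";

    async function optimize(): Promise<void> {
      await sharp("hero.png")
        .resize({ width: 1200, withoutEnlargement: true }) // don't ship pixels nobody sees
        .jpeg({ quality: 80 })                             // lossy: format change plus quality reduction
        .toFile("hero.jpg");                               // metadata is dropped unless explicitly kept
    }

    optimize();

Automating this across every image, and verifying that the quality settings don't visibly degrade key images, is where the time commitment comes from.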

WHERE TO START?
Optimizing applications seems a lot like a science experiment. You form a hypothesis that doing X will improve performance. You make the changes, realize something went wrong, investigate what went wrong, fix the problem, and test again. You may or may not get the performance improvements you were hoping for, and if you estimated the project would take a couple of days and it ended up taking weeks, was it worth it? If you are short on time, focus on the high-performance, high-reliability items with short implementation times, like caching and compression. Or stay tuned for the next blog post on how Instart Logic took a different approach to optimizing application delivery by harnessing the power of machine learning.

Visit our Blog for more
