You can't have it all, so the saying goes, and when it comes to application optimization, that is typically true. There are
trade-offs to weigh when deciding which optimizations to deploy. Three of the main aspects to consider when optimizing an
application are:
1. What performance impact an optimization will have. Will the optimization help the majority of customers and the
majority of content, and will it deliver a double-digit performance improvement?
2. How reliable the optimization is. Will implementing a new technique cause the application to break? Does the
optimization make some pages load faster and others slower, or do first-time visits improve while
repeat visits become slower?
3. How much time and effort is required for implementation. Will you have to spend time creating a whitelist or blacklist to
ensure the site continues to function for all users? What happens when you change your site? Do you have to
reconfigure all of the optimizations?
When it comes to optimization techniques, the list of what can be done is constantly growing, and not every technique has
the same ROI.
The grid above attempts to chart some of the most common FEO- and server-based optimizations along
three variables. The y-axis measures performance impact, the x-axis measures the reliability of
the implementation, and the size of the dot reflects the time required for implementation (the larger the dot, the
more time-consuming).
Concatenation reduced the number of round trips from 48 down to 21, a large reduction.
The problem is that the resulting JavaScript file is now 600 KB and blocks other objects from downloading, resulting in a higher
average page load time and a longer time to start render. In this case, splitting the content across multiple connections results in a
better experience for users.
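The mechanics of concatenation are simple; a minimal sketch of a build step (the file names in the usage comment are hypothetical) might look like:

```python
from pathlib import Path

def concatenate(sources, bundle_path):
    """Combine several JavaScript files into a single bundle.

    Fewer files means fewer round trips, but the resulting bundle
    can grow large enough to block other downloads -- the trade-off
    described above.
    """
    # A stray semicolon between files guards against scripts that
    # omit their trailing semicolon.
    bundle = "\n;\n".join(Path(src).read_text() for src in sources)
    Path(bundle_path).write_text(bundle)
    return len(bundle)

# Hypothetical usage:
# size = concatenate(["menu.js", "carousel.js", "analytics.js"], "bundle.js")
```

The return value makes it easy to warn when a bundle crosses a size budget, which is exactly the failure mode above.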
Similarly, in-lining resources can also negatively impact caching. In-lining takes external files and embeds them in the HTML
to eliminate the round trip. Because HTML is typically not cached by the browser and is retrieved on every request, repeat
visitors get worse performance after in-lining is implemented. Many people choose to in-line only smaller files, but this
adds complexity in determining the appropriate threshold and maintaining the whitelist.
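A threshold-based inliner is easy to sketch; the 2 KB cutoff and the file names here are arbitrary assumptions, not recommendations:

```python
from pathlib import Path

INLINE_THRESHOLD = 2048  # bytes; an assumed cutoff, tune per application

def maybe_inline_css(html: str, css_path: str) -> str:
    """Replace a <link> to a small stylesheet with an inline <style> block.

    Inlining removes a round trip on first view, but the CSS is no
    longer cached separately, so repeat views re-download it with
    the HTML -- the trade-off described above.
    """
    css = Path(css_path).read_text()
    if len(css.encode()) > INLINE_THRESHOLD:
        return html  # too large: keep the external reference
    link_tag = f'<link rel="stylesheet" href="{Path(css_path).name}">'
    return html.replace(link_tag, f"<style>{css}</style>")
```

Even this toy version shows where the complexity creeps in: the threshold, the exact shape of the `<link>` tag to match, and which files are eligible all become configuration you must maintain.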
With increased adoption of HTTP/2, both of these techniques will be deprecated, as
concurrency is a core feature of the HTTP/2 specification. Spending time to implement techniques that will not be needed in
an HTTP/2 world does not seem worth the effort, especially given the low performance impact.
Image sprites were one of the first optimizations on the scene. The goal was to reduce the number of round trips by
combining all the images into a single file and using CSS pointers to tell the browser which part of the image to display. Having
to create a new sprite and update the CSS whenever images change makes this technique unreliable for sites whose images
change frequently.
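The CSS side of a sprite is just an offset into the combined image; here is a sketch that generates those rules (the icon names and sizes are made up for illustration):

```python
def sprite_css(sprite_url, icons):
    """Generate CSS rules that point each class at its slice of the sprite.

    `icons` maps a class name to its (x, y, width, height) position
    inside the combined image. Negative background-position values
    shift the sprite so only the wanted region shows.
    """
    rules = []
    for name, (x, y, w, h) in icons.items():
        rules.append(
            f".{name} {{ background: url({sprite_url}) -{x}px -{y}px;"
            f" width: {w}px; height: {h}px; }}"
        )
    return "\n".join(rules)

# Hypothetical layout: two 16x16 icons side by side in icons.png
print(sprite_css("icons.png", {"icon-home": (0, 0, 16, 16),
                               "icon-cart": (16, 0, 16, 16)}))
```

Every time an icon moves inside the sprite, every offset downstream of it changes, which is why generating the CSS rather than hand-editing it is the only sane way to keep this reliable.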
Prefetching images means anticipating what a user will click on next and populating the browser's cache with that content. As
images are highly cacheable, they are frequently prefetched. Requesting images before a user needs them speeds up
performance if the prediction is correct. But predicting human behavior is not easy; you may end up sending content the
user never needed, wasting precious bandwidth. It is difficult to reliably predict what a user will do next after viewing a
web page.
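In markup terms, a prefetch is usually just a hint tag the browser fetches at low priority. A sketch that emits those hints for a predicted next page (the URLs, and the prediction itself, are assumptions):

```python
def prefetch_tags(predicted_urls):
    """Emit <link rel="prefetch"> hints for resources a user is
    predicted to need next.

    The browser downloads these at low priority and caches them;
    a wrong prediction simply wastes the user's bandwidth.
    """
    return "\n".join(
        f'<link rel="prefetch" href="{url}">' for url in predicted_urls
    )

# Hypothetical prediction for a product-listing page:
# prefetch_tags(["/product/hero.jpg", "/product/thumb.jpg"])
```

The hard part is not emitting the tags; it is the `predicted_urls` list, which is exactly the human-behavior prediction problem described above.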
Lazy-loading of images is the opposite of prefetching content. Instead of loading all images on the page, the browser defers
loading a resource until a later time. This technique is used to first populate only the content that appears above the
fold; images outside the viewport are not loaded immediately. If a user navigates away from a page before an
image loads, this technique also reduces overall server load. Predicting what is in the viewport, though, is not
always successful, sometimes resulting in images that should be visible never loading.
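The core decision in lazy-loading is a viewport test. A pure-function sketch of that test follows; real implementations would use the browser's IntersectionObserver, and the geometry here is deliberately simplified:

```python
def should_load(image_top, image_height, scroll_top, viewport_height, margin=200):
    """Decide whether an image is close enough to the viewport to load.

    All values are pixel offsets from the top of the page. `margin`
    starts the download slightly before the image scrolls into view;
    if this estimate of what is "in view" is wrong, visible images
    can end up never loading -- the failure mode described above.
    """
    viewport_bottom = scroll_top + viewport_height + margin
    image_bottom = image_top + image_height
    return image_top < viewport_bottom and image_bottom > scroll_top - margin
```

The `margin` value is the tuning knob: too small and users see blank rectangles while scrolling, too large and lazy-loading stops saving anything.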
WHERE TO START?
Optimizing applications is a lot like a science experiment. You form a hypothesis that doing x will improve performance.
You make the changes, realize something went wrong, investigate, fix the problem, and test again. You may
or may not get the performance improvements you were hoping for, and if a project you estimated at a couple
of days ended up taking weeks, was it worth it? If you are short on time, focus on the high-performance, high-reliability
items with short implementation times, like caching and compression. Or stay tuned for the next blog post on
how Instart Logic took a different approach to optimizing application delivery by harnessing the power of machine
learning.
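Compression in particular is a near-universal win for text assets. A quick sketch of what gzip does to repetitive markup (the sample payload is made up):

```python
import gzip

# A made-up, highly repetitive payload standing in for HTML/CSS/JS,
# which tend to compress very well.
payload = ("<div class='item'>product</div>" * 200).encode()

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes")
```

The exact ratio depends on the content, but text assets routinely shrink by well over half, which is why compression sits in the high-impact, high-reliability, low-effort corner of the grid.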