Google I/O 2010 – Optimize your site with Page Speed


>> Richard: Welcome to the tech talk. Thank you for coming over here. Today we'll be talking about optimizing every bit of your site and Web pages using Page Speed. I'm Richard Rabbat. I'm a product manager at Google.
>> Bryan: And I'm Bryan McQuade. I'm a software engineer working on Web performance.
>> Richard: Before we start, has anybody ever used Page Speed? Can I see a show of hands? Great, excellent. This is the link to the Google Wave. We encourage you to look at it. There are live notes being taken, and you can also post Moderator questions there, IO-speed [inaudible]. So here's what you're gonna get from this talk. This is a 101 kind of talk, so we don't presuppose a great deal of prior knowledge in terms of understanding Page Speed and performance. We'll cover some of the basics, but we'll also go into some more advanced details. We're gonna cover a few things. Most importantly, why you should be here, why performance affects your site and why you should pay attention to it. We're gonna make sure you become familiar with Page Speed and the new features available in Page Speed. And we'll also talk about four new product features, namely export functionality in Page Speed, the SDK and Apache module, as well as Page Speed for ads and analytics. We're gonna spend some time talking about Web performance. For those of you that haven't seen Page Speed, I wanted to give you a brief look at the UI: basically, a bunch of rules and how a Web page is scored against those rules. We're gonna go over the details. But since we're spending some time talking about Web performance, it's good for you guys to see the product first. Web performance 101. Here is why it should matter, why speed is important for you.
We know from a lot of user studies that speed means more people use your site and more people come back to your site. Last year at the Velocity conference that's run by O'Reilly, we were fortunate enough to have a number of companies actually share some of their data on how performance actually affects traffic. In those studies, Google ran what we consider a 400 millisecond latency increase. So basically we took a set of users that we served more slowly by 400 milliseconds. And that corresponded to about a 0.6% decrease in searches, which is very substantial for a company such as Google. Yahoo did a similar experiment. It actually hit their traffic by 5% to 9%. Shopzilla went a little bit further. What they did is they basically rearchitected the whole UI, and that resulted in about a five second latency decrease. And they got a 12% revenue increase. And not only that, it actually decreased their OPEX expenses because they needed less hardware to do the serving. So these are important things that you should worry about whenever you're developing your website. So, Bryan.
>> Bryan: So, now that we've seen why speed, Web speed, is important, I'll do a bit more of a deep dive into the technical aspects. Why don't we start with the building blocks of Web performance? There are three categories you need to be thinking about when you're thinking about Web performance, or the end-to-end picture. That is performance at the Web server, on the network, and in the client, in the browser, as well. On the server, really the most important thing, or the only thing we really look at, is the server processing time. How long does it take your server to generate the response? For a static resource like a file stored on disk, you'd expect that to be close to zero. But for a dynamic response, something in response to a user query, you might see increased processing time.
Now, a little later in the talk, we'll talk about some ways to mitigate the impact of that processing time. But that's the main factor at the Web server. Then on the network, the two primary factors are bandwidth and round trip time. We'll dive into those a little bit more on future slides as well. Finally, on the client, in the browser, you're looking at parse time — how efficient is the browser at parsing HTML — and resource fetch time — how efficient is the browser at discovering and fetching resources. We've seen a big improvement in efficiency in browsers in the last 12 months in terms of resource discovery and resource fetching. Previously, browsers fetched JavaScript serially. Now most modern browsers — all modern browsers, in fact, in the past 12 months — do parallel JavaScript fetches, which is a big win. And we're continuing to see improvements there. And then finally, the last two categories: layout and rendering, and JavaScript. For most traditional Web pages out there — traditional being the pre-AJAX pages — these categories don't tend to have as large an impact. But if you've got a large DOM or a complex DOM, layout can actually be a significant time sink. And JavaScript, again, if you're using a JavaScript-heavy AJAX page, that's potentially an important contributor as well. In those latter two cases there's actually another tool, and hopefully you got to attend the tech talk on Google Speed Tracer. That does a nice job of giving you a timeline and drilling down into the specifics of time spent in layout and rendering, and time spent executing and parsing JavaScript. I'd recommend checking out that Google Chrome extension. It does a nice job.
So now that we've looked at the building blocks, why don't we look at an example page load and sort of see how those building blocks come together over the lifetime of the page load. We'll look at a page load for a Google search request, a search query for Half Dome photos. What I'll show is we've got sort of three columns here: client and server — these are activities that happen either on the client or the server — and then the third column, the render column. This is what the page looks like as a result of these different steps along the course of the page load. What we've done here is we've really slowed down the page load. We'll identify the discrete steps that we go through, and then in turn what that looked like in the browser as a result. So we'll understand how all these building blocks come together to actually display the page for the user. First, the first thing the browser has to do every time you navigate to a Web page is potentially perform a DNS lookup. That takes about a round trip time. And in fact, in many cases it'll take longer than a round trip time because you'll hit multiple DNS caches along the way. But roughly speaking you're looking at one round trip time. TCP connection — connect to the server — another round trip time. And then finally, after those two round trip times, the client will send an HTTP request to the server, asking for that specific resource. The server begins to process that query and will start sending back the response. And at this point we've seen three round trip times pass. Round trip time varies considerably depending on where you are and how well connected you are on the Internet.
But you're looking at anywhere from single-digit milliseconds on a local LAN to 10, 40 — I recall the average is about 70 milliseconds — up to hundreds of milliseconds or even a few seconds in the worst cases. So minimizing round trip times is a really important part of optimizing your website. Eventually, once the response comes back after these three round trip times, the browser can begin to parse that content. And then we start to see the page rendering on the screen. Subsequently, more of the content comes back. The browser continues to parse. In this case the browser's discovered that there are four image resources embedded in the response, and so it begins to fetch those resources. What we see is that the network column just echoes this and begins fetching them. Each of those sub-fetches is potentially going to incur a DNS lookup and a TCP connection as well. So you're seeing additional latency there too. And then eventually we see these responses start to come back. And I'll mention too, the gray section is the off-screen area of the page. The top portion is the part the user can actually see. What we're seeing here is that the page is rendering the most important content, the user-visible content, first. And the amber parts are the repainted areas during that last iteration of the load. So what we start to see is the image responses come back, they continue to fill in, and eventually the page finishes rendering. So this is how these different factors — DNS, TCP, server processing, parse and layout, sub-resource fetches — come together during the lifecycle of a page load to load and render that page. [pause] Frequently, when you're performing a Google search, hopefully it feels like it loads like that [snaps fingers]. But in fact all these little discrete steps are happening along the way.
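The round-trip accounting described above can be written down as a back-of-the-envelope model. The cost breakdown (DNS, TCP connect, request/response, then sub-resource fetches) follows the talk; the function itself and its numbers are illustrative assumptions, not measurements:

```python
# Rough model of the round trips described above: DNS lookup,
# TCP connect, then HTTP request/response, plus sub-resource
# fetches, some of which pay DNS + TCP again on new connections.
# All numbers are illustrative assumptions.

def estimate_load_ms(rtt_ms, server_ms, sub_resources, new_connections):
    """Estimate page-load latency from round trips alone.

    rtt_ms: network round trip time in milliseconds
    server_ms: server processing time for the main HTML
    sub_resources: number of embedded resources (images, CSS, JS)
    new_connections: how many of those need their own DNS + TCP setup
    """
    # Main document: DNS (1 RTT) + TCP connect (1 RTT) + request/response (1 RTT)
    main = 3 * rtt_ms + server_ms
    # Each sub-resource costs at least one request/response RTT;
    # each new connection pays DNS + TCP setup again.
    subs = sub_resources * rtt_ms + new_connections * 2 * rtt_ms
    return main + subs

# At the ~70 ms average RTT mentioned in the talk, the three setup
# round trips alone cost 210 ms before any content arrives.
print(estimate_load_ms(rtt_ms=70, server_ms=0, sub_resources=0, new_connections=0))  # 210
```

Even this crude model makes the point of the slide: most of the latency budget is round trips, not bytes, which is why reducing fetches and connections matters so much.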
And understanding those, understanding how they come together, can help you understand how to optimize the page. So with that, Richard will summarize.
>> Richard: So, if you go away from this tech talk, you need to remember three things. These are the three speed guidelines you should always be concerned about whenever you are developing a Web app. The first one is you want to try to serve fewer bytes. You're going over the network, and you want to minimize the number of bytes that you're sending, because they fit in packets, and there are only so many round trips. Some of the ways that we suggest you do it: compress what you serve — enable gzip compression. Obviously lots of people do it; some just still don't. If you have a Web host that is hosting your content and is not enabling compression, move to another one. Optimize images. A lot of the images that come out of a camera are very verbose; there's a lot of meta information that's redundant. Get rid of it with open source tools; you can find a bunch of them. And also make sure that you're only sending the right size and resolution for the image. It saves bytes on the wire, but it also saves processing time on the server and on the client side. Get rid of all the content in the HTML, in the JavaScript, in the style sheets that you've put in for development's sake. All the comments are things that your browser doesn't care about, so get rid of them. Use minification tools such as Closure Compiler, which is also an open source project. And also, cache aggressively. The way I think of it, the best, fastest serving is when you don't have to serve — when everything is in the cache.
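The "serve fewer bytes" advice starts with compression, and it is easy to see why on any repetitive text content like HTML. A quick sketch using Python's standard library (the markup here is a made-up stand-in for a real response body):

```python
# Quick demonstration of what gzip saves on text content.
# The HTML is a stand-in; try this on a real response body.
import gzip

html = (b"<html><head><title>demo</title></head><body>"
        + b"<p>hello world</p>" * 200
        + b"</body></html>")

compressed = gzip.compress(html)
print(len(html), len(compressed))  # repetitive markup compresses very well

# The bytes round-trip losslessly, so there is no downside
# beyond a little CPU on each end.
assert gzip.decompress(compressed) == html
```

On real-world HTML, CSS, and JavaScript, gzip routinely cuts transfer size by well over half, which is why the speakers call out hosts that leave it disabled.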
So see if you can push things to the browser earlier that are gonna be needed a little bit later, so you're not waiting, the user's not waiting. So: serve fewer bytes. Parallelize resource downloads. Modern browsers use up to six parallel connections per hostname. Try to make use of them all. And we'll talk a bit about one of the rules, optimizing the order of styles and scripts, which also helps with parallelism. And don't shy away from supporting modern browsers. Don't develop for the lowest common denominator. It doesn't help. Push the envelope. If you need to support older browsers, check the user agent and serve unoptimized content for that user agent. For example, don't serve sprites to old browsers that don't support it. But use spriting, image spriting, when the user agent can support it. So three things: serve fewer bytes, parallelize, and push the envelope in terms of browser support. So I know it's been a few minutes since we've started this talk, and people are anxious to see it. So Page Speed is a Firefox, Firebug extension. And we have about one million active users. For the people that haven't used it, download it and join the fun. [pause] This is our site, code.google.com/speed/page-speed. And the way you're gonna use it is — this is the little Firebug. You start it up. Page Speed is an extension in Firebug. And it tells you about the new features that we have. The first thing you're gonna do is analyze the performance of the page. So I'm analyzing the page on this website, and it gives me a bunch of rules. The first thing you see is a score. A score is something that we believe is a good indication — it's a good metric that you can use, that's reliably reproducible. So you're getting about 82 over a hundred. We think it's okay. It's an okay Web page.
And you have a bunch of rules that ran, and each one is gonna tell you what the issue is. So for example here, leverage browser caching. A lot of this JavaScript has an expiry time of one hour. You should look at the expiry time. Do you really need it to be one hour, or can you push it out? Can you set it at 24 hours, or seven days? Seven days is a good time. You'll make sure that anybody that comes back to your site can actually have it in the cache of the browser. And there are certainly a number of rules here. I encourage you to explore them. And I encourage you to also look at some of the documentation. The easiest way to get to the documentation is just to click on the rule. 'Cause once you see the rule and go, "I don't understand what this rule is," just click on the rule and you have a lot of documentation. All the documentation is open source. And we try to be very illustrative of what the problem is and how you can resolve it. So, going back to our presentation. Bryan?
>> Bryan: Yep. So let's look at one example of why a Page Speed suggestion matters. For each of those Page Speed suggestions, why is it important that you adopt that suggestion and apply it to your site? What is it doing and how is it making the site faster? So we'll look at one specific rule, which we talked a little bit about earlier, around parallelization: the order of styles and scripts. Here's an example of an HTML Web page. What we've got is some interspersed CSS and JavaScript content. It looks reasonable enough. But in fact, in some browsers, what you'll see is that intermixing CSS and JavaScript like this — some CSS, some JavaScript, then CSS — inserts serialization delays in the page load.
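The "leverage browser caching" advice amounts to sending far-future expiry headers. A minimal sketch of what that looks like at the HTTP level, using the seven-day figure from the talk (the header names are standard HTTP; the helper function itself is hypothetical):

```python
# Sketch of the far-future caching headers the rule suggests.
# Seven days is the figure mentioned in the talk; the header names
# (Cache-Control, Expires) are standard HTTP, the helper is hypothetical.
from datetime import datetime, timedelta, timezone

def caching_headers(max_age_days=7, now=None):
    """Build response headers for a static resource."""
    now = now or datetime.now(timezone.utc)
    expires = now + timedelta(days=max_age_days)
    return {
        "Cache-Control": f"public, max-age={max_age_days * 86400}",
        # HTTP-date format for legacy HTTP/1.0 caches:
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }

print(caching_headers()["Cache-Control"])  # public, max-age=604800
```

With headers like these, a returning visitor's browser can serve the resource straight from its cache without any network round trip at all.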
So what you get is the first CSS and JavaScript file will load, you'll get another delay on the next JavaScript file, and then the final CSS file loads. And it turns out — so what you're looking at here is approximately 300 milliseconds in this example, if it's a hundred-millisecond round trip. It turns out that if you just reorder these things, so you put all the CSS up front followed by the JavaScript, the browser — some browsers do regardless — can more efficiently fetch that content, and you'll be able to remove one of the round trip times. So this is an example where it's an easy fix, an easy thing to do. All of our suggestions, this one and all the others, shouldn't have any impact on the look and feel of your page. So as far as the user's concerned the page is exactly the same. And then finally, you remove one of those round trip times and go from 300 milliseconds to 200 milliseconds without any other change in the page. So, we launched in June?
>> Richard: Yep.
>> Bryan: So it's been about a year. And over the last year we've been working hard on a number of things. We've added some new rules and fixed some others. And we just wanted to talk about a few of them, just to give some examples. So we added a rule called minimize request size within the last year. The idea there is that each request that the browser makes has some overhead. And there are things you can do: reducing cookie size and reducing the length of the URL can keep that request size small so that it fits within a single TCP packet and is more likely to be transmitted efficiently and quickly over the network. And that's especially important on mobile, because mobile tends to have high latency and asymmetric bandwidth, where you've got a slower uplink than downlink.
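The minimize request size rule can be made concrete by totting up the bytes of the request line and headers and comparing against one TCP packet's payload. A sketch, where 1460 bytes is a typical Ethernet TCP payload (MSS) and the request contents are made up:

```python
# Rough check of whether an HTTP request fits in a single TCP packet.
# 1460 bytes is a typical Ethernet TCP payload (MSS); the request
# line and headers here are illustrative, not a real client's output.
def request_size(url_path, cookies, host="example.com"):
    request_line = f"GET {url_path} HTTP/1.1\r\n"
    headers = f"Host: {host}\r\nCookie: {cookies}\r\n\r\n"
    return len((request_line + headers).encode())

TYPICAL_MSS = 1460
small = request_size("/s?q=half+dome", "pref=abc")
big = request_size("/" + "x" * 900, "session=" + "y" * 800)
print(small <= TYPICAL_MSS, big <= TYPICAL_MSS)  # True False
```

A long URL plus a bloated cookie can quietly push a request past that boundary, forcing a second packet and, on a lossy mobile uplink, a potentially expensive retransmission.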
So, next, specify a cache validator is a new rule that we added actually fairly recently. And the idea there is that for static content, for static resources, once they do expire — so if you set an expiration of a week or a month or a year, once they do expire — it's possible for the browser to ask the server, "Hey, I have this resource. It's not fresh anymore. It's expired. Has it changed?" And the server can say, "Nope, it hasn't changed. You can keep it for another week," for instance. Using a cache validator allows you to do that; otherwise you have to download the entire asset again even if it hasn't changed. So that's a rule we've added. Specify a character set early. It turns out that if you're serving HTML content and you don't specify the character set — so UTF-8 or Shift-JIS or whatever it might be — the browser has to guess what the character set is. And in order to do that, it buffers content in memory before it actually starts parsing it. So the browser's downloading the content. It's being served from the server as quickly as possible. But the user doesn't see anything on the screen until the browser finishes buffering and analyzing all that content to guess the character set — which it could possibly guess wrong — and only at that point does it start rendering content. So just specify your character set in the HTTP response headers: Content-Type: text/html; charset=whatever it might be. It allows the browser to more efficiently parse and render the content as it arrives on the wire. And then finally, minimize DNS lookups is a rule that we implemented initially based on analyzing some pages internally, a long time ago in fact. And what we noticed is that for certain sites and for certain content, third party content, it tended to flag resources that we felt we probably shouldn't be flagging.
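The validator exchange described above — "Has it changed?" / "Nope, keep it" — is the conditional GET. A sketch of the server side of that decision, using ETags (the function is illustrative; a real server would also handle Last-Modified/If-Modified-Since):

```python
# Sketch of the cache-validator exchange: the browser revalidates
# an expired resource with If-None-Match, and the server answers
# 304 Not Modified when the ETag still matches, sending no body.
def revalidate(client_etag, current_etag, body):
    """Return (status, payload) for a conditional GET."""
    if client_etag == current_etag:
        return 304, b""        # unchanged: nothing to download again
    return 200, body           # changed: send the full response

status, payload = revalidate('"v1"', '"v1"', b"...asset bytes...")
print(status, len(payload))  # 304 0
```

The 304 path costs one round trip but zero content bytes, which is the whole point of the rule: without a validator, the only options after expiry are re-downloading the asset or never expiring it.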
So we invested some time and just recently looked at the algorithm and adjusted it so that it doesn't flag those — it's basically more accurate and makes more accurate recommendations. So actually, we just released Page Speed 1.8, which has a new implementation of minimize DNS lookups that is more accurate and less likely to give you incorrect suggestions. So we're constantly tuning these rules. We're constantly adding new rules, both as we discover issues ourselves and from the feedback we receive from users. So we'll send you a link to the Page Speed discussion forum at the end of the talk. And then, right. Yes. So, go ahead, Richard.
>> Richard: So, a bunch of new features. And today we'll talk a bit about the export functionality. It's basically a beacon that you can send. And I'll just go through the demo immediately. So let's go back here. And, so we have, of course [laughs].
>> Bryan: Right.
>> Richard: So we have export functionality that will allow us to export —
>> Bryan: Try a reload. [pause]
>> Richard: Yeah. That's fine.
>> Bryan: Or switch browsers.
>> Richard: Yeah. Yeah, it doesn't want to start.
>> Bryan: Switch to Safari. Like I said, we're always finding and fixing problems, so —
>> Richard: Yeah. So basically we have two export functionalities. One, we send the data back in JSON format. And we also send the scores to www.showslow.com. If you guys want to try it, if you have Page Speed running, just send it out. You're gonna have a bit of a legal disclaimer. We worked with an outside independent developer who maintains showslow.com. And basically, it gives you a way of keeping track of your Page Speed score across time. And in this case I'm showing an example of, I believe, Google.com and YouTube.com and gmail.com, and measurements that we're sending to ShowSlow.
So when you're doing development, you change some of your Web page, you adapt the Web page to be more performant, and you can track the performance of your page across time. So I encourage you to use this. It's a great functionality. Just don't pound showslow.com with too many beacons at the same time. [pause]
>> Bryan: Do you want to show the site?
>> Richard: Sure.
>> Bryan: Do you want to go look at the —
>> Richard: Oh yeah. So here's the actual site. YSlow is a competitor to Page Speed. We encourage you to use as many performance tools as are available, as many as you can try out. And in this case, what we did earlier is we sent a bunch of requests, a bunch of beacons, to showslow.com, and it recorded them right here so you can keep track of them. And these are the comparisons. So Google.com and YouTube.com are here. And you can see over time the performance of your page, certainly. Okay, so let's go to the next feature. Bryan?
>> Bryan: So, one of the things we've been working on over the past year is the Page Speed SDK. [pause] At the time of our initial launch, Page Speed was entirely a JavaScript implementation. It was pretty tightly coupled to Firefox APIs. And what we found was that we wanted to reuse the Page Speed logic in other environments. So one place early on where we said we'd like to provide this is in Google Webmaster Tools. How many of you are familiar with Google Webmaster Tools? Great. So hopefully, maybe you've seen that there are Page Speed suggestions actually in the Webmaster Tools UI, in the Labs section. And if you haven't used Webmaster Tools before, I'd certainly urge you to check it out. It's a great resource with lots of good, helpful information for your website; assuming you have a website, you can sign up and learn about your site there.
So what we did, over the last nine months, is we've been porting rules from the JavaScript implementation to a sort of browser-independent library written in C++ that we're able to reuse in Page Speed for Firefox, in Webmaster Tools, and in other environments as well. So you can now download that SDK and use it. We've got a build set up for Linux and for Windows. And if you want to build on Mac, I don't think it'll take much work. So if you figure out what small changes to make to the Makefile, or you get that to work, certainly feel free to share that with us, and we'd be more than happy to include it in our open source repository. So I mentioned Webmaster Tools. Do you want to —
>> Richard: Yeah.
>> Bryan: — show that. This is one of the places where Page Speed is available today. And so here's an example of Webmaster Tools. You can see, right, that this is the YouTube section of Webmaster Tools, which we're able to see. And it gives you some example feedback for some pages on your site. So you can drill down and, for example, see that these four rules have specific suggestions to help you tune and optimize the site. One example is combine external JavaScript, which you can learn more about in the documentation Richard showed earlier. But the idea is that if you combine these two resources, the browser will be able to load the page more efficiently, at least in some browsers. So now, not only do you have access to Page Speed suggestions in the Firefox tool, you can just go to this website, Google Webmaster Tools, and get this information without having to install an extension, without having to run it live on your site. This data's just provided for you as part of the Google Webmaster Tools service. Do you want to [inaudible]. [pause] So in addition, we've actually worked with a couple of other tools as well. This is the Page Speed for Firefox UI.
Page Speed for Firefox is now driven off of the Page Speed SDK as well. Gomez, a Web performance company we've been working with, also integrated the Page Speed SDK rule set, and they're providing that in their tools. This is a pre-release. They haven't actually launched this yet, but it's something that'll be coming soon. And then [coughs] Steve Souders, excuse me, Steve Souders built a nice Web page where you can take a HAR file — a HAR file's an HTTP Archive file, kind of a new JSON format that lets you capture all the information about a page load: all the resource content, headers, timing information. You paste that into this Web page. It uses the Page Speed SDK and comes back and gives you a Page Speed score. So these are just a few deployments. We launched the SDK about a month ago, and we've seen great uptake. Well, let's do a little bit of a deep dive into the SDK now [clears throat] to see how you might use it. So if you want to use the Page Speed SDK, it's pretty straightforward. You just need to choose an output formatter — how do you want to present the results? Do you want just plain text, HTML, JSON, et cetera? Pick the Page Speed rules you'd like to run. Specify a source for your input data — so for example, HAR or some other input source. And then just invoke the engine. So let's look at a snippet of code that does that now. [pause]
>> Bryan: So we're choosing to use a text format, just something that will print to standard out in this case. And we're populating the core rule set, the core Page Speed rule set — the rules that you're familiar with in the tools. We're going to use a HAR file as our input. So this is an example HAR. HAR is a JSON format. The dot, dot, dot would be a big blob of content that contains all the resource bodies and other things. And then finally, we'll invoke the Page Speed engine: pass in the rules, initialize, compute and format results.
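Since a HAR file is plain JSON, it is easy to inspect outside the SDK as well. A sketch of reading one with nothing but the standard library — the two entries here are made up, and real HARs carry much more per entry (timings, headers, response bodies):

```python
# A HAR (HTTP Archive) file is plain JSON: a log with one entry
# per request/response pair. This tiny HAR is made up; real ones
# also carry timings, headers, and resource bodies.
import json

har = json.loads("""
{"log": {"entries": [
  {"request": {"url": "https://example.com/"},
   "response": {"status": 200, "bodySize": 5120}},
  {"request": {"url": "https://example.com/a.css"},
   "response": {"status": 200, "bodySize": 800}}
]}}
""")

entries = har["log"]["entries"]
total_bytes = sum(e["response"]["bodySize"] for e in entries)
print(len(entries), total_bytes)  # 2 5920
```

This is the same kind of input the SDK's engine consumes; tooling like Steve Souders's page simply feeds the parsed HAR through the Page Speed rules and formats the result.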
And at this point, the results will be printed to standard out, on the console. So let's look at that actually. So one of the tools bundled with the Page Speed SDK is called HAR to Page Speed, which is actually the tool that powers the HAR to Page Speed website that Steve built. And you'll notice that the code we just looked at is the core, the guts, of that tool. It's got the ability to read a file from a command-line argument. But beyond that, I mean, it's essentially what we just looked at. And so you run it like this — quite simple, right? Now you're not in a browser anymore. You're on the command line, a different environment. And you're able to get that same information, those same answers, here on the console, easily and rapidly. And so potentially you could write an automation tool where you use something like this to automatically analyze HAR files over time, all without having to stand up a browser and run Page Speed and that sort of thing. So we can learn exactly what we can do to speed up the Web page. So that's the Page Speed SDK. Now let's look at another deployment of Page Speed technology that we've been working on. It's sort of, it's very early in the lifecycle of the project. But what we decided we wanted to do is try to shift as much as possible from telling Web developers what they can do to speed up their website, to actually trying to do that for them. So basically, what we decided to do was implement an Apache module that encapsulates a lot of these Page Speed suggestions. So all you have to do is install this module on your Apache server, and then, ideally, you don't have to think about the problem anymore. We just automatically optimize the images, the HTML, the CSS, the JavaScript, combine assets, and extend caching periods using a technique called URL fingerprinting, which is talked about in our documentation as well.
All these optimizations are applied automatically, so you don't have to go to the trouble of implementing them — or Web content hosters don't have to do that — and they can just sort of have this applied automatically. So this project is open source as well. Like I said, it's early in the development cycle, so it's not ready for use yet. But if you're interested, take a look at our code.google.com repository.
>>: [Inaudible] does it strip semicolons?
>> Richard: So the question is —
>> Bryan: Does it put semicolons at the end of lines?
>>: Yes.
>> Bryan: I don't know actually. I want to say — so the —
>>: [Inaudible]. Does it preserve the newlines, or what?
>> Bryan: I actually don't think it preserves the newlines, so it would need to insert semicolons. So that's actually a good example of a case where —
>> Richard: If you can repeat the question, because —
>> Bryan: Yeah. So he asked if it inserts semicolons at the end of new lines. 'Cause one of the things JavaScript minifiers do is they tend to remove newlines. JavaScript newlines implicitly add a semicolon. So if you simply combine the two lines, you can end up with JavaScript that breaks. I want to say we do fix that, but I'd have to double-check. In any case, if you run into a problem, you can actually go to the same URL and file an issue or post it on our discussion forum. And we're always happy to accept code patches if you're interested in submitting patches, or we'll try to fix the issue ourselves for a subsequent release. So here's an example of, as the HTML flows through Apache, sort of coming in unoptimized like this perhaps, we'll parse that HTML and perform some optimizations. And what you end up with is HTML that's a little more minified and that's fetching fewer resources. So what we've done here is we've combined the two CSS assets, and we've combined the two JavaScript assets.
What you can't see here is we would have also extended caching periods and removed unnecessary white space along the way.
>> Richard: So, before you move on, why don't you talk a little bit about the increase of caching lifetime, because it's quite interesting.
>> Bryan: Oh, sure. So, one thing that we'll often find is that caching periods are either unspecified or set sort of not very aggressively, on some sites anyway. And developers are sometimes concerned that, well, if I increase it to a week or a year, what if I need to change that resource? And so what we recommend is a technique we call fingerprinting — URL fingerprinting, essentially — which looks at the actual content of the resource and embeds that fingerprint in the URL. So what you're looking at here is /cache/ some blob that makes no sense, right, .css. And what that actually is, is part of an md5sum of the concatenation of a.css and b.css. So now, because we've sort of captured a fingerprint of the actual contents in the URL, we can use a really aggressive caching lifetime. We can tell this thing not to expire for a year. And then if it does happen to change, well, the contents will change, the fingerprint will change, and in turn the URL will change, right? So the browser will know, "Oh, I have to go fetch this other resource which has a different URL that's not in my cache." So this lets you kind of, instead of specifying how long the browser should cache, basically expire the resource when it actually changes. Instead of having to wait for that expiration time, you just change the URL, and the browser will download it as soon as the URL changes. So that's a technique we'll use. It's a little bit of a fragile technique. You have to sort of match up the content signatures with the actual URLs in the content, which you can do by hand, or if you use mod_pagespeed, we'll do that for you automatically. So that's mod_pagespeed.
>> Richard: So, Page Speed came out of Google Labs.
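The fingerprinting idea described above — hash the concatenated contents, embed part of the digest in the URL — can be sketched in a few lines. The path layout and digest length here are illustrative assumptions, not mod_pagespeed's actual naming scheme:

```python
# Sketch of URL fingerprinting: hash the concatenated resource
# contents and embed part of the digest in the URL, so the URL
# changes exactly when the contents change. The /cache/ path and
# 10-char digest prefix are illustrative, not mod_pagespeed's scheme.
import hashlib

def fingerprint_url(*bodies, ext="css", digest_chars=10):
    digest = hashlib.md5(b"".join(bodies)).hexdigest()
    return f"/cache/{digest[:digest_chars]}.{ext}"

a_css = b"body { margin: 0; }"
b_css = b"h1 { color: red; }"
url = fingerprint_url(a_css, b_css)

# Any change to either file yields a different URL, so the old
# cached copy is simply never requested again:
assert url != fingerprint_url(a_css, b"h1 { color: blue; }")
print(url)
```

Because the URL is derived from the bytes, the server can safely attach a one-year expiry: stale copies are never a problem, since an edited resource gets a brand-new URL the browser has never cached.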
We, we invested a lot of time trying to understand optimization of the UI. At Google we, we built a great deal of these rules internally. And after we, we released it as open source about a year ago, just like Bryan said, we got a lot of good feedback. And one of the, one of the most important pieces of feedback is that a website is not generally coming from one, one dimension. There's, there's content that, for example for a publisher, there's the content that the reporter's writing, there's the ad systems that you're shipping so that you can monetize your, your pages. And there's also tracking analytics that you need so you can keep track of measurements and all the metrics that you care about. So we invested a lot of time trying to understand how to address this. And our approach is, is to try to give as much information back to the developer as possible. And to do that we basically started focusing, as, as a first step in terms of third party content, on ads and trackers. And I will show the demo. [pause]
>> Richard: So this is the YouTube page. And I'm gonna start Firebug. This is Page Speed. And there is a filter option with these, these options. The first one is analyze ads only, analyze trackers only, content only, and the complete page. The complete page is what you're used to when you run Page Speed, for those of you that have run it. And now what we're going to do is we're going to filter just ads and try to, and try to see what we're gonna get in terms of that. So I'm going to, first going to analyze the performance for the complete page. I get a bunch of rules. Obviously there, there's a lot of recommendations for just about every rule. And then I'm gonna analyze the ads only. All right, refresh the analysis. You can see like all these rules are not applicable anymore. And what we're, what we're looking at is specifically the ad content.
And so we have a number of filters for, for what we think are ads. And by the way, all the, all the filters are open source, so if you have suggestions for adding more. Today we're not particularly, we don't have a lot of coverage internationally. So it would be good to have more international coverage for ads, ad networks, ads that we don't capture today. And you'll see like DoubleClick is being served on YouTube, but obviously it's a, it's a, it's an ad. And we're going to give you recommendations about this ad. The same thing happens for analytics, although I don't believe that YouTube has analytics on their, on their pages. So what this will give you is enough information to understand how third party content is affecting the performance of your pages. We're going to extend it to gadgets. We believe gadgets are becoming a big, a big part of every Web page, and we, we think it's important for every Web developer to actually understand the impact of all the content that they have, and understand the impact of things that they have control over versus things they don't have control over, and make the right decision. And every, every improvement is a, is a, is a balance between including more features and thinking about speed as a feature. And we're sure, we're sure you can find the balance there. With, with this we hope that giving all this information will, will also push third party content providers to actually try to make sure that they are optimal, so that when they're served out of your Web pages they, they are fast and performant.
>> Bryan: And this is a feature we just added in...
>> Richard: So, we just released it this morning.
>> Bryan: Yeah.
>> Richard: We just pushed it out, it's in beta, ten percent of...
>> Bryan: Yeah.
>> Richard: ...all of our users are getting it...
>> Bryan: [Inaudible].
>> Richard: ...as of this morning.
>> Bryan: You may get it automatically if you have Page Speed installed, and if not you can go to the Page Speed download page and download the Page Speed beta, and you'll get that feature as part of that download.
>> Richard: And, so it's a new feature. So give us feedback online on the discussion list, it would be great. And obviously I only covered this briefly. [pause] Future work.
>> Bryan: Yep. So to wrap up, I'll talk about a few of the rules that we're thinking about adding to the rule set over the coming months. I'll talk about three rules. The first is to recommend using chunked encoding. So chunked encoding is a technique that allows you to send a page in pieces, as opposed to sending the whole thing after generating the whole thing. And this actually relates to that bit I talked about at the beginning where server latency, what do we call it, server processing time, can add to the page load. Oftentimes if you use chunked encoding, you'll mitigate or even eliminate that, as far as the user's concerned. What it lets you do is essentially send the, so the expectation is that most pages that are dynamic, search results pages, user-customized pages like email websites, etcetera, have sort of a static part of content at the head of the page that doesn't take anything to compute. It's essentially just a static string. And the idea is that you send that as the first chunk of the response. While you're doing that, you start computing the actual dynamic data the user requested. And what you have is, in parallel, you're sending that static data down the wire while you're computing the user's request. And then as soon as that dynamic content is generated and ready to serve, you send it right behind as a separate chunk. And depending on the user's connection, it often just looks like a, a, a consistent stream of data that actually was never interrupted.
So chunked encoding, so I should say the default behavior in HTTP is to specify the length of the response in the response headers. The response headers come before the entire response body. So by default, you have to wait and buffer the entire response, the entire dynamic response, before you start sending any of it. So chunked encoding lets you do this in chunks: send that static head first and the, the dynamic body afterwards. And what we see is that this has been a big win for Google properties like search and calendar that have fit that pattern of a dynamic response with a static header. And so, so what we'll see oftentimes is that before implementing this kind of technique you'll have this HTTP waterfall chart that shows the timeline of the resources being downloaded. It looks something like this. You'll spend a lot of time downloading that HTML resource. Towards the end of that download you'll start downloading the sub-resources declared in that content. And once you enable chunking, what you get is, so one nice side effect of, of this is that external JavaScript and CSS are oftentimes declared in that static chunk. So by sending that static chunk much sooner, you pull in those sub-resource fetches significantly sooner and allow the browser to start downloading, parsing and requesting those resources much more quickly in the page load. So this is a beneficial technique for dynamic responses. So second, minimize the size of the early-loaded resources. And the idea here is that browsers have become much more efficient at downloading resources, especially JavaScript. A year ago, most browsers out there would download JavaScript serially. They'd sort of, if you had ten JavaScript files declared in a row, it would download one, wait for it to finish, parse and execute it, move on, download the next one. And you saw this stair step in the waterfall chart.
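The chunked wire format itself is simple (HTTP/1.1, RFC 2616 section 3.6.1): each chunk is its size in hex, CRLF, the bytes, CRLF, and a zero-size chunk ends the body. A sketch in Node, assuming ASCII content so string length equals byte length; the page fragments are made up for illustration:

```javascript
// Encode one chunk of an HTTP/1.1 chunked response body.
// Assumes ASCII data, so .length is also the byte count.
function encodeChunk(data) {
  return data.length.toString(16) + '\r\n' + data + '\r\n';
}

// A zero-size chunk terminates the body.
const lastChunk = '0\r\n\r\n';

// The static head of the page can go out immediately...
const staticHead = '<html><head><link rel="stylesheet" href="a.css"></head>';
// ...while the dynamic part is sent as a second chunk once computed.
const dynamicBody = '<body>search results here</body></html>';

const wire = encodeChunk(staticHead) + encodeChunk(dynamicBody) + lastChunk;
```

Because each chunk carries its own length, the server never needs a Content-Length header for the whole page, which is exactly what lets it flush the static head before the dynamic part exists. In practice most server frameworks do this for you when you flush output early.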
So what we're seeing now with the modern browsers, all the modern browsers, all the major browser vendors have implemented this, is that you get parallelized JavaScript fetches, much more efficient use of the network. But regardless, the browser can't present anything to the user until all of those resources have been downloaded and all of the JavaScript has been parsed and executed, CSS as well. So the less you serve up front, the less you serve in the head of the page, and the more you can defer to later in the page until after content has been rendered, the faster that initial render of the content, that initial sort of time to first paint, will be, and the less time the user's sort of sitting there staring at a blank screen. So you can actually accomplish this technique today using two of our rules in the Page Speed rule set in Firefox, remove unused CSS and defer loading of JavaScript, which will help you to understand which JavaScript and CSS are actually used on your page and which, which are not needed until later. What the new rule will do is it will streamline that process a little to make it a little easier to apply the technique. And then finally, minimize fetches from JavaScript. So as browsers have become more efficient in the last 12 months, they've parallelized JavaScript fetches. What we've seen is that JavaScript that's fetched using JavaScript still gets serialized. So we pay a penalty for fetching JavaScript from JavaScript. So, so sometimes JavaScript libraries, you'll see this in a lot of actually major websites, will do something like this. Very straightforward, write a couple script tags. And then they'll use some JavaScript library to load a couple of these JavaScript resources. It looks pretty reasonable. But what this does in the modern browsers, traditionally this actually had no latency impact.
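A sketch of the pattern being described, with made-up file names. The slow variant hides two of the three script URLs inside JavaScript, so the browser's speculative fetcher can't see them; the fast variant declares the same resources as plain tags the fetcher can prefetch in parallel:

```javascript
// Slow: script URLs only come into existence when JavaScript runs,
// so a speculative (lookahead) parser cannot discover them early.
const slowPattern = `
<script src="foo.js"></script>
<script>
  // A loader like this keeps common.js and effects.js invisible to the
  // lookahead parser, serializing their downloads behind foo.js.
  ['common.js', 'effects.js'].forEach(function (src) {
    document.write('<scr' + 'ipt src="' + src + '"></scr' + 'ipt>');
  });
</script>`;

// Fast: the same resources as plain markup; the lookahead parser can
// start all three fetches in parallel while earlier scripts download.
const fastPattern = `
<script src="foo.js"></script>
<script src="common.js"></script>
<script src="effects.js"></script>`;
```

These are just illustrative strings, not a real page, but the markup difference is the whole fix: if the URLs are static, writing them as plain script tags is usually a mechanical change with a measurable payoff.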
A year ago, in most browsers it didn't make a difference, because it was just gonna be fetched serially whether you loaded it using JavaScript or whether you loaded it using a script tag. And so what happens here is that the browser uses sort of a, a speculative fetcher. So it runs and parses ahead of the renderer, looks for tags and says, "Oh, I found a script tag. Okay, foo.js, I'll fetch that." Parses ahead, parses ahead. Hits that script block. It says, "Well, I'm just a speculative fetcher, I don't actually execute JavaScript, so I can't do anything about this one. Skip it." Then eventually the renderer catches up, parses and executes foo.js and says, "Okay, next I'll execute that script block," because it executes scripts in order, and only at that point do common.js and effects.js get fetched. And what this ends up looking like is that stair-stepping behavior that you saw in older browsers, the serialized JavaScript fetches. So if you've got JavaScript fetched in this way on your page, and it's easy to just turn them into script tags, you'll go from that serialized JavaScript fetching to parallel JavaScript fetching, making your page display its contents sooner than it otherwise would. So then finally, I'll let Richard talk.
>> Richard: So a lot of the development for Page Speed happened early on, when we didn't have Chrome, we didn't have Developer Tools. And the last year we've been focused a lot on the rules and the, the correctness of the rules, and building that, that, the SDK. We're going to be releasing a version of Page Speed for Chrome, integrated with Chrome Developer Tools. And we're hoping to get it out by the end of this year. We know it's a, it's a, it's a very high request by everybody. We apologize for not having been able to do it earlier. But Developer Tools are now such a complete developer environment for us that we, we're going to be landing in Chrome this year.
So where can you get more information? A bunch of places. So we have a very well-developed website, thanks to our wonderful tech writer. So everything is at code.google.com/speed/page-speed. We, we have, all our development is open source. We don't do any development in a sandbox somewhere. Do contribute; if you'd like to contribute, just asking for features and filing bugs is great. And there's a quite active mailing list that you can subscribe to and help us make the product better. And tell us about success stories using Page Speed, or problems, how you used Page Speed and how it didn't work, so we can make it better. Thank you. So let's see if we have anything on, on Moderator to cover.
>> Bryan: And if you have questions.
>> Richard: And if you have questions, please, the microphone is there. [Audience clapping]
>> Richard: Thank you.
