
Google I/O 2012 – Optimizing Your Google App Engine App


MARZIA NICCOLAI: Hi, welcome to Optimizing Your App Engine App. I'm Marzia Niccolai.

GREG DARKE: I'm Greg Darke.

TROY TRIMBLE: And I'm Troy Trimble.

MARZIA NICCOLAI: And we're going to be talking to you today. Here's a little overview of the agenda that we'll be covering. First, we're going to go into some coding tips on how to write your App Engine app more efficiently. Then we're going to cover some performance settings. And then we're also going to talk a little bit about what we're planning in the future and how this will impact the way you scale and write your app.

First, I just want to talk a little bit about latency versus cost, which is really what we're trying to cover here when we mean optimizing. These aren't always conflicting goals. Sometimes these things conflict, and sometimes when you optimize for one, you optimize for the other. My general rule of thumb is: if I make something more efficient, that's generally also going to lower its cost. If I'm using more resources to accomplish lower latency, that's usually going to be higher cost for me. And sometimes, with the performance settings App Engine makes available, you can choose to lower the amount of resources you're using. It's going to cost less, but it's also going to have slightly higher latency.

Now, the other question is, what do we mean by good latency? I've put up a slide here from an admin console graph of an application that I work on, and I've highlighted our lowest latency handler and our highest latency handler. For the base handler, 46 milliseconds, I'm just going to say I think that's pretty good. It's what your user is going to hit the first time they come to your app: 46 milliseconds, and they get something served to them. Now, the highest latency handler, you're going to ask: 942 milliseconds, basically 1 second, is that good or bad? Well, a lot of that depends, and a lot of that really covers the heart of what we're talking about today, which is, who's seeing that request? If your user is seeing that request, is a second good? But if it's just some request running in the backend, maybe I don't care if it's 942 milliseconds. Of course, in this case, it's something that's asynchronously requested but served to my user. So I see that 942 milliseconds and I'm saying, well, it's not bad. But could I be doing something better? I probably could.

Apparently, I have to be standing here to get the thing to work. OK, I just want to say that we're using Python here for our code examples, but really, a lot of these things are language agnostic. When we do say something that's language specific, we'll call it out. And even for the Python, please don't copy and paste it into your app, because it's meant as an example of good and bad things to do, not production-ready code.

So the first thing I'm going to talk about is Datastore tips. I'm going to structure this as, first, things that we've actually seen, things not to do, and then some general tips of things to do. Now, the first thing I get asked is, what are some general strategies when I'm actually writing my models for the Datastore? What I always try to tell people is: think about the pages that you're serving to the users, and try to design your models so that those pages are served efficiently. And, of course, pages that you rarely see, you shouldn't spend as much time optimizing as pages that you see a lot, because those are going to be the bulk of your requests and the bulk of your costs.
And this isn't really related to optimizing your app per se, but everyone wanted me to plug it: make sure you put your app on the High Replication Datastore, or migrate your app, because now you can't really create apps on the Master/Slave Datastore. This is the most important point of all, because you want your app to be reliable above everything else, and the HRD Datastore will help you do that.

So I have some specific examples of things I'm going to go over, and I'll just dive right into those. This is my first Datastore anti-pattern. It generally looks something like this: you have a query, and you're fetching one result. Basically, what this code is supposed to say is, I know which entity I want to retrieve from the Datastore. So I have to ask myself, why am I going to query for it? Instead, we have something called a Datastore get. Here I have a get by key name, which is something I can set when I create the model; there are also Datastore IDs if you don't set a key name. If you know what the entity you're expecting is, you really want to try to write your app so you can use as many gets as possible. The next slide really shows the reason. If you look at what happened when I did a query, I did two Datastore read ops, and it took me 59 milliseconds because I had to scan through some indexes. After, it's 1 Datastore fetch op and 4 milliseconds. This is really a no-brainer: 50% less and 14 times faster. So this is what I mean, cost and efficiency are not always contradictory.

For Python, there's something language-specific here: don't use fetch at all. Fetch actually calls another function called run and then puts the results in a list, so you can just use run directly, which returns a generator, and that's going to be less memory for your application. And anecdotally, no guarantees, it's about 10% to 15% faster because it uses an asynchronous prefetch, which is something that Greg's going to cover a little bit later, asynchronicity. But this is something to call out specifically for Python.

Now, my next Datastore anti-pattern is something that you may not be using in your app today, because it's something we introduced, I think, two or three releases ago, called projection queries. Projection queries are really good if you're reading data only and it's a small entity. The first thing on the slide is what an old query would look like on App Engine, a model query with filters. The second one looks very familiar to those of us who love SQL: I'm actually just selecting the fields I actually want to read. This, again, was something not supported until recently. But now that it is (if you just want the newest update of Firefox... I think we can wait; let's go back to full screen), I just want to print out these two fields from my model, so that's all I'm going to retrieve, text and author. And the big news here is that the first one is a full Datastore read op, which is $0.07 for every 100,000 ops in App Engine pricing. The second query is $0.01 per 100,000 ops, 85% less. And this is something that's really easy to switch out. One thing to know about projection queries, though, is that you can't write back an entity returned from a projection query, so that's something to be aware of. If you find something you want to write, you can always fetch that entity and write it. You don't have to replace queries everywhere, but where you can use projection queries, you really should.
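A minimal sketch of the get-versus-query, run(), and projection patterns described above, using the Python db API from the talk. The Greeting model, its fields, and the key name are hypothetical stand-ins:

```python
from google.appengine.ext import db

class Greeting(db.Model):
    author = db.StringProperty()
    text = db.StringProperty()
    date = db.DateTimeProperty(auto_now_add=True)

# Anti-pattern: querying when you already know which entity you want.
# This costs 2 read ops and an index scan.
results = Greeting.all().filter('author =', 'marzia').fetch(1)
greeting = results[0] if results else None

# Better: a direct get by key name, a single fetch op.
greeting = Greeting.get_by_key_name('greeting-marzia')

# Python-specific: run() returns a generator with asynchronous prefetch,
# instead of materializing the whole result list like fetch() does.
for g in Greeting.all().run():
    print g.text

# Projection query: only the named fields are read back, and it is
# billed as a small op instead of a full read op.
for g in db.Query(Greeting, projection=('text', 'author')):
    print g.text, g.author
```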
OK, one of my favorite/least favorite anti-patterns is something that looks really natural to all of us: limit plus offset. It seems like a good idea. Except what happens in App Engine when you specify an offset is that it scans through those first 10 index rows and then reads the next 10. So basically, you're doing almost 20 results' worth of work with this query. What we have instead in App Engine is the Datastore cursor. What cursors do is return to you the place where that query left off, so the next time you go to run the query, you can specify that cursor and it knows where to start from. And again, this is something where you see 25 fetch ops in the first example, because we're doing almost all of the work of the query, and after, we're just doing the 10 that we want: 47% less, 13 times faster.

So now we're going to get into some things that you can actively do to help improve efficiency. Remember, one thing that I talked about was minimizing the amount of work you're doing when serving a page. We've recently introduced something called embedded entities, which is a way to store structured data within a model, and what this really helps you do is denormalize your data in a sensible way. In this case, what you see is a contact where I want to store addresses within the contact. I may want to store these addresses elsewhere or use them somewhere else, but I can also store this model inside my contact. And if I need to put all of that on one page, all the information is there: one query, it's really great. Now, this isn't something that's going to save you any money, but it is something that's going to help your app run faster.

And the last thing I want to point out is: don't index things that don't need to be indexed. When App Engine serves query results, it always serves them from a precomputed index, so what we do by default for most properties on a model is write all of the index rows every time you create or update the entity. But you know that there are some things you're never actually going to query on. In this case, I have something called a display name. That's just something that's supposed to look nice on the page, that I want to store in my model. I'm never really going to query on it, so why am I going to write an index row every time I update this model? By simply putting indexed=False here, when I update this entity, that index row won't get written. Now, I won't be able to query on it, but in this case, I don't care, and I've saved myself some money because I'm not writing an index.
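A rough sketch of the three patterns just described: cursors instead of offsets, unindexed properties, and embedded entities. It reuses the hypothetical Greeting model from the earlier sketch; the embedded-entity example uses NDB's StructuredProperty, which is how this feature surfaces in Python:

```python
from google.appengine.ext import db, ndb

# Cursors: resume where the last page ended instead of paying for an offset.
query = Greeting.all().order('-date')
first_page = query.fetch(10)
cursor = query.cursor()          # opaque bookmark after the last result

query = Greeting.all().order('-date')
query.with_cursor(cursor)        # start scanning from the bookmark
second_page = query.fetch(10)

# indexed=False: skip the index writes for properties you never query on.
class Profile(db.Model):
    display_name = db.StringProperty(indexed=False)

# Embedded entities: denormalize structured data inside a single entity,
# so one get returns everything the page needs.
class Address(ndb.Model):
    street = ndb.StringProperty()
    city = ndb.StringProperty()

class Contact(ndb.Model):
    name = ndb.StringProperty()
    addresses = ndb.StructuredProperty(Address, repeated=True)
```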
Of course, now that we've covered some Datastore stuff, we can do better, because hitting the Datastore is still a lot of work, and you can add a layer of caching on top of it. It's caching, caching, caching, because there are actually three different kinds of caching we're going to cover.

The first one is something we're always talking about: Memcache. Memcache is a fast memory cache that we have available. It's basically somewhere you can place something that is hard to compute but that you're frequently reading, whether from the Datastore, or URLFetch, or anything; we use the Datastore here. In this case, I took the guest book example, where you're reading greetings and serving them every time someone hits the guest book. There's no need to pull all those greetings from the Datastore every time, even if nothing has been updated. So what I can do instead is simply add all of that data to Memcache and pull it from there when I serve that page. And if it's not there, I can always read from the Datastore. Memcache doesn't take that long of a time.

I do want to point out one thing that I've done in this example, which is that instead of actually storing the models in Memcache, I've stored some rendered HTML. The reason I do this is a gotcha that we see sometimes: Memcache pickles, or serializes, your data. So if your model class has changed between writing to and reading from Memcache, you might get in trouble. Storing rendered HTML is one way to get around that; otherwise, it's just something to be aware of when you update models on your live serving application.

The next thing I'm going to cover is instance caching. What's instance caching? This is basically the memory of your App Engine instance, and you're storing something in that memory while the instance is alive. It allows you to have your own eviction policy; something I didn't mention is that Memcache evicts the least recently used entries. In this case, storing something as a global variable allows you to store it the way you want. It is also faster than Memcache: it's near instantaneous. Memcache is pretty fast, but there may be times when you want it to be faster, and in those cases you can just store the data as a global variable in your instance.

But for Python users, really what you should be doing is using NDB instead. I've used db examples in most of my code samples because NDB is relatively new, but I've actually written out here what a real Memcache storing and reading algorithm would look like. I'm not even going to bother going through it, because it's got locks and tombstones and all of this crazy stuff. It's actually hard to get right: storing things in Memcache, storing things in instance caches, keeping them consistent as things get written. NDB is doing all of this for you. So if you're a Python user and you're not using NDB, check it out. Hopefully, it's pretty easy to translate one to the other, and you'll get a lot of really cool savings. I don't know that much about Java, but I hear Objectify has some of the same stuff, so you should check that out, too.

And finally, here's a summary of the differences between the Datastore, Memcache, and instance caching. You can see the latency here: obviously, the Datastore has the biggest latency, but you're also storing the most things there, and what's written there is kept near forever unless you choose to remove it. In terms of storage, something to call out for instance caching specifically is that we do allow you to have larger instance sizes in App Engine. The smallest size is the default, but we have bigger sizes that cost more money and have more memory available, and one thing you could do with that memory is store some stuff in there to make your application faster.
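A compact sketch of the three caching layers Marzia describes: instance memory, Memcache, and Cache-Control headers for edge caching (which the talk covers next). It reuses the hypothetical Greeting model, assumes a webapp2 handler, and the key names and expiry values are illustrative only:

```python
import webapp2
from google.appengine.api import memcache

_page_cache = {}  # instance cache: lives as long as this instance does

def render_greetings():
    # Hypothetical stand-in for querying greetings and rendering HTML.
    return '<ul>%s</ul>' % ''.join(
        '<li>%s</li>' % g.text for g in Greeting.all().run(limit=20))

class GuestbookPage(webapp2.RequestHandler):
    def get(self):
        # 1. Instance cache: near-instantaneous, with our own eviction policy.
        html = _page_cache.get('guestbook')
        if html is None:
            # 2. Memcache: fast shared cache. Store rendered HTML rather
            #    than pickled models to dodge the class-change gotcha.
            html = memcache.get('guestbook')
            if html is None:
                html = render_greetings()
                memcache.set('guestbook', html, time=60)
            _page_cache['guestbook'] = html
        # 3. Edge caching: let browsers and proxies hold the page briefly.
        self.response.headers['Cache-Control'] = 'public, max-age=60'
        self.response.out.write(html)
```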
And the last thing I'm going to cover in caching is something that can have a huge benefit for pages of your application that are rarely updated with new content: set cache-control headers. It's such an easy thing to do. What it does is, when that response is sent through intermediate servers and browsers, it tells those servers and browsers how long to hold the page before re-requesting it from your server. That means your app is actually not going to have to do almost any work for these requests; you pay only for bandwidth on App Engine. The thing you want to be careful of, of course, is that you can't ever force these pages to be re-requested before the expiration time, so please set it reasonably. I mean, we've actually seen apps that have set it for 365 days. And of course, they set it, and then they realize they've made a mistake. Well, once we've served the response, it's out of our hands; we can't force the servers and browsers to re-request it anyway. So make this a sane value, but still use it. In fact, if you go into the admin console, you can go to the drop-down and view requests by type per second, and you'll see a breakdown of how many static requests this application served, but also how many requests were served from edge caching. And you can see here that for those requests I'm really only paying for bandwidth; my instances aren't doing any work. So it's a great way to save yourself some money and save your computing power for something more interesting. Now I'm going to hand it off to Greg, who's going to talk about batch API requests.

GREG DARKE: So, what are batch API requests? Batch API requests allow you to operate on multiple pieces of data in a single RPC. Doing this allows the App Engine servers to fan out your request to multiple servers, and it allows you to amortize the cost of that RPC over multiple pieces of data. What do I mean by that? Well, I'm going to use the example of Taskqueue here. Say, for example, I'd like to insert 50 tasks into Taskqueue: just simple code that loops through and adds them. At the top here is a graph from Appstats. Appstats was introduced by Guido van Rossum in a Google I/O talk in 2010, so I suggest you have a look at that for more information. In this graph, you can actually see each of the RPCs made by the application. I've trimmed it off after about the first 10, because it goes down to 50. As you can see, each request here is taking about 10 to 15 milliseconds, so overall, it's over 600 milliseconds to insert these requests into Taskqueue. If I change this code slightly, so that I still construct all the tasks but add them to a list and use the batch API interface, which is q.add, you can see that all the tasks are inserted, but it's only taking 22 milliseconds. So it makes the request much, much faster.

We have many APIs that support this: Memcache, Datastore, Taskqueue, Full Text Search. They all support operating on multiple items at once: getting or putting multiple entities for Datastore, adding multiple documents to an index for Full Text Search, and getting or setting multiple entries in Memcache. Some things you will have to be careful of when using the batch API with Taskqueue, for example, is that you can only insert up to 32 megabytes worth of requests at once. So using the batch APIs can be slightly more complicated and has some gotchas around the size and number of things that you can add in one call. But overall, it's a considerable win for latency, and it makes your application faster.
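A small sketch of the Taskqueue batching Greg describes; the queue name, task URL, and payloads are hypothetical:

```python
from google.appengine.api import taskqueue

q = taskqueue.Queue('default')

# Anti-pattern: one RPC per task, roughly 10-15 ms each.
for i in range(50):
    q.add(taskqueue.Task(url='/work', params={'n': str(i)}))

# Better: build the tasks first, then add them in a single batch RPC.
tasks = [taskqueue.Task(url='/work', params={'n': str(i)})
         for i in range(50)]
q.add(tasks)  # one RPC for all 50 tasks
```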
So I will just leave batch API requests for a second and go on to asynchronous workflows. Asynchronous workflows allow you to start an RPC and then continue doing work in your application. This lets you do multiple slow operations in parallel, such as performing multiple URL fetches, as I've got here, and it also lets you perform CPU-intensive operations while waiting for the IO to complete. This was alluded to by Marzia earlier on with the Datastore: when you're doing a query using the .run interface, in Python and also in Java, what actually happens is we make an asynchronous call to fetch the next set of entities, so we're performing the IO to find the next entries while you're still iterating over the previous batch that we returned.

So I'm just going to go over a quick example of how to use asynchronous RPCs to speed up URL fetch. I'm going to give two examples of this: first the slightly older method using callbacks, and then the new one. The main thing that you need to write for using any of the asynchronous APIs is an event loop that keeps track of the RPCs and calls the wait_any function, which will automatically call the callbacks when they're ready. I have the do_fetch call down here, which has a closure inside it to construct the actual callback, and I'm passing it to make_fetch_call, which is the asynchronous version of fetch in the URL fetch API. In other APIs, the asynchronous version is generally named the same as the normal version with an async suffix: so for Datastore's get, the asynchronous version is get_async, and so on. URL fetch was our first asynchronous API and is thus named differently. With this example here, I'm passing the callback into the create_rpc function, and so when the actual call has been made and the request is finished, this callback will fire and we'll be given the data. As you can see, this version actually doesn't pass anything back from that callback into the main function; that's quite difficult to do using the callback model of programming.

So here is the new version using NDB tasklets. The main difference is that instead of having to write your own event loop, NDB has one written for you. In fact, it makes use of the yield statement to build co-routines. All you have to do is wrap the function with the ndb.tasklet decorator, and then you just yield any RPCs. This allows you to write programs that look like they flow straight down but are actually calling into and out of the event loop. You can see this example does the same thing: it calls make_fetch_call and just passes the result back. And we've actually got the return statement here: in NDB, with tasklets, you have to raise the Return exception, and that result gets passed back into the list down the bottom.

So if we have a look at the actual before and after: before, fetching four RPCs, sorry, four websites, takes nearly 4 seconds. Doing it with the asynchronous model, where you can still do the processing in between each of these requests, takes just over 3 seconds. That's a fairly large saving, but if this was in your main handler, as Marzia alluded to earlier, it is still probably way too slow for you.

Some of the APIs that support asynchronous calls are Blobstore, Memcache, URLFetch, and Datastore. Now, if you wanted to gather data from an external site and then serve it as part of your homepage request, what you may want to do is cache that data, and cache it in the Datastore. The Datastore, as we said before, is about 50 milliseconds per request, whereas a URL fetch to a slow external resource can take over 2 seconds.
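A minimal sketch of the tasklet version Greg describes, using NDB's context urlfetch wrapper (which returns a yieldable future); the URL list and helper names are hypothetical:

```python
from google.appengine.ext import ndb

@ndb.tasklet
def fetch_page(url):
    ctx = ndb.get_context()
    # Yielding suspends this tasklet while the fetch runs; other
    # tasklets (and other fetches) proceed in the meantime.
    result = yield ctx.urlfetch(url)
    raise ndb.Return(result.content)

def fetch_all(urls):
    # Start all fetches in parallel, then wait for every future.
    futures = [fetch_page(url) for url in urls]
    return [f.get_result() for f in futures]
```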
So what you may want to use offline processes for is to defer updating those pages. Because otherwise, when the cache expires, with a naive solution you would still have to make one person wait for that 2-second request while you gather the data again and refill your cache. Other things you can use offline processes for: slow external work, say, billing processes. You can also use them to build workflow systems or perform fan-in, which is described in Brett Slatkin's talk from 2010.

So I will go over an example of how to use Taskqueue to do that URL fetching example I gave before. I'm using NDB here. The reason is that it gives me transparent caching in Memcache and the instance cache, so if this was a fairly highly used property serving on your home page, this may not have to go to the Datastore at all; it could already be in Memcache, or it could just be in the instance cache already. Going to the get_url method: I'll skip truncate_time, which is just a helper used to ensure that I don't create too many tasks to fetch this. In get_url, we construct a key name based on the URL. We're using sha1 so that we can ensure the namespace of all these keys is fairly evenly distributed. The first thing we do is try to fetch it from the Datastore, if it's there. If it's not there, we just fetch it straight from the external resource, save it, and return it straight away. So in the case of the first user viewing this page, they may actually have to pay the 2-second request delay here. If you were implementing this in your own code, you could send a 302 back to the user, or send a placeholder so that the page will render, and then go back and re-request the page, so that the user actually sees something sooner. Here, if we do find it in Memcache or in the Datastore, we check whether the data has expired. If it has, we just defer a task into Taskqueue, and that will asynchronously go and fetch the new version while we return the current version that's in the cache back to the user. And as you can see, the update_cache method is just a fairly simple fetch from the external resource followed by a blind put to the Datastore.

So I will now pass along to Troy, who will talk about performance settings.
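A condensed sketch of the pattern Greg walks through: an NDB-cached copy of an external page, refreshed in the background via the deferred library (which also needs its builtin enabled in app.yaml). The CachedPage model and the one-hour freshness window are hypothetical:

```python
import datetime
import hashlib

from google.appengine.api import urlfetch
from google.appengine.ext import deferred, ndb

MAX_AGE = datetime.timedelta(hours=1)  # hypothetical freshness window

class CachedPage(ndb.Model):
    content = ndb.BlobProperty()           # raw fetched bytes
    fetched = ndb.DateTimeProperty(auto_now=True)

def update_cache(url):
    # Runs on a task queue: fetch the external resource, blindly put it.
    result = urlfetch.fetch(url, deadline=30)
    CachedPage(id=hashlib.sha1(url).hexdigest(),
               content=result.content).put()

def get_url(url):
    key_name = hashlib.sha1(url).hexdigest()  # evenly distributed key names
    page = CachedPage.get_by_id(key_name)     # NDB checks its caches first
    if page is None:
        # First request ever: this caller pays the slow fetch once.
        update_cache(url)
        page = CachedPage.get_by_id(key_name)
    elif datetime.datetime.utcnow() - page.fetched > MAX_AGE:
        # Stale: serve the cached copy now, refresh in the background.
        deferred.defer(update_cache, url)
    return page.content
```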
TROY TRIMBLE: Great. Thanks, Greg.

[SIDE CONVERSATION]

TROY TRIMBLE: Test, test. All right, there we go. Let's just do it. We'll do it live. All right. Hi, everybody. I'm a senior software engineer on App Engine, and today I'm going to be talking to you about performance settings that you can set for your app to optimize for either low cost or low latency. Performance settings are something we introduced last year; as far as I know, we haven't talked about them publicly in a conference like this. There are a few of them, on the application settings portion of the admin console. You can see here that there's the front-end instance class that Marzia touched on earlier, there are idle instances, and there's pending latency.

The first thing I'd like to talk about is optimizing for low latency. Why would you want to optimize for low latency? Well, it's pretty straightforward: you have a web UI or a mobile frontend, and you want a very, very snappy, good web experience for your users. The first setting that you can use is what we call minimum idle instances. What this tells the App Engine scheduler is that you would prefer it keep around at least this number of idle instances at all times for your application. These idle instances are used to take traffic when all of your active, dynamic instances are busy serving other requests, so this is extremely useful for bursty or unpredictable traffic, so that you don't have to suffer the dreaded loading request on a user-facing request. What we typically tell both internal and external partners is that if they're going to have some sort of event, like a Google I/O event, some press, or some new feature that they think is going to drive a bunch of traffic to their site, they should temporarily increase this number in order to buffer that bursty traffic. Then, later on, they can turn it back down again to save some money.

The second setting that you can use to optimize for low latency is the max pending latency setting. What this tells the scheduler is that you would prefer that no request to your application wait longer than this amount of time before being processed by one of your instances. Again, this is a way to tell the scheduler that if you really want a snappy user experience, it should not wait for an instance to free up; it should automatically provision instances in order to keep the pending latency below this threshold. Here's an example of the instances console, which is also in the admin console. You can see that in this case we've set three for the min idle instances setting, and that is represented on the right by the instances marked resident; the dynamic ones below that are the active dynamic clones. As I mentioned earlier, the majority of the requests to this application have been handled by the dynamic instances, and only a few have been handled by the resident ones.

Here's an example of the settings page when actually setting the min idle instances and the max pending latency. In this case, the admin console is warning me because I did not set the warmup request setting in my app.yaml or appengine-web.xml. This is a requirement for min idle instances to be used appropriately by the scheduler, because the scheduler spins up these idle instances in the background using what we call warmup requests. You can read more about that online.

I'd also like to touch, just very briefly, on a new feature that just came out with 1.7.0, and that is the Page Speed Service integration. I won't go into great depth here, but it is part of the performance settings. What it will do, basically, is cache and do content rewriting for a number of your static resources in order to speed up serving your static content.
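The warmup requirement Troy mentions above is configured in app.yaml; a minimal sketch for a Python app (the handler script name is hypothetical):

```yaml
inbound_services:
- warmup

handlers:
- url: /_ah/warmup
  script: main.app
```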
So next I'd like to talk about optimizing for low cost. And what I mean here, I mean everybody wants to optimize for low cost, but in the performance settings sense, what we're really talking about are users who are hobbyists, or people who have applications where they're not really monetizing their traffic. They're running the thing for free, and they want to keep their costs low at the expense of potentially higher latency on user requests to their application. The first setting that we have here is max idle instances. The idea is that you're telling the App Engine scheduler that you would prefer it not keep around any more than this number of idle instances for your application. This also factors into how we calculate the front-end instance hours charged to your application, because if you told us you only want three idle instances and, for whatever reason, the scheduler put up 10 idle instances, well, then you shouldn't be charged for the extra 7.

The second setting here is the min pending latency. What this says to the scheduler is that you would prefer it allow requests to wait at least this long before causing a loading request and spinning up a new instance to serve them. What this does, basically, is maximize the utilization of the instances you already have, to try to keep costs low. So again, here's an example of me setting these: I've set the max idle instances to 3 and the min pending latency to 750 milliseconds. And at this point, I'd like to take a brief moment to explain why I have the specific numbers 3 and 750 milliseconds here. These are not generally applicable to all applications; they are just examples that I've chosen. The right number for your application is completely dependent upon your application, and we encourage you to experiment with these settings in order to find the appropriate values.

Next, I'd like to talk about a very serious anti-pattern that we've seen far too many times in the wild, and that is using both the low cost and the low latency settings together simultaneously for your application. The problem with this, and you can see what I've done here, is that both min and max idle instances are 15, and the delta between the min pending and max pending latency is 50 milliseconds. The problem with these settings is that they confuse the App Engine scheduler; it just doesn't know how to operate within such a thin margin between these thresholds. What you see, as the App Engine admin, is that the scheduler will spin up instances for your application, but they'll be extremely short-lived. They'll come into existence and go away very quickly while it tries to satisfy both thresholds. It creates a bunch of churn in the system, and it also devalues things like instance caching, because your instance cache won't be around for long. So our recommendation is to not do this. Our recommendation is to do something like this, where you use just the low cost or just the low latency settings and set the other ones to automatic.

So now I'd like to talk about App Engine Servers, which is an ongoing project right now, and how it will make the performance settings you can set for your application more powerful and more flexible going forward. But first, I'd like to talk about the motivation for why we're doing this, and the only way to do that is by talking about what today's App Engine hierarchy looks like. As you can see here, we have a top-level application. Each application has some set of servers, or sorry, excuse me, backends, and some set of versions, and each of those has some set of instances associated with it. As a lot of you know, if you've been playing with the performance settings before, only the default version receives the performance settings that I just talked about. What we've heard from customers, from talking to them and getting feedback on the architecture as it is today, is that one of the things they really don't like is that they only have a single version that will autoscale for them.
And only a single version will receive the performance settings; all the non-default versions don't get them. So what we're going to do is move to a world that looks more like this, basically. We're going to introduce a new level, which is called a server. A server is a conceptual grouping of versions. The idea is that in this new world, the default version of each server will be able to receive the performance settings, have those be active, and also be able to autoscale. So you can have multiple servers, each with its own default version, and each of those will be able to scale independently of one another. For example, you can have a server with a default version that is for your batch work, for your MapReduce, and its performance settings can try to utilize the instances as much as possible, because all you're trying to do is get throughput; you don't need low latency, you just need throughput. Meanwhile, you continue to have your web UI frontend with extremely low latency settings and high min idle instances, in order to handle bursty traffic and give a really snappy user experience.

To give an example of what these might look like in the future, of how we're going to do performance settings going forward: we're actually going to take them out of the admin console and put them into your app.yaml or your appengine-web.xml. So when you upload your code, you will also upload the performance settings that you feel are appropriate for that code. To give an example, here is my mobile frontend server. You can see this is in app.yaml, but there will be an equivalent in appengine-web.xml. There's a new field called server (this is my mobile frontend), and we're going to have a new section called server settings. You can see it's exactly the same settings that we had in the admin console. In this case, I've set an F2 instance class, 25 min idle instances, and a max pending latency of 250 milliseconds. The automatics are there just for completeness but are actually unnecessary; those are the default values.

And this next one is an example of what today is traditionally called a backend, and what it will look like in the future. We're going to move the settings from what is today backends.yaml or backends.xml and put them into the actual app.yaml or appengine-web.xml going forward. In this case, you see this is my geo backend server. It has 10 instances: static, just 10 instances, no autoscaling. And it has an instance class of B8, because I need a bunch of memory to do a bunch of great geo caching.
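A rough reconstruction of the two app.yaml examples Troy narrates; this syntax was pre-release at the time of the talk, so the field names should be treated as illustrative rather than final (the two stanzas would live in separate config files):

```yaml
# Autoscaled mobile frontend, as described.
server: mobile-frontend
server_settings:
  instance_class: F2
  min_idle_instances: 25
  max_pending_latency: 250ms

# Today's "backend": fixed instances, no autoscaling.
server: geo-backend
server_settings:
  instances: 10
  instance_class: B8
```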
And so just to recap: Marzia covered a lot of great Datastore modeling patterns and anti-patterns, along with a number of caching techniques. Greg talked about the batch APIs, some of the asynchronous APIs that we have, and some offline processing techniques. And I covered today's performance settings and what they're going to look like in the future. So with that, we'd like to thank all of you for coming. It's been a pleasure to talk to you today, and we would like to open up the floor for questions. I'd like to say very briefly, though: if you have a question that pertains specifically to your app that you don't think would apply to other people in the room, please come find us afterwards, given that this is a forum where we want to answer general questions as much as possible. So thank you very much.

[APPLAUSE]

AUDIENCE: You were talking about just selecting certain fields and how that's much cheaper. I'm wondering why–

MARZIA NICCOLAI: Project– hello? It's on? Oh. Projection queries?

AUDIENCE: Yes. Could it also be cheaper than a get, or is it just for querying?

MARZIA NICCOLAI: They're both small Datastore ops, so they're basically equivalent. But obviously, if you are going to– I mean, you should always use gets first. But if you have to do a query, then using a projection query is going to save you money. If your entity is sufficiently small, it's the best. Sometimes projection queries can take slightly longer if the entity is too large, but for only a few fields, it's a small op versus a read op.

AUDIENCE: And if you do a query and you do it keys only, and then do the gets separately and interleave with Memcache, would your recommendation be to just do a projection query, or to do a keys-only query and then try to find the entities in Memcache?

MARZIA NICCOLAI: So in that case, if you're doing a keys-only query and then a get, it's two small ops versus the one small op for the projection query. So I believe you'd still be better off, in that case, doing projection queries. Oh, and also, we have chocolate, like the last session, for people who ask questions. So if you want, afterwards you can come up and get some.

AUDIENCE: Just curious if you have some debugging tips for slow queries? I often find, especially if I'm fetching a lot of information, it seems to be going really slow, and it's kind of hard to debug because it's somewhat asynchronous, so you don't really see the cost until later, when you're using the information. And there's not a lot of visibility under the hood. It seems like when you're making queries that are within a bunch of keys, there are lots of situations where it's ultimately making multiple individual fetches and not doing the sort of batch fetch you might expect a real SQL database to do. So just, in general, how do I go and look and see why this is slow? Is it really hitting an index? What is it that's taking so long? Is it the amount of data that's being transferred, et cetera? Because it feels a little opaque to me sometimes.

MARZIA NICCOLAI: I think queries always hit indexes. But–

PETE WHITE: [INAUDIBLE]

MARZIA NICCOLAI: Right. But I think in this case when– so I'm not a super Datastore expert, but like zigzag queries, where you're joining multiple things: I'm probably not the right person to answer this in detail. Maybe Greg.

GREG DARKE: I'll just say, if you want to come and chat with us afterwards, we'd probably be able to help you with that.

AUDIENCE: OK. But I just wonder, are there basic pages in the stats or something that show you, kind of, here are the queries you're making, here are the slow queries, or here's how long they're taking in general, or here's how much data you're transferring, or how many different RPCs you're actually making on the backend?

TROY TRIMBLE: The Appstats stuff that we were showing, most of the clips in here with the timelines and so on, which were mostly in Greg's talk, is an extremely valuable debugging tool that you can use on a single-request basis. And at least for Python, it has a full stack dump, where it will drop you to the exact line of code that executed the RPC. So you really can drill down and say, oh, this query took a really long time, it executed out of order with this other one, so this is the one I need to look at. And it will tell you what that line of code is.
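For reference, a minimal sketch of enabling the Appstats recorder in a Python 2.7 app via appengine_config.py; once the appstats builtin is turned on in app.yaml, the viewer is served at /_ah/stats:

```python
# appengine_config.py
from google.appengine.ext.appstats import recording

def webapp_add_wsgi_middleware(app):
    # Wrap every WSGI request so Appstats records its RPC timeline.
    return recording.appstats_wsgi_middleware(app)
```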
AUDIENCE: Cool.

MARZIA NICCOLAI: But I also think– I mean, there used to be more queries that required composite indexes, which were then made not required. But you can still re-add composite indexes for some queries. Those will write more index rows, but they might be faster. Again, there are a lot of dependencies there, and I'm probably not the best expert to tell you. But that is one option: where a query used to require a composite index and no longer does, adding back the composite index might speed up the query.

AUDIENCE: Thanks.

GREG DARKE: Up the back?

AUDIENCE: Yeah, I had a question. I'm wondering if you have any plans of reducing the pricing, maybe, or maybe offering tiered pricing for things like games, where it's harder to monetize at the rate that the current pricing is.

TROY TRIMBLE: I mean, we have no plans on reducing pricing right now, as far as I know. We're always evaluating. We know that we just came out with new pricing for Compute Engine and things like that, which is tiered. So that seems to be the way that things are going, but we have nothing to announce today, unfortunately.

AUDIENCE: Great. Thanks. I've noticed that whenever I configure a resident instance with the min idle instances setting, my app, at like 2:00 in the morning, when no one's hitting it and I have an idle instance that I'm paying for, if I hit my app, it prefers to just spin up a dynamic instance instead of using what I'm paying for.

MARZIA NICCOLAI: So the min idle instances are really geared more toward handling bursty traffic. We always prefer dynamic instances, but we keep the idle instances around in case we haven't predicted correctly whether there are enough dynamic instances around to serve the traffic. That's really their purpose. The example I always use is: if you're launching a website, you might have a really steep, really sharp jump, and it's hard for us to predict exactly how that's going to go. So then you might want to spin up a lot of idle instances, because those instances can serve a request in a pinch. But when you have traffic that's more steady state, an easy kind of flowing traffic, the idle instances aren't as useful, because we're usually really good at predicting when to start a dynamic instance.

AUDIENCE: So it's for bursts, not start-up time?

MARZIA NICCOLAI: I mean, it's for both. But you should think of it as something that can serve a request while the dynamic instance is spinning up.

AUDIENCE: Bingo. OK, cool. Thank you.

AUDIENCE: Hello. Thank you for App Engine. And you said not to use limit and offset, obviously, because that's very inefficient, right?

TROY TRIMBLE: It's just really offset in that case.

AUDIENCE: Oh, offset. That's right.

TROY TRIMBLE: Limit's fine, just no offset.

AUDIENCE: I meant that, yeah. But what about if you want to page back and page forward? Cursor, you use a cursor, right?

MARZIA NICCOLAI: There's a start and an end cursor, both. I just use the start cursor.

AUDIENCE: Right. Start cursor. But let's say you have a page and you want to go backward. Can you go backwards and forward? The cursor only goes forward, right?

TROY TRIMBLE: No, you can set an end cursor on a query. That's what she's saying. So by default, there's a start_cursor and an end_cursor. So you can say– I believe it's at the end, basically, of your query, that cursor should be the end of it, I believe. I know I'm not explaining that very well.

AUDIENCE: There's got to be a better way of–

AUDIENCE: Yeah, yeah. But you can't always– I can't always do that. I'll ask after.
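A small sketch of the start/end cursor idea discussed above, again using the hypothetical Greeting model with the db API; whether saving cursors per page fits backward paging depends on your UI:

```python
from google.appengine.ext import db

# Track the cursor at the end of each page as the user moves forward.
page_ends = []
query = Greeting.all().order('-date')
page1 = query.fetch(10)
page_ends.append(query.cursor())

query = Greeting.all().order('-date')
query.with_cursor(start_cursor=page_ends[-1])
page2 = query.fetch(10)
page_ends.append(query.cursor())

# Going back to page 2 later: bound the query with the saved cursors.
query = Greeting.all().order('-date')
query.with_cursor(start_cursor=page_ends[0], end_cursor=page_ends[1])
page2_again = query.fetch(10)
```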
AUDIENCE: Is server configuration enabled now, or in the next version?

MARZIA NICCOLAI: So the server configuration is what we're working on implementing now. One of the main reasons to bring it up is because– I mean, we really are looking for feedback on how it might be most useful for you, and to give you an idea of what we're thinking about for the future. We don't have a target release date, per se.

TROY TRIMBLE: Yeah, we're working on this actively right now. If you'd like to talk more about it, I encourage you to come find me after we're done here; I'd love to hear any feedback or thoughts you have on it. Definitely. That goes for everybody.

AUDIENCE: Is there any way to visualize pending latency? It's the one thing you don't see in the request, and to me it seems it's nowhere reported. Also on the dashboard, right?

MARZIA NICCOLAI: It's in the logs. And what I would actually plug here, something we haven't written but that would probably be really useful, is that we have a Logs Reader API, which allows you to either MapReduce over or read through your logs. You could do something like read the pending latency, read all the information about your requests, dump it into something like BigQuery, or even write a custom visualization for those kinds of logs. I mean, the raw tools for this stuff are here, but it's kind of nascent in terms of making it really easy to use. But it's possible. And I think it's really exciting to be able to read your logs, dump them into BigQuery, and do queries over them. It's something we want to enable better in the future, but obviously, you could do it now. And there's always open sourcing it. Not to plug it, but this is something I really want to use and haven't had time to write. If you wanted to write it, you could, and it would be really useful, both for your own application and for other people's applications.

TROY TRIMBLE: Yeah, well, it's not well visualized in graphs or anything like that on the dashboard. But like she said, it's not only in the data you get back from the Logs API; it's also in the Logs Reader in the admin console. It's right in the first line of every request; it tells you something like pending_ms.

MARZIA NICCOLAI: If it sat in the pending queue, it's pending_latency in the logs.

AUDIENCE: I'm curious about the implementation of embedded entities and how the performance characteristics would compare to other kinds of straightforward options that you have right now, like JSONifying an entity or pickling it.

GREG DARKE: So embedded entities are actually just another Datastore model that's been serialized and then stored in a blob field. Sorry, for local structured property, they're stored in a blob field on the object, and you can actually compress that if you want. For structured property, we take the names of the properties and mutate them so that we can still allow you to do queries over the data. As for performance, the serialization time is the same as it would take to serialize the two entities separately. The write performance for local structured property is just the amount of time required to put that data; because there are no extra indexes, you don't have to pay the time waiting for those index writes to propagate. But other than that, yeah.

AUDIENCE: So you talked about local entities, local structured entities, which you said are serialized data. So what happens when you write it back?
Does it again redistribute it and write it? Or do you write it in one place? Let's say, for example, you have a blog post and you have profile information, with the profile information being a local entity inside–

MARZIA NICCOLAI: So it only writes it in the place you write it. I didn't really cover how you might use this fully to denormalize; it was just to point out how you can store a structure. When you put the one entity, it just writes that one entity. It doesn't update it everywhere; you still have to do that yourself. But it's really nice because it gives you a nice, logical way of understanding and structuring this to facilitate denormalization, which obviously helps with queries. So I didn't cover that part of it.

AUDIENCE: The max limit of writing 100 or 1,000 entities per transaction, does it still apply when you write it in–

GREG DARKE: So with local structured property, it actually only counts as a single entity that you're writing. But you still have to adhere to the one-megabyte-per-entity limit. So you can have as many values in that local structured property as you want, as long as they fit within the one megabyte size limit of the entity that contains the local structured property.

AUDIENCE: So if I have a thousand blog posts and I write one profile which is 1 MB, it's still fine? You don't have to worry about writing it as 1,000 megabytes, right? Am I making sense here?

MARZIA NICCOLAI: I'm not sure that we understand exactly. So the structured property is stored within the one entity.

AUDIENCE: Think of it like a blog post and profile information. The profile information lives inside the blog post, because you want to denormalize it in case you want to query. So if you have to write a change to the profile information–

MARZIA NICCOLAI: You'd have to write it everywhere.

AUDIENCE: Oh, OK.

AUDIENCE: So I have a question about queries. Let's suppose I have an entity with 10 fields. I'm assuming if I query all 10 fields, it's a full-entity read, right? What determines the threshold, whether it's a small read or–

MARZIA NICCOLAI: If you use a projection query and you actually call out all 10 fields, it's still just the small op. The thing that might happen there is that the index scans might take a little bit longer. So it might not be faster, but it will be cheaper.

AUDIENCE: Awesome. Thank you.

MARZIA NICCOLAI: Wait, 7?

AUDIENCE: [INAUDIBLE]

MARZIA NICCOLAI: No, no. It's one small op. It's one small op. We had Alfred.

AUDIENCE: There was an issue where, I forget in what version, the static files in the log were no longer showing the referrer in the access log. Is Page Speed going to fix that, or is it going to still have that issue?

AUDIENCE: [INAUDIBLE]

GREG DARKE: Do you want to answer it?

MARZIA NICCOLAI: Page Speed, I think, is orthogonal.

TROY TRIMBLE: Meet Pete White, everybody.

[APPLAUSE]

PETE WHITE: For that specific issue, I know the thing there is that we do not get every piece of information for edge cache requests that we have for normal requests. So if you have billing enabled, you're serving a static resource, and it's coming out of our edge cache, we don't necessarily have the response size, or the referrer, or, I think, the user agent field, so some of that's missing.

MARZIA NICCOLAI: And Page Speed is actually our integration with the other Google service, Page Speed, so that is totally unrelated to this issue. What it does is use that service to compress things like CSS files and sometimes JavaScript files and serve them from the Page Speed cache. So it's really quick.
So these two issues are not related.

AUDIENCE: Just a small comment on my question from before, about graphing pending latency. I just checked: actually, the latest ProdEagle library allows you to graph the pending latency. Just as an FYI for everybody who was wondering.

MARZIA NICCOLAI: Cool.

GREG DARKE: Great.

AUDIENCE: Thank you.

GREG DARKE: Thanks.

AUDIENCE: If you're already minifying and combining your JS and CSS files, does Page Speed give you any advantage?

TROY TRIMBLE: I'd refer to the documentation on this one. I know that, basically, they have a number of options that you can choose from depending on what you want to do. For example, there's CSS inlining, and I don't know if they do spriting or anything like that. But basically, each individual option does some optimization that is separate from the others, so it really depends on what you're looking for.

AUDIENCE: Would it be an advantage putting it in the Page Speed cache?

TROY TRIMBLE: Yeah. I mean, caching is always good. So yeah, it would be an advantage to put it in the Page Speed cache.

AUDIENCE: OK. I'll experiment with that. Thank you.

PETE WHITE: I think there will be better documentation forthcoming, but the Page Speed Service is very, very similar to the open source mod_pagespeed Apache plug-in. So the sorts of things there are exactly what we're exposing, just built into your app.

AUDIENCE: I'm already using the Google Closure Tools, which do a pretty good job already. That would be nice.

GREG DARKE: Thank you very much. If anyone else wants to–

MARZIA NICCOLAI: Yeah, if you asked a question, remember to come get your chocolate bars. And if you want to come talk to us individually, we'll be here.

[APPLAUSE]
