JAMISON: Joe, are you welding?
CHUCK: That’s a lot of bacon you’re cooking there, buddy.
JAMISON: Hello, friends.
CHUCK: Aimee Knight.
CHUCK: Joe Eames.
JOE: Hey, everybody.
CHUCK: I’m Charles Max Wood from DevChat.TV. Quick reminder: don’t forget to get your tickets to JS Remote Conf, January 14th through 16th. We also have a special guest this week, and that’s Nik Molnar.
NIK: Hey everybody. Thanks for [having me back] on the show.
CHUCK: Do you want to introduce yourself again?
NIK: Sure. I’m Nik Molnar. I am a Program Manager at Microsoft where I work on a web diagnostics and debugging tool called Glimpse. And I live in Austin. There you go. Everything you need to know about me.
JOE: I got to say, you’re one of the most awesome guests simply because your name is closer to Mjölnir than any other guest we’ve ever had.
NIK: I used to get called Nik P.I. quite a lot because back in the X-Files days, people thought that I sounded like Agent Mulder.
AIMEE: That’s awesome.
JOE: Yeah. I’m going to call you Thor from now on.
NIK: I will take that. I can be a Norse god for you.
CHUCK: Alright. Well, do we want to pick up where we left off with web performance? I guess this is part two.
NIK: Yeah, let’s…
CHUCK: Were there questions that we had in the hopper?
NIK: You’re absolutely right. And it really can be. The guys over at Google have actually put together a little acronym that they call RAIL, R-A-I-L. It stands for Response, Animation, Idle, and Load. And those four components together are how they look at web performance holistically. And you’re right, we spent a lot of time looking at load last time, because load is still the biggest bottleneck that we have on the web. And things are getting better. But load is where the most best practices and information have been published, and where the most tooling exists.
JAMISON: So, can you elaborate on that a little bit? You said that when you touch the DOM, it kicks off a bunch of extra stuff. How do you know what is being kicked off and how to avoid that?
AIMEE: As long as we’re talking about the DOM, one thing that I’ve heard talked about a lot recently, and I had to go look it up because I wasn’t completely familiar with it, was frame rate. So, I was hoping that if you wanted to go over that in case there are other people who are not quite sure what this means, since it’s talked about so much right now, and why the web animation API is important to that?
NIK: Yeah, sure. So, frame rate is this really interesting thing. I was like you, Aimee. If any of the audience are video gamers, frame rate is something that they’re used to. When I was in college, my roommates would go and buy some new computer and they’d be like, “Oh, I’m playing Max Payne and I’m getting 60 frames a second,” and, “I’m getting 45 frames a second.” I’m like, “That’s cool, man. I play Duck Hunt. How many frames per second are in Duck Hunt?” It’s just not something that I really ever thought of until more recently, until we got the tooling and we started to get some documentation around what we need to do.
So, the screen that is attached to the device that you’re listening to this podcast on, maybe your phone, or a monitor if you’re sitting in front of a desktop computer. There’s a pretty good chance that that screen is running at 60 hertz. What that means is that the screen refreshes everything that it’s showing 60 times every second. Now, that doesn’t mean that you’ll see a change on the screen each time. It’s just that the hardware is sitting there refreshing it 60 times every second. So, when we start to think about a frame rate, that’s how many times within that second we can actually change the image. The hardware is trying to do it 60 times every second, at 60 hertz. And for a really nice, buttery smooth experience, where animation doesn’t look janky or jaggy and you don’t start to see stuttering, we also want those animations to run at 60 frames a second, optimally.
And the reality is, if you can’t hit 60 frames a second, you just want it consistent. So, maybe it’s a consistent 24 frames per second, instead of your application going faster and slower in the middle of an animation. If you want to see what it looks like to animate things at different frames per second, there’s kind of a cool website out there: frames-per-second.appspot.com. What that allows you to do is add different types of sports balls, like a baseball or a soccer ball or a basketball, to the screen, and then say how many frames per second you want them animated across the screen at. And so, if you throw a couple of balls up there and you look at what 24 frames a second versus six frames a second, or something really bad like five frames a second, looks like, you can see the jank and you can see the stuttering. And on a mobile device particularly, where that animation might be tied to a touch event, imagine I’m on my iPhone and I’m trying to scroll through a page. I put my finger down on my phone and the pixel sticks right there under my finger. As I move my finger up the page, it’s sticking right onto my finger. That’s perfect. That’s a nice, smooth 60 frames per second animation. But if it’s playing follow the leader, where I move my finger up to the top of my phone and then the page catches up to my finger, that’s the stuttering and that’s the jank. And that’s what we don’t want. So, jank is the word that people in this community use when we’re getting bad frame rates.
So, why we started talking about frame rates and why it’s important to performance is because this frame rate and this goal of 60 frames per second really dictates the amount of time or the budget that we have to get work done. So, if we take one second which is a thousand milliseconds, and we divide that by the number of times the screen is going to be refreshed, that’s 60 hertz, a thousand divided by 60 comes out to be about 16 milliseconds, which means that my screen is being updated about every 16 milliseconds.
So, if I’m doing some parallax effect or some animation where I’m trying to update the screen more often than every 16 milliseconds, or I’m doing work more quickly than that, it doesn’t really matter. I’m just doing extra work that I can throw away, because the fastest the screen is going to be able to update is every 16 milliseconds. So, maybe there’s a scroll event that’s firing and I fire off five of those scroll events within 10 milliseconds, and I’m trying to update the position of some element five times within that same 10 milliseconds. It doesn’t really matter. The user isn’t going to see any but the last update, because the hardware isn’t refreshing until the next 16-millisecond tick.
Now, there are some properties, like opacity for example, that have been optimized for the GPU. Your CPU doesn’t really have to do a lot of work there; the GPU does it extremely efficiently. And in those cases, we consider those to be jank-free properties. Some of the transforms have this exact same characteristic. You don’t have to do a new layout. You don’t have to do a new CPU paint. You can just do a GPU paint. And those are very smooth, very fast. And if you can keep all of your animations to transforms, where you’re rotating or skewing or moving something, or fading the opacity, then you’re going to get a very nice, buttery smooth experience.
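[Editor’s note: a minimal sketch of the jank-free advice above, animating only transform and opacity via the Web Animations API. The element, timing values, and function name are made up for illustration.]

```javascript
// Animate using only GPU-friendly properties: transform and opacity.
// Neither triggers layout or a CPU paint, so this stays smooth.
function slideAndFadeIn(el) {
  return el.animate(
    [
      { transform: 'translateY(16px)', opacity: 0 }, // start: shifted down, invisible
      { transform: 'translateY(0)',    opacity: 1 }, // end: in place, fully visible
    ],
    { duration: 300, easing: 'ease-out', fill: 'forwards' }
  );
}
```

Animating `top` or `height` instead would force layout on every frame; sticking to these two properties is what keeps the 16-millisecond budget intact.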
So Aimee, that was a very long answer to your question. Do you think that that clears things up? Is that what you found when you were exploring this stuff on your own?
AIMEE: Yes. That helps.
CHUCK: I just want to clarify one thing and that is when I hear frame rate I’m usually thinking video games where you’re repainting full-on image stuff. But this really does apply to other animations in web pages that we see. For example, things sliding up, sliding down, sliding in, fading out, all that stuff.
AJ: Yeah, I’ve seen tons of sites with parallax where I’m trying to scroll on my MacBook and it’s like, [makes slow sound effects], and I can’t get down the page because it’s so janky.
JAMISON: Yeah, can you talk about that specifically? About why sometimes scrolling is so janky and what to do to solve it. It seems like, just like last time, the general advice is do less. Is it the same kind of thing, just make less stuff happen when you scroll?
NIK: It is do less, particularly if you’re trying to animate while scrolling. That’s kind of a big no-no. A quick and dirty way of getting around that is if you know you have some animations going on your page, when the scrolling starts, stop all of those animations. Make the whole page basically static. And then when the scrolling ends, restart. Now, that can be computationally expensive as well. But basically you want to try to avoid that.
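[Editor’s note: a quick-and-dirty version of the trick Nik describes, sketched for illustration. The callback and quiet period are assumptions; in a page, `setPaused` would typically toggle a class that pauses CSS animations.]

```javascript
// Pause animations while the user is scrolling, resume shortly after
// scroll events stop arriving for `quietMs` milliseconds.
function makeScrollPauser(setPaused, quietMs = 200) {
  let timer = null;
  let paused = false;
  return function onScroll() {
    if (!paused) {            // first scroll event: freeze the page
      paused = true;
      setPaused(true);
    }
    clearTimeout(timer);      // still scrolling: push the resume out further
    timer = setTimeout(() => {
      paused = false;         // quiet long enough: restart animations
      setPaused(false);
    }, quietMs);
  };
}
```

Wired up in a browser it might look like `window.addEventListener('scroll', makeScrollPauser(p => document.body.classList.toggle('pause-animations', p)))`.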
CHUCK: Get on that folks.
NIK: [Chuckles] Yeah, exactly. But the problem is, because we have these 16-millisecond budgets, the user might click a button 8 milliseconds into our 16-millisecond budget, and we’ve just cut our budget in half. And so, now we have even less time to do all of that work. But what ‘requestAnimationFrame’ does is it says, “Hey, I just finished painting a frame. Go ahead and start some work now,” giving you the most amount of that budget possible. And so, on a scroll, what you might do is just capture the state that you need to, maybe the window’s Y position. You might squirrel that away into some closure or something like that, and only update the screen with that scroll position, or move the element, or do whatever you need to, when the page is ready to be repainted again.
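[Editor’s note: the pattern Nik just described, sketched in code: squirrel the latest scroll position away in a closure, and touch the DOM at most once per frame via `requestAnimationFrame`. The fallback shim is only there so the sketch runs outside a browser; it is not a faithful polyfill.]

```javascript
// Browser-only API with a timer fallback for illustration purposes.
const raf = typeof requestAnimationFrame !== 'undefined'
  ? requestAnimationFrame
  : (cb) => setTimeout(() => cb(Date.now()), 16);

function makeScrollUpdater(applyUpdate) {
  let latestY = 0;
  let ticking = false;           // true while a frame is already scheduled

  return function onScroll(scrollY) {
    latestY = scrollY;           // capture the state...
    if (!ticking) {
      ticking = true;
      raf(() => {                // ...and apply it once, when a frame is ready
        ticking = false;
        applyUpdate(latestY);    // intermediate positions are skipped entirely
      });
    }
  };
}
```

In a page you would wire it up as `window.addEventListener('scroll', () => onScroll(window.scrollY))`; five scroll events inside one frame produce a single DOM update.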
AIMEE: I had to go answer a Slack message really quick earlier. Did you talk about how using the Web Animations API helps?
NIK: I did not talk about the Web Animations API.
AJ: So, I wanted to say, obviously having consistency is super important. Like if you’re going to do 60 frames, do 60 frames. There’s nothing wrong with 24. That’s how a lot of movies operate. But I also have noticed recently that there are a lot of trendy animations that make it actually less accessible. Because one thing that humans do is, just like a T. rex, we recognize movement, right? And if movement’s too smooth, then we don’t notice it. So, I’ve noticed recently, with YouTube’s pause and play button, you don’t get a simple, quick notification that it’s changing state. It slowly and gently morphs. And for a lot of people, their visual acuity is based on movement. So, if your transitions are too smooth, people aren’t going to get the information that you’re trying to give them, because it doesn’t trigger a change in their brain. It’s just too fluid.
NIK: Yeah. There’s certainly a lot of accessibility, but even just more like human-computer interaction or human factors to take into account with animation. I remember when Ajax was really first lighting up and becoming a thing. There was a lot of talk that any time you brought new data on the screen, I don’t know if you guys remember this, it would flash yellow behind it and then the yellow would fade away. And that was what all the usability experts were saying. “Well, look. Hyperlinks are blue and they’re underlined and everybody understands that mechanism. So, when you change the page unexpectedly, we’re going to do this yellow flash and that’s what we’re recommending.” And that was popular for a little while until everybody kind of said, “Well, that’s butt ugly. So, we’re not going to do that.”
But you’re absolutely right. Animation will draw somebody’s eye for good or for bad, which is why every single ad is always animated, because they’re trying to draw your eye over there. And so, it needs to be used appropriately and effectively.
AJ: Yeah. I’ve found that personally, my reaction time, if something is supposed to let me know that there’s a change, I need about a hundred-millisecond lapse between the action and the change. Maybe a hundred, a hundred and fifty. If it happens too quickly, then I personally get confused or it doesn’t quite fit with me. And if it happens too late, if it’s like 500 milliseconds, then it seems laggy.
NIK: Yes. Yeah, so 100 milliseconds, humans perceive to be instantaneous. So, we talked about this on the last show a little bit. So, if I click on a button or something like that, or I touch the screen or resize something with my fingers, the system needs to respond within a hundred milliseconds. It doesn’t need to give the full response but it needs to indicate to me that my gesture has been accepted or I’ll think that the battery in my mouse is dead or something like that. But you’re certainly right. Things can happen too quickly and people don’t believe it.
So, my buddy Troy Hunt in Australia, he’s a security expert, he runs a website named HaveIBeenPwned.com. And every time that Sony or Adobe or Target or whatever company announces there’s a data breach, he collects all the data and indexes it based on email address. And you can go and search for your own email to see if your account was compromised in whatever the latest security alert was. And he actually had problems with his system operating too quickly, because he was doing some very efficient caching and things like that, where users would believe that the system wasn’t up to date yet because they were getting their response from the searches way too quickly. So, he had to add some artificial delay in there.
AJ: I heard Hipmunk did that, too. I think it was Hipmunk or one of those flight sites.
NIK: Yeah. I would say that an overwhelmingly large majority of people are in the opposite camp, where things are too slow, not too fast. And if you are getting into the too-fast situation, good on you. That’s a very good problem to have.
AIMEE: I think it was a Fluent Conf talk by Ilya Grigorik, if I’m saying that correctly. He talked about, and I’ll put a link in the show notes, something they did where they sped it up, and users didn’t realize the app was actually working, because they were expecting a delay and there wasn’t one. But I’ll try to find it and put it in the show notes because I thought it was an interesting talk.
NIK: Yeah. You know, the whole idea of speed is very interesting. So, Tim Kadlec does a talk that I saw once where he equates all of this, this gets back to the animation and how things flow, to choreography. He had this really interesting point that people don’t perceive dance or ballet to be beautiful because the dancer holds some perfect pose where she’s standing on the tips of her toes. Or then, I don’t know the names of these moves. Aimee [might go] and help me out.
NIK: But then, you know, jumps into the air and makes an X symbol, I’m pretty sure that’s the technical name for that, an X symbol in the air. What’s awesome isn’t that they can hit those two poses. What’s awesome is the fluidity and motion, the choreography between those two points. And I really like that. Think about it: I’m looking at an item details page and I hit ‘add to cart’. First I’m looking at a widget, and all of a sudden the number over the cart icon becomes a one. But there can be this beautiful choreography where we transition that over there and it makes sense to the user. And it isn’t jarring and it’s not irritating and it’s not too slow and it’s not too fast. That’s exactly what choreographers do when they’re putting together a group of people for a performance. And so, I think that there’s probably room for specialization within the UX world of UI choreographers.
AIMEE: So, switching topics a little bit, one of the questions that I had was about whether the new HTTP 2 spec will change some of the optimization approaches that people are currently using?
NIK: It certainly does. And Ilya, who you mentioned earlier, is kind of the go-to guy on this. He has a presentation that he did at Velocity Conference around exactly that: your old best practices may not apply anymore. And I’ll hand over the link for the show notes. But essentially, some of the best practices that we’ve known for a long time, the ones that Steve Souders wrote in his books around HTTP 1, were things like: reduce DNS lookups, reuse TCP connections, use a CDN, minimize the number of HTTP redirects, use as few bytes as possible for what you transfer, compress, cache, and get rid of any resources you don’t absolutely need. All of those best practices I just rattled off stay the same between HTTP 1 and HTTP 2. So, if you’re doing those things, continue to do them.
To complicate things a little bit further though, the truth may be somewhere in between. Ilya just kind of says to remove them. And I can see his point. But in the world we’re in right now, where users are probably split, some of them using HTTP 1 and some of them using HTTP 2, you kind of have to figure out: how do I evolve my performance practices? You’re not just going to switch from the 1.1 best practices to the 2.0 best practices overnight. And so, for sharding in particular, you can do some smart sharding where, if img1 and img2 both resolve back to the same IP address, that’s good enough for the trick to work in HTTP 1.1 clients, but an HTTP 2 client will see those two hosts, know that they go to the same IP address, and merge them onto one connection. So, just by being a little careful about your DNS configuration, you can get the best of both worlds in HTTP 2 and in 1.
Similarly with concatenating resources. We’ve always concatenated our resources to reduce the number of HTTP requests being made. But that came with some drawbacks. If I was concatenating together four or five different scripts and one of those changed, I had to invalidate the caching for all of the scripts, because they were all bundled together. The browser just saw it as one. And because HTTP 2 allows you to send multiple responses down on one connection, that’s not as important anymore. And so, there’s been a lot of advice out there to no longer bundle things together. “In HTTP 2, don’t bundle things” is what’s being said.
But now that people are starting to try that out, they’re realizing that there are some other benefits of bundling besides just getting around TCP connection limits. So, Khan Academy just within the last couple of weeks put out a post on their blog where they noticed that when they were bundling their files together as they always have in the past, compression ratios were much better than if they individually compressed each file.
NIK: So, if they had 10 files, bundling them together and then compressing gives a much smaller number of bytes at the end than compressing the 10 individual files. When you compress each file on its own, the compression contexts are just so much smaller, so there’s less shared redundancy to exploit. And so, they’re coming back and [inaudible] once again, what’s the happy medium here? That happy medium is probably: bundle some stuff together. If it belongs together, maybe it should be bundled together even though you have the HTTP 2 advantages, just for compression reasons.
AJ: So, I want to ask, and maybe this is a different topic, but server push allows you to say, “I know that you’re going to request these assets, so I’m going to go ahead and give them to you.” And it seems like that is completely application-specific. It’s not like there’s a standard way of saying the server’s going to parse and cache your HTML file and serve those assets up. Your application has to decide to do that, or figure out the best way. And so, a lot of features of HTTP 2 aren’t ready for use yet. Is my understanding correct or incorrect?
NIK: So, to your point, AJ, exactly that scenario I just described, where we’re pushing down logo.png and screen.css, the application has to know about that and actually has to say, “Well, index.html gets those files and I need to push them here.” And that’s true. At this point right now, the HTTP 2 spec has been implemented and technically those things are feasible, but the best way of exposing that to web developers is still being worked out. So, I know that there’s a Java web server, I’m pretty sure Jetty is the one that does it, that keeps analytics of requests and responses by looking at referrers. And if it notices that logo.png and screen.css are always requested immediately after index.html, it will build up a heuristic model and automatically do the push. So, the web developer’s not changing their application at all. They just get these freebies because they’re on top of that server.
And so, I think that over time we’re going to start to see frameworks and servers and the intermediaries between the code that I write and what my users use in the browser, it will start to get smarter about these things. But I think as of right now, it’s basically manual with a couple of people who have figured out ways to automate it in some scenarios.
NIK: Yeah, exactly. That’s funny that you mentioned that. That’s actually a feature of HTTP 2 that a lot of people don’t talk about, is the client now has a way of canceling a request that it’s made. In the past, the only way it could cancel was to terminate the TCP connection, which was very expensive because it adds a lot of overhead. But if your user has gone and navigated to another page in the middle of downloading some asset, now the browser could say, “You know what? Keep this connection open but cancel that thing.” You know, there’s a lot of different ways that HTTP 2 is going to make web applications faster.
AIMEE: Okay. My final question. These are all things I’ve just been super interested in. So, next one: lots of people are using Babel, a lot of people are using TypeScript, and most people probably are not just using native browser support, since browsers don’t support everything in ES6 right now. But how much do those things affect performance?
NIK: That is a difficult question to answer.
NIK: Because it will…
AIMEE: I don’t…
NIK: It will depend on the code you’re writing that those transcompilers are doing something with. And some of those transcompilers have different focuses. Some are more focused on being absolutely as correct as possible, whereas others might cheat, or not completely implement the ES6 feature the “right way”, because it’s just not reasonable. And so, I’ll give you an example of this.
And so, if you look at six-speed on GitHub, it’s by kpdecker on GitHub, six-speed is a microbenchmark. And whenever there’s a microbenchmark like this, be wary, because it’s not necessarily real life, right? They’re doing one very small thing, executing it over and over again in a loop, and figuring out if this thing is faster or slower. And you probably never write code that loops over an arrow function execution a million times. That’s probably not your code. So, does this matter? Maybe not. But it’s interesting from an academic perspective. And so, what he has done is gone through a bunch of different ES6 features, written the corresponding code in ES5, executed it across many different browsers including Chrome, Firefox, IE, Edge, Safari, and Node as well, and then run the ES6 implementation that he hand-wrote through all of the different transcompilers.
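[Editor’s note: the flavor of what six-speed compares, sketched for illustration. The ES5 version below approximates what a spec-faithful transpiler emits for a for-of loop; real Babel output uses helper functions with more edge-case handling, which is where the measured overhead comes from.]

```javascript
// The ES6 feature as you would write it.
function sumES6(arr) {
  let total = 0;
  for (const n of arr) total += n;
  return total;
}

// Roughly what a correctness-focused transpiler emits: the full
// iterator protocol instead of a plain index loop.
function sumES5(arr) {
  var total = 0;
  var iterator = arr[Symbol.iterator]();
  var step;
  while (!(step = iterator.next()).done) {
    total += step.value;
  }
  return total;
}
```

Both return the same result; the benchmark question is only how much the protocol machinery costs relative to a native implementation.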
AJ: If ES6 is even natively implemented.
NIK: Yeah, that’s true. That’s true. But you will see a difference depending on how it is back-compiled based on your transpiler.
JAMISON: That’s a point I think that deserves even more emphasis. I guess at that point you’re getting into the hot loop of your code. There’s some part of your code that gets run more than other parts, and slowdowns in that part of your code will more strongly impact the performance of your app overall. We talked about this at work and I found another benchmark that shows similar numbers. It’s a different set of test cases and a different person doing them, so they’re not exactly the same. But there are some things that are 20 times slower in Babel-compiled code than the equivalent ES5 version. And the debate was, “Well, 20 times slower sounds awful. No one wants their code to be 20 times slower.” But the other side of the debate is, if it was one microsecond and now it’s 20 microseconds, that doesn’t matter. [Chuckles]
JAMISON: That’s not going to actually make your app slower. And if you make an HTTP request that takes 200 milliseconds, then all of the micro-optimizations you got from not using the newest features are totally blown away by the 200 milliseconds that it takes for the response to come back.
NIK: Yeah, yeah.
JAMISON: So, it’s… I don’t know. You got to balance the benchmarks with what’s actually slow in your app.
NIK: You’re exactly right. I think one of the biggest difficulties for developers when talking about performance is understanding the scale of the numbers that we’re talking about. I heard a lot of arguments years ago that Bill Gates should run for president. And the reason was because he was the only person who could understand the numbers in the trillions and the billions that our nation has to deal with. And I was like, “Okay, that’s kind of an interesting point,” because I didn’t understand hundreds of thousands of dollars until I bought my first home. And I’m like, “Now I understand hundreds of thousands of dollars.”
NIK: Millions of dollars is beyond me. But Grace Hopper, she was very fundamental in compilers and computer science, she was in the army. Or, I don’t actually know. She was in the military of some sort, some branch, maybe the Navy. I’m not sure. Anyway, she has this amazing video that she did right about the time she was about to retire, where she tried to visualize performance. The generals were yelling at her about how long things were taking. And so, she called down to the shop where the circuits were being made and she said, “Can somebody please cut me off a nanosecond? I want to be able to show people what a nanosecond is.” And they were all kind of looking at her like she was crazy. “What are you talking about?” And she was like, “Well, this is easy to figure out. We know what the speed of light is. I want to know how far data or electrons can flow in a piece of wire in a nanosecond.”
And so, it’s kind of weird, because it’s wire and we’re talking about the speed of light, so it doesn’t completely equate. But if you go with the analogy she’s making, one nanosecond is 11.8 inches long. And she jokes in the video, “When they call me and tell me things are slow, I hold up the piece of wire and say, ‘How many nanoseconds do you think are between me and the satellite you’re bouncing that communication off of?’” And so, I thought that was pretty funny. So, basically a foot is a nanosecond. If you go to a microsecond, what you were just talking about Jamison, that’s 984 feet.
And so, in the video she takes out this coil of wire that you could maybe wear around your neck like a necklace or something like that. And she said, you know, “I believe that any developer that’s wasting microseconds should have to wear one of these all day.” I did the calculation to extend this, because we don’t usually even deal with microseconds in web apps. We’re talking about milliseconds at best. And so, one millisecond is 186 miles. That’s how far we can transfer that data. So, every time we throw away a millisecond, we have to think about the scope and how big that is. There’s also a funny video where she does the same thing with David Letterman. Right after she retired, David Letterman had her on the show. So, I’ll make sure we get that into the show notes.
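[Editor’s note: Grace Hopper’s visualization redone as arithmetic, to check the figures quoted above. The constant is the speed of light in a vacuum; signals in real wire travel somewhat slower, which is the caveat Nik already mentions.]

```javascript
const C = 299792458; // speed of light, meters per second

const metersIn = (seconds) => C * seconds;

// How far light travels in each unit of time, in familiar units.
const nanosecondInches = metersIn(1e-9) / 0.0254;   // ~ 11.8 inches
const microsecondFeet  = metersIn(1e-6) / 0.3048;   // ~ 984 feet
const millisecondMiles = metersIn(1e-3) / 1609.344; // ~ 186 miles

console.log({ nanosecondInches, microsecondFeet, millisecondMiles });
```

All three figures from the conversation check out: a foot per nanosecond, roughly a fifth of a mile per microsecond, and 186 miles per millisecond.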
CHUCK: So, I’m pretty famous on this show for liking things that are Rails and I like the way you started that out with RAIL. Have we talked about all four points in that?
NIK: No, we have not. So, RAIL is Response, Animation, Idle, Load. Load I think we covered pretty well in the last show. Basically, they come down to the recommendation that you should be loading your page with a speed index of a thousand milliseconds. And I think they said that on mobile the speed index should be around 3,000 milliseconds.
Response we’ve loosely talked about. That’s providing feedback to the user’s gesture within a hundred milliseconds. So, if I click on a button, it needs to be depressed within a hundred milliseconds so I know that the system has received my input gesture. But you know, it’s actually funny. Maybe we have talked about all of these, but we’ve just danced around them and not called them out specifically.
Animation, they say, should be striving for 60 frames a second, driving back to that 16-millisecond budget I mentioned earlier. So, make sure that all of your animation work is being done within 16-millisecond blocks of time, trying to break it up into something that fits that budget and leveraging ‘requestAnimationFrame’.
Now, the last one is Idle. And this is, if you have a chunk of work that you need to do but you don’t have to do it right now, it would be nice if you could wait until the user was idle on your app. Maybe they’re reading a paragraph of text, so there’s no movement going on. They’re not scrolling or doing anything like that. Hey, you have some time where nothing else is going on to process a chunk of work. And so, what they recommend is that you group that work into about 50-millisecond chunks of time. Fifty is the number they’ve chosen because if the user does a scroll or something like that, you want to be able to respond to them fairly quickly and not be locked up doing too much work at that point.
And so, there’s actually a new API, [inaudible] the name of it. It’s only shipping in Chrome right now. It’s ‘requestIdleCallback’. And basically, the browser will detect when the user is in one of these idle states and say, “Okay, now is a good time. Go pop some work off of whatever queue you have and do some processing. And I’ll tell you if they’re idle again.” That way you’re not sitting there trying to manage and figure out when they’re idle. And so, Paul Lewis, aerotwist on Twitter, has a very good Udacity course on all of this stuff, particularly around RAIL. I’m pretty sure it’s a free course. I recommend going and checking that out.
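[Editor’s note: a sketch of idle-time chunking with ‘requestIdleCallback’, assuming the Chrome-only API described above. The shim exists only so the sketch runs outside a browser; it is not a faithful polyfill, since it pretends a full budget is always available.]

```javascript
// Browser API with a stand-in fallback for non-browser environments.
const ric = typeof requestIdleCallback !== 'undefined'
  ? requestIdleCallback
  : (cb) => setTimeout(() => cb({
      timeRemaining: () => 50, // pretend we always have the full budget
      didTimeout: false,
    }), 0);

// Drain a queue of small tasks, but only while the browser reports
// idle time. Unfinished work waits for the next idle period, so a
// sudden scroll is never blocked for long.
function processWhenIdle(queue) {
  ric(function work(deadline) {
    while (deadline.timeRemaining() > 0 && queue.length > 0) {
      queue.shift()(); // run one small task
    }
    if (queue.length > 0) ric(work); // more left: wait for the next idle slot
  });
}
```

The browser hands you the deadline object, so you never have to guess when the user went idle; you just check `timeRemaining()` between tasks.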
JAMISON: ‘Request idle’ looks cool. It seems like a good time to do things like analytics, maybe collect a bunch of user data and then you send it off when they’re idle instead of sending it on every click or something like that.
NIK: Yeah actually that’s funny that you give that exact scenario. You’re going right at the heart of the type of stuff that you might do. But for that scenario there’s another new API called the Beacon API.
JAMISON: Oh, jeez. Interesting. I’ve heard of Beacon but I didn’t know that’s what it was for.
NIK: Yeah. If you’ve ever had to try to write a beacon, it’s surprisingly difficult. You would think, “Oh, I’ll just Ajax some data off.” But when do you send that request? So you might think, “Oh, I’ll do it inside of the unload event.” Okay, well, most browsers actually don’t allow you to make asynchronous calls inside of unload. They just ignore that. So, you could do a synchronous call inside an unload, but that completely slows down the user’s experience, especially if you’re sending back data that’s unimportant to the user, like analytics. Well, now you’ve defeated the purpose. You’ve slowed down their experience just so you can watch what they’re doing. So, other people will use a tracking GIF or something like that. But there’s only so much data you can put in URL parameters, et cetera, et cetera. It’s more complicated than it seems like it should be. And so, Beacon is there to make those scenarios much simpler.
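[Editor’s note: a hedged sketch of the Beacon use case Jamison raised, buffering analytics and handing them off when the page goes away, rather than firing a blocking request in unload. The endpoint is made up; the buffer logic is separated from the browser APIs so it works anywhere.]

```javascript
// Collect events in memory and flush them all at once via a supplied
// sender, e.g. navigator.sendBeacon in a browser.
function makeAnalyticsBuffer(send) {
  const events = [];
  return {
    track(event) { events.push(event); },
    flush() {
      if (events.length === 0) return false;
      // splice(0) empties the buffer and hands the batch to the sender
      return send(JSON.stringify(events.splice(0)));
    },
  };
}
```

In a browser you might wire it up like this (the '/analytics' URL is an assumption): `const buffer = makeAnalyticsBuffer(body => navigator.sendBeacon('/analytics', body))`, then flush when `document.visibilityState` becomes `'hidden'`. The browser queues the beacon and delivers it even as the page unloads, with no synchronous request blocking the user.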
CHUCK: You know, I’ve seen memory issues. It seems like they’ve gotten better over the years. Is it still an issue or is that only on mobile devices though?
NIK: It’s certainly more of an issue…
JAMISON: Oh, I can write memory leaks in any language and [inaudible].
CHUCK: But for the most part, you have to create a lot more objects in memory on your page to get it to really hurt yourself on the desktop.
JAMISON: Not if you have a [full] loop. I mean, apps are staying open longer as people are building more and more…
CHUCK: Yeah, fair enough.
AJ: And the more objects you create, the more have to get garbage collected. And that garbage collection has to happen at some point in time. So, if you have a loop where you’re just using a temporary object once per iteration, those objects build up in memory until that loop exits and it’s an appropriate time for them to be garbage collected. And that does cause pauses.
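AJ’s point can be shown with a contrived pair of functions — both compute the same sum, but one allocates a throwaway object on every iteration while the other reuses a single scratch object. The workload itself is invented for illustration:

```javascript
// Allocating a fresh temporary per iteration piles up garbage that the
// collector eventually has to pause and sweep.
function sumWithGarbage(points) {
  let total = 0;
  for (let i = 0; i < points.length; i++) {
    // A new object every pass — each one becomes garbage immediately.
    const tmp = { x: points[i][0], y: points[i][1] };
    total += tmp.x + tmp.y;
  }
  return total;
}

// Same result, but one scratch object is reused, so the loop allocates
// almost nothing and creates no per-iteration garbage.
function sumReusingScratch(points) {
  let total = 0;
  const tmp = { x: 0, y: 0 };
  for (let i = 0; i < points.length; i++) {
    tmp.x = points[i][0];
    tmp.y = points[i][1];
    total += tmp.x + tmp.y;
  }
  return total;
}
```

Modern engines can often optimize the first version away via escape analysis, so treat this as an illustration of the pressure AJ describes, not a universal rule.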
CHUCK: Of course, Chuck. You can’t just be right.
JAMISON: We’ve dove too deeply though. Let’s back up.
JAMISON: Why do I care? This should all be solved for me.
NIK: Well, theoretically from an academic standpoint, if you think about what garbage collection is, this is really supposed to abstract away memory from a developer so that memory looks like it’s an infinite commodity. You just keep on consuming it and the collector is in the background hiding away the fact that you’re running out of it and it’s cleaning up things for you. Theoretically, that sounds great. But in reality, we pay a cost for that. So, obviously there are lots of languages that are not garbage collected. I am lucky to say that in my professional career of 18 years, I’ve never had to write production code that wasn’t managed. Only in university did I have to do that. And so, that makes me feel embarrassed at times but it also makes me feel very happy. [Chuckles]
CHUCK: Oh, yeah. Yeah, I’ve written some iOS code from before they really pushed Automatic Reference Counting on you, which still isn’t garbage collection but it’s mostly automatic.
JAMISON: People sometimes could say you’re sheltered. But to me being sheltered sounds like a good thing because you don’t have to deal with all the rain and the hail falling on you.
CHUCK: Yeah, it’s a nice land to live in, believe me.
JAMISON: Yeah. Yeah.
CHUCK: Malloc and free, baby.
NIK: I have a few less scars, that’s for sure.
AJ: And there’s the security aspect, too. Like, where does every root exploit in the iPhone come from?
CHUCK: Oh, that’s true.
AJ: You know? And Internet Explorer. Every, well, some of those are just poor programming, but so many exploits come from memory management. And so, it’s something that most developers aren’t equipped to do.
JAMISON: Do we, I’m sorry. Did we get a clear enough definition of what garbage collection is? Just in case people aren’t familiar.
NIK: Let me provide one.
NIK: So essentially, as you are creating new DOM elements, variables, all these different objects and things that you’re using in your language, at some point you build more and more and more until you no longer have space to continue to create. The workbench is full. And so, the garbage collector is an engine that will basically pause your work. It says, “Hey, you can’t do anything else on this workbench. I’m going to go through, I’m going to clean up everything that needs to be cleaned up, everything that you aren’t using.” And the way that it does that is there’s actually a tree structure that’s maintained in your code. And it walks through all of that tree structure and it marks things that are being referenced. So, your root object, your window object is referencing the navigator, and the navigator is referencing X, Y, and Z. And so, it keeps track of everything that’s being referenced. And anything that does not get that check mark, that “Check, this is still being used,” will get cleaned up.
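The mark-and-sweep walk Nik describes could be sketched as a toy function — the Map-based heap model here is invented for illustration; real engines operate on raw memory, not a Set of ids:

```javascript
// Toy mark-and-sweep: walk from the root, check-mark everything
// reachable, then clean up everything that never got a mark.
function collect(heap, root) {
  // heap: Map of id -> { refs: [ids] }; root: the id to start from.
  const marked = new Set();
  const stack = [root];

  // Mark phase: follow references from the root, check-marking each
  // object that is still reachable.
  while (stack.length > 0) {
    const id = stack.pop();
    if (marked.has(id)) continue;
    marked.add(id);
    for (const ref of heap.get(id).refs) stack.push(ref);
  }

  // Sweep phase: anything without a check mark gets cleaned up.
  for (const id of [...heap.keys()]) {
    if (!marked.has(id)) heap.delete(id);
  }
  return marked;
}
```

So `window` referencing `navigator` keeps both alive, while an object nothing points at — no matter how big — gets swept.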
And so, this actually reminds me of, we just had Thanksgiving here in the States last week. And I was the guy in charge of cooking and so there was a huge mess. There was flour all over the place and I was making gravy and cranberry sauce. And my sister-in-law would come behind me and say, “Nik, are you done with this knife? Are you done with this spatula? Are you done with this pot?” And if I said yes, she washed it and put it in the dishwasher so we could keep the counter free for me to continue to work and make more things.
JAMISON: That’s actually a good analogy. I’m going to steal that.
NIK: [Chuckles] Take it. Yours.
NIK: You just have to attribute me. So, every time after you say it, you say, “By Nik Molnar.”
JAMISON: [Laughs] Done.
NIK: Now, a little earlier we were talking about, well, does memory even matter anymore? Maybe it only matters on mobile. Well, we all know that mobile represents more and more of the usage of our applications. So, mobile is important. Yes, it does have less memory generally. So, you’ll feel that pressure more quickly than you would on a high-powered desktop machine. But you’ll still run into memory management issues on a high-powered desktop machine as well. And so…
AJ: Well, and phones have gotten to the point where they almost have as… they in many cases have as much memory as your cheap laptops that you’re getting at Walmart.
NIK: Yeah, exactly. The bottom line is though that as we continue to get more and more power on the hardware, we’re getting more and more power in our software, right? We were just talking about the Web Animations API or there’s Web Audio, or putting video into websites left, right, and center nowadays. So, every time we get a larger counter, we find more tools and more things that we want to make and create that can consume that memory more quickly.
But the other thing to realize is you don’t get all of the memory on the machine. So, if I’m looking at my laptop and it has eight gigs of RAM in it, I don’t get eight gigs of RAM to run my website. My operating system is taking at least half off the top and then it’s sharing that across all of the different apps that I might have open between my browser and my email client and this, that, and the other. And then I’ve got, I can’t even count, I probably have at least 30 tabs open right now that are trying to share memory. That’d be an interesting game, how many tabs do you have open right now? So, the memory space gets whittled down pretty quickly and very easily across applications, mobile or not.
JAMISON: So, that’s kind of laying out the problem and why it exists. What do you do about it? Do you just watch Addy Osmani’s video? Do what he says?
NIK: Just do what Addy says. I think that’s a good start.
NIK: The tooling is also built into the dev tools from the browser vendors. So, in Chrome you can run a memory profile. Basically what you usually end up doing is you will take a snapshot. It will capture the state of memory for your app at some point in time. And then you will do the thing that you think is consuming a lot of memory and take another snapshot. And then you can do a diff. And it will tell you literally, “You had X number of new strings allocated, and this is the number of strings that were deallocated, and here’s how long it took to run garbage collection and how many times garbage collection was run, et cetera, et cetera.” And so, you can start to track down exactly why memory is being used or where it’s coming from.
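The snapshot-and-diff workflow is a DevTools feature rather than an API, but the same idea can be sketched programmatically — here with Node’s `process.memoryUsage()` standing in for a real heap snapshot, which gives far coarser numbers than DevTools’ per-type allocation counts:

```javascript
// Snapshot, do the suspect work, snapshot again, diff.
function snapshot() {
  return { heapUsed: process.memoryUsage().heapUsed, at: Date.now() };
}

function diff(before, after) {
  return {
    heapDelta: after.heapUsed - before.heapUsed, // net bytes allocated
    elapsedMs: after.at - before.at,
  };
}

const before = snapshot();

// The "suspect work": allocate and retain 100k strings, the kind of
// thing a DevTools diff would report as newly allocated strings.
const leak = [];
for (let i = 0; i < 100000; i++) leak.push('string number ' + i);

const after = snapshot();
const result = diff(before, after);
// result.heapDelta should be noticeably positive, because `leak`
// retains everything and none of it could be collected.
```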
JAMISON: No, we can just post it in the show notes and then you’ll be a wizard.
JOE: Is there an easy way to tell when you have a performance problem if it’s a memory problem versus an actual performance problem?
NIK: So, memory problems can happen in two different places and that’s going to be on the server side or the client side. Typically on the server side you will see memory problems resulting in longer than normal server response times, especially if it’s intermittent. Because what ends up happening is if you run into a huge problem and garbage collection happens, or the application just might get recycled entirely by IIS or Apache or whatever web server you’re using, that usually means that your caches are also getting flushed, the data caches that you have, not HTTP caches. And so, the next time you run that request, it’s got to go and query all the data again and things like that.
The other thing that you might want to do is at least just turn on a counter in whatever tool of choice that you have for keeping track of when the garbage collector runs. And so, the memory profilers in all different languages will do that. And so, if you see that you’re getting a lot of GCs, garbage collections, running, then you probably have a memory problem. And so, usually it’ll draw a graph that shows your memory usage. And so, if your memory usage continues to go up and up and up, that might be a problem. But the more alarming one is when it goes up and then it falls back down and then it goes up again and it falls back down. And it kind of makes a saw tooth pattern. And that’s problematic because you know that you’re continuing to hit memory pressure. And when it drops back down again on the saw tooth, that’s usually synced up with garbage collection running. And so, when you see that, memory usage isn’t the problem. Having to collect it so often typically is.
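As a toy illustration of that saw-tooth, here is a small detector that counts sharp drops between consecutive memory samples — the samples and threshold are invented; real profilers plot and annotate this for you:

```javascript
// Count likely garbage collections in a series of heap-usage samples:
// each drop steeper than dropThreshold between consecutive readings is
// treated as the falling edge of a saw tooth, i.e. a collection.
function countCollections(samples, dropThreshold) {
  let collections = 0;
  for (let i = 1; i < samples.length; i++) {
    if (samples[i - 1] - samples[i] >= dropThreshold) collections++;
  }
  return collections;
}
```

A steadily climbing line produces zero drops; a saw tooth produces one drop per tooth, which is the “collecting very often” signal Nik warns about.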
JAMISON: I think that problem seems like it’s a bigger problem the more performance sensitive your application is, too. Like I know on games they spend a ton of effort reducing garbage collection pause times and things like that.
NIK: Yeah, exactly. Because games are extremely low latency, right? You don’t want to go to shoot the bad guy and then garbage collection happens in your root.
JAMISON: Sure. [Chuckles]
NIK: You know, I think the other thing is SPAs are very popular and they’ve been getting very popular for a couple of years now. But web developers haven’t necessarily had long-lived pages. You make another request, you’ve completely dumped the memory space, you download the new page and you start over again. And so, we traditionally haven’t been running into memory pressure situations. And I think that now, as you’re putting more and more client-side functionality into the app, is when you’re really starting to see that. So, it’s kind of a new discipline for us and certainly not one that I’m an expert in. You guys could probably do a whole other show with a memory wizard and talk just about that.
AJ: So, if you are interested in learning about interesting problems and solutions in garbage collection, Golang 1.5 came out a little while ago and they seem to have found the holy grail and they’ve got a talk that I’ll link to that’s pretty cool.
JAMISON: As far as how to do garbage collection well, you mean?
AJ: Yeah, they do it multi-threaded and they make it so there’s no GC pause because they’ve round-robined between the threads. Whichever thread is allocated memory has to dedicate some time to cleaning up memory. And then there’s only a small pause to make sure that each thread has done its own work. So, it has only a very little bit of memory to go through instead of having to go through all the memory of all the threads.
JAMISON: That sounds hard. I’m glad someone else did it.
AJ: [Laughs] Yeah.
JAMISON: [Chuckles] It sounds like they did a great job.
AJ: It sounds like it.
NIK: Yeah, I have no experience there so I can’t comment. But that sounds pretty incredible. I know that Go is getting a lot of attention right now for a lot of the right reasons. So, it’ll be interesting to see how that community grows over time.
AJ: Yeah, if they were dynamic, you could load stuff on the fly, I’d be using Go instead of Node right now.
CHUCK: Alright, then. Before we get to the picks I just want to acknowledge our silver sponsors.
[This episode is sponsored by Thinkful.com. Thinkful.com is the largest community of students and mentors. They offer one-on-one mentoring, live workshops, and expert career advice. If you’re looking to build a career in frontend, backend, or full-stack development, then go check them out at Thinkful.com.]
CHUCK: Jamison, do you want to start us off with picks?
JAMISON: So, I’ve just been listening to, it feels more like books on tape than a podcast just because they’re so long and they’re these connected stories. I listened to one about Xerxes, the Persian emperor guy. And then there’s this series of five, it’s like 20 hours on World War I. And I always liked history in school and then I fell away from it. So, it’s been fun to learn about this stuff again. So, he’s got a bunch of them for free and then you can buy his back catalog, basically. Because the production value is high. You can tell he puts a ton of time into them. It’s not just like… yeah, so he makes money by selling old ones, basically.
My next pick is this article called ‘Empirical Programming languages’. Oh, no that’s the URL. The article is called ‘Static vs. Dynamic Language: A Literature Review’. There’s a lot of debate about if dynamic languages are better or statically typed languages are better. And there’s been a bunch of studies about the productivity and stability benefits of either one. And this article just looks at all the studies and points out some that don’t really tell you anything useful and some that seem like they tell you useful things. So…
JOE: And makes a final conclusion for us to know?
JAMISON: No. The final conclusion is a big shrug. It’s an ‘it depends’, which is…
JOE: Oh, I could have done that then.
JAMISON: Which is helpful to cut through, there’s so much rhetoric around this, right? Especially if you’re into the statically typed purely functional languages. They talk so much about how it’s just so amazing and nothing can ever go wrong and there’s no downside. And there’s some research that suggests there’s a downside to everything. Surprise. But it’s really long but it’s also really well written.
And then my next two picks are less [chuckles] they require less of your brain power. One is this guy’s Tumblr. He’s called TJ Fuller and he just makes awesome GIFs. I posted one in the Skype chat and it terrified AJ. They’re kind of these weird glitchy animated GIFs of animals.
And then the other one is Pickle Cat. It’s this WebGL HTML5 audio experiment of a cat shooting pickles with its laser eyes. Those are my picks.
CHUCK: Ooh, you know that’s quality. Pickles and cat.
JAMISON: Yup. And laser eyes.
CHUCK: And laser eyes. Of course. Alright, Aimee, do you have some picks for us?
CHUCK: Alright. Joe, what are your picks?
JOE: Oh, those guys have so many awesome picks. I don’t want to follow that show.
JAMISON: Yeah, it became…
JOE: But I will.
JAMISON: Oh, okay. [Laughs]
CHUCK: I was holding out.
JOE: [Laughs] The suspense is killing you?
CHUCK: That’s right.
JOE: Alright. I’m going to pick a song that Pink did. She did a cover of Bohemian Rhapsody. And she did it at a live show in Australia. And it was really awesome. Strangely enough, I heard it while I was driving through Germany near Frankfurt in the middle, well not in the middle of the night, but late one night, on the way to a castle, which makes the story so much more interesting, right?
NIK: Joe, I’m not familiar with that song. Can you sing a little bit for us?
JOE: You know what? I could but I’d have to charge you for it. I have a strict copyright on my voice. So no, I’m not going to bludgeon your ears with my voice. But Pink did an awesome job covering Bohemian Rhapsody. The problem is, it’s not apparently available. It’s not on Spotify or anything like that. But you can find it on YouTube, a really good recording of it from her, this live show she did in Australia. And apparently she never put it on an album as far as I can tell. I think you can buy the CD for the live show of Australia but I couldn’t find it streaming anywhere. But I really liked it. I thought it was awesome. So, that’s going to be my first pick.
And then my second pick is going to be, since we’re talking about such an erudite concept today, I’m going to pick Rich Hickey’s talk ‘Design, Composition, and Performance’. One of those talks that really every developer should watch, whether it’s a rite of passage or whatnot. It’s a great talk and absolutely illuminating. And so, that will be my second and final pick.
CHUCK: Alright. AJ, what are your picks?
AJ: Alright. So, I’m going to pick Undisclosed, a podcast that is kind of a sibling podcast to Serial, which is about the case of a high school kid who got put in jail for, who was convicted of committing murder without any evidence against him. And Undisclosed, they go into really, really nitty-gritty details about the case. And so, I’ve listened to all of Serial and all of Undisclosed up to this point. And it’s just been really enlightening and scary. And it makes me think that if I ever am on trial for something, please hire me an infographic artist, not a lawyer.
AJ: Because people just need to see pretty picture and know that these things aren’t possible because these times overlap and therefore I’m not guilty. Oh, and by the way, there’s no evidence against me.
And then there’s a YouTube channel called Gaming Historian which is really cool. And in particular he has a video on the history of himself, how he became the Gaming Historian. And I thought it was very insightful and inspiring because he just talks about how, against opposition and people telling him, “You’re an idiot. You can’t actually be a gaming historian. That’s not a thing,” he did it anyway and has become successful. And I just thought it was really cool. And that’s all I have for you today.
CHUCK: So, there are going to be a bunch there that you’re going to want to go to and you can buy a three-pack, a six-pack, a nine-pack, or a season pass which gets you into all of them. I’m also working with a few people to get developer leadership podcasts. I’m talking to Marcus Blankenship about that. And I’m also talking to Erik Isaksen who does the Web Platform Podcast. And we’re working on putting together a Web Components Conference. So, if you’re interested in any of that, you can go check it all out. It should be on AllRemoteConfs.com and if it’s not then I’ll update you in a future show.
Finally, I’ve gotten addicted to two games on my iPhone that are pretty similar. The first one is Clash of Clans. And I don’t know. I’ll probably be not addicted to it in another week. But for right now, I’m really enjoying it. So, I’m going to pick that. My handle on there is the same as my handle everywhere else. So you should be able to figure out who’s me.
The other one is Star Wars Commander. And it’s a lot like Clash of Clans except it’s all Star Wars themed. So, you can join the Rebellion or you can be smart like me and join the Empire. So anyway, super cool. A lot of fun. So, I’m going to pick those.
Nik, what are your picks?
NIK: I went with a bit of a holiday shopping theme, so it’s kind of a gift guide for geeks. Things that I’ve liked or that my friends have liked that I thought were cool. The first is a smart credit card. I think Coin currently is the only one shipping. I have a Coin 2.0 and it lets me switch between my AMEX and my Visa and my debit card all with carrying just one piece of plastic in my pocket.
The next is the Airhook. This was Kickstarted and it’s just about to ship. It’s this little thin piece of plastic that you, if you’re on a plane a lot like I am, you can open up a tray table, you jam this in there and you close the tray table, and it’s a stand for your phone or your tablet and a drink, because I found myself looking down at the open tray table all the time and getting neck injuries watching videos on my iPad. And so, this lets me prop it up and mount it to the back of the seat really easily.
CHUCK: I need one of those.
CHUCK: I mean, that sounds really cool.
NIK: Yeah, it’s pretty slick. It’s like $20 and I’ve yet to receive mine. I’m supposed to be getting it in a couple of weeks. But I’m pretty excited about it and the videos for it look pretty nice.
CHUCK: So, is it too late to support the Kickstarter campaign?
NIK: I think it’s too late to support the Kickstarter but you can still pre-order them on their website.
NIK: TheAirhook.com. And then for the last one, I was watching the Thanksgiving Day parade and I heard Al Roker or somebody say something that I hadn’t heard before that I thought was interesting, which is there was a float by GoldieBlox, B-L-O-X, which is a children’s toy company. They make toys particularly targeted at young girls, trying to get them interested in engineering and STEM fields. And the float that they had going down [inaudible] in New York City, you could actually go and buy the little toy and make that float. And that was the first time you were able to make a float that you saw in the parade. And so, GoldieBlox has this little doll and a story that goes with each little kit. And you read the story about how GoldieBlox used her engineering prowess to save the day. And then whatever little simple machine she built, you have all the tools to go and build the same thing. So, there’s like a fun little dog watcher and things like that. And so, all the little girls in my life, my nieces and my friends’ daughters and so on, that’s what they’re getting from us for the holidays. So, those are my picks.
CHUCK: Alright. Well, if people want to follow up with you or see what you’re working on, I know we asked this before, but yeah, where do they go?
NIK: Best way to do that is my blog or Twitter. My blog is Nik Codes, N-I-K codes dot com. And my Twitter is NikMD23. And since we last chatted my new Pluralsight course came out.
NIK: Yay. So, my ‘WebPage Test Deep Dive’, which we discussed a little bit in the last episode. But this is nearly three hours covering every different feature of WebPage Test. And I got a lot of good feedback on my other course after the last show. So, I would love for the audience to do the same with my new course and tell me what’s good and bad, so I can make the course as good as it can be. So, that’s where to follow up with me.
CHUCK: Alright. Well, thanks again for coming. It’s been a lot of fun. We’ll wrap up and we’ll catch you next week.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]