JavaScript Jabber

JavaScript Jabber is a weekly discussion about JavaScript, front-end development, community, careers, and frameworks.




JSJ 243 Immutable.js with Lee Byron

1:05 – Introducing Lee Byron

1:55 – Immutable.js

4:35 – Modifying data and operations using Immutable.js

7:40 – Explaining Big-O notation in layman’s terms

11:30 – Internal tree structures and arrays

15:50 – Why build with Immutable.js?

23:05 – Change detection with immutables

25:00 – Computer science history

34:35 – Other positives to using immutables

37:50 – Flux and Redux

39:50 – When should you use immutables?

46:10 – Using Immutable.js instead of the built-in JavaScript option

51:50 – Learning curves and learning materials

54:50 – Bowties


Contractor by Andrew Ball

17 Hats (Charles)

Asana (Charles)

Call of Duty Infinite Warfare (Joe)

LEGO Star Wars (Joe)

Advent of Code (Lee)





Charles:        Hey everybody and welcome to episode 241 of the JavaScript Jabber Show. This week on our panel we have Joe Eames.

Joe:               Hey everybody.

Charles: We have Aaron Frost, pinch hitting for us.

Aaron: Hello.

Charles: I’m Charles Max Wood. Just a quick shout out about DevOps Remote Conf and JS Remote Conf coming up in January and March. You can still get early bird tickets, but not for very long. We also have a special guest this week and that is Lee Byron.

Lee:               Hi everyone. Good to be here.

Charles:          Were you on Ruby Rogues? Is that where you were on before?

Lee:                I think so, yeah, a while ago, talking about GraphQL.

Charles:          Somebody said Immutable.js, and you’re the guy. Do you want to introduce yourself then we’ll talk about what Immutable is and why it is cool?

Lee: Sure. I work at Facebook on a team called Product Infrastructure, where we build tools, languages, libraries, those sorts of things, and services for product teams to help them build better products. In the course of my time there, I’ve worked on a lot of stuff, including React, GraphQL, and Immutable.js.

Charles: Very cool. What is Immutable.js? What does it do? What’s the payoff that people are looking for there?

Lee: Yes. It’s a JavaScript library that implements a collection of data structures that are really interesting. They’re called persistent immutable data structures. What they do is let you describe a collection of data. There are two major types of collections that we built in there. One is the list, which is like an array: ordered, with a bunch of values in it. And then we have a map, which is a keyed collection: a key maps to a value.

For those two kinds of abstract collections, we have persistent immutable implementations. What that means is you can create one of these collections, and then rather than editing it in place the way you would set an index in an array or set a key in a JavaScript object, you actually create a new copy of that thing with the change applied. The previous version doesn’t change, and the new version has the change applied. Lots of really interesting things happen once you have this property of immutability. Being persistently immutable means that all the old versions persist after you have the new version with the change applied. The versions of the structure you’re no longer using start to degrade into pieces and get recycled as soon as you stop referencing them. That’s the model we have here.

What happens when you have these things is that if you have the same reference to one of these collections, you know for a fact that nothing has changed about it. But if you have a JavaScript array and you called push on it, pushed another object onto that array, and then you say, “Okay, I have this array, triple equals, and here’s the previous array I had,” it comes back true. You have no idea if that array changed under the hood. In fact, we called push on it, so it had changed. Not being able to know, just with triple equals, whether something changed or not means you throw away all these really awesome performance techniques that you can use in your apps. The primary one is memoization: every time you want to compute something, you look at your previous input, and if the new input you’ve got is the same, you just don’t do the computation. Instead, you take the output of that computation from last time and reuse it. It turns out that trick is extremely helpful for building UI applications, especially when you have components like React’s.
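The memoization trick Lee describes relies on reference equality as the change check. Here is a minimal sketch in plain JavaScript; the frozen array just stands in for an immutable collection, and `memoizeByReference` is an illustrative helper, not part of Immutable.js:

```javascript
// Reference-based memoization: if the input is the same reference as last
// time, immutability guarantees nothing inside it changed, so the cached
// result can be reused without redoing the computation.
function memoizeByReference(compute) {
  let lastInput;   // previous argument, compared with ===
  let lastOutput;  // cached result for that argument
  let called = false;
  return function (input) {
    if (called && input === lastInput) {
      return lastOutput;           // skip the computation entirely
    }
    called = true;
    lastInput = input;
    lastOutput = compute(input);
    return lastOutput;
  };
}

let renders = 0;
const render = memoizeByReference(function (list) {
  renders += 1;
  return 'rendered ' + list.length + ' items';
});

const data = Object.freeze(['a', 'b', 'c']); // stand-in for an immutable list
render(data);
render(data);         // same reference: cached, no re-render
console.log(renders); // 1
```

The whole "has anything changed?" question collapses to a single `===` comparison, which is what makes it cheap enough to apply everywhere.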

Aaron: First question: I’ve got a big list of data, say it’s a big list of users. Instead of modifying one of the users’ first names, I have to create a whole new structure. Obviously that’s 100 times slower, right?

Lee: Great question. If we were to build it in the most naïve way… let’s back up and say, “All I really want to do is have a JavaScript array, and I just want to treat it like it’s immutable.” In order to do that, we would have to do exactly what you just said. We want to change something in one of the objects in that array, and in order to make that change and not mutate the array, we first have to copy the whole thing. Then in that copy we can make our change. That would be really slow, because every time we change anything, we’re copying everything.

The cool thing about persistent immutable data structures is that while they give the appearance of being like an array, like a list, under the hood they’re actually trees. And when you have a tree, you can do this really cool thing where you recycle the parts of the tree that definitely haven’t changed in an operation. If you’ve ever studied something like a B-tree or binary trees, imagine in your head walking down from the top to one of the leaf values. All the stuff you never touched on the way down to that leaf gets recycled, and only the stuff you touched along the path to the value you’re changing ends up getting copied. In practice that ends up being a very small percentage, because we have very wide trees rather than very deep trees for these data structures.
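The naïve approach from the previous answer, copy everything and then change the copy, can be sketched in a few lines of plain JavaScript (`persistentSet` is an illustrative name; this shows the observable behavior, not Immutable.js internals):

```javascript
// Naive persistent update: "setting" index i returns a new array with the
// change applied; the original is never mutated. This full copy is O(n) per
// write, which is exactly the cost Immutable.js's trees avoid.
function persistentSet(arr, i, value) {
  const next = arr.slice(); // copy (Immutable.js would share structure instead)
  next[i] = value;
  return next;
}

const v1 = ['a', 'b', 'c'];
const v2 = persistentSet(v1, 1, 'B');

console.log(v1[1]);     // 'b'   -- the old version persists unchanged
console.log(v2[1]);     // 'B'   -- the new version has the change applied
console.log(v1 === v2); // false -- a change always yields a new reference
```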

Charles: It’s essentially a tree of diffs?

Lee: It’s a tree of values. When you do these diffs, when you’re creating a new version of the immutable data structure, you’re actually taking the old tree and creating a new tree that’s going to have the new value in it. As you’re doing that, any parts of the old tree that you know for sure aren’t going to change, you can just recycle wholesale. No copying; you literally just point to the same spot in memory. Then for the parts that do change, you’re creating new branches in the tree. What that gets you, when we talk about performance, we talk about things like O(1), O(N), O(N squared). If you take the naïve approach where you copy everything first and then make your change, that means changes are O(N): you have to keep the size of the whole collection in mind any time you make a change.

When you’re mutating data directly, we say it’s O(1), right? You only have to consider the one piece of data you’re changing; you don’t have to consider anything else about the collection. With persistent immutable data structures, it’s O(log N): you take the log of the size of the collection, and that’s roughly the amount of work you end up doing to make one of these changes.
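The path-copying idea Lee describes can be sketched with a toy fixed-depth tree. A branching factor of 4 and depth of 2 are used here purely for brevity (Immutable.js uses 32-way nodes of variable depth), and `treeSet` is an illustrative name, not a real API:

```javascript
// Path copying in a tiny fixed tree (branching factor 4, depth 2, 16 slots).
// Only the nodes along the path to the changed slot are copied; every other
// subtree is reused by reference, with no copying at all.
const BRANCH = 4;

function treeSet(root, index, value) {
  const hi = Math.floor(index / BRANCH); // which child subtree
  const lo = index % BRANCH;             // slot within that subtree
  const newChild = root[hi].slice();     // copy only the touched leaf node
  newChild[lo] = value;
  const newRoot = root.slice();          // copy the root...
  newRoot[hi] = newChild;                // ...and point it at the new child
  return newRoot;                        // all other children are shared
}

const leafA = [0, 1, 2, 3];
const leafB = [4, 5, 6, 7];
const v1 = [leafA, leafB];
const v2 = treeSet(v1, 5, 99);

console.log(v2[1][1]);        // 99   -- change applied in the new version
console.log(v1[1][1]);        // 5    -- old version untouched
console.log(v1[0] === v2[0]); // true -- untouched subtree recycled wholesale
```

Per update, the work is proportional to the depth of the tree, which is where the O(log N) bound comes from.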

Joe: For people who aren’t necessarily familiar with Big-O, can we get a very brief explanation, and specifically how log N fits into that, in maybe real terms?

Lee: Sure. When we say N, N just stands for the maximum number of things we’re talking about. Say we have an array with 100 items in it and we want to make an immutable copy of that array. First we’re going to call slice or something on that array to get a copy, and that’s going to have to go to every one of the 100 slots, look at the data, and copy it over to the new array, right? It’s going to take 100 operations to do that, so we say it’s O(N), N being the number of items and operations.

Then if we were just going to mutate that array, we care about one thing. We could have an array of 10 things or a million things, it wouldn’t matter; we’re only updating one piece. There we say it’s O(1) rather than O(N): we don’t even care about the size of the array, we only care about the one thing we changed. You can start to play with that N. O(N squared) means, if you have an array of size 100, it’s going to take 100 times 100 operations to do the work you want to do. When we say O(log N), that means the logarithm of N, the opposite of taking a power, and for 100 that’s maybe something like two or three. When we talk about Big-O notation, we often drop the coefficients: where you’d say something like 2N or 3N, we just say N. But in practice that stuff matters, especially when you’re talking about relatively small amounts of data. When I say relatively small, I mean a couple of thousand things in a list. The difference between 1,000 operations and 2,000 or 3,000 operations is sizeable; that’s a 2-3x difference.
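The arithmetic behind the base-32 point in the next answer can be checked directly with JavaScript's Math.log (the `levels` helper is just for illustration):

```javascript
// Number of tree levels touched per update for a collection of n items,
// for a given branching factor: ceil(log_base(n)).
function levels(n, base) {
  return Math.ceil(Math.log(n) / Math.log(base));
}

console.log(levels(1000, 2));     // 10 -- a binary tree needs ~10 levels
console.log(levels(1000, 32));    // 2  -- a 32-way tree needs only 2
console.log(levels(1000000, 32)); // 4  -- even a million items stays tiny
```

This is why log base 32 behaves almost like a constant for realistic collection sizes.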

When you take that computer science terminology of Big-O notation and convert it into real-world performance tuning, you have to be really careful. Where that plays into this: we say, okay, it’s log of N. But if you remember logarithms, logarithms have a base. Is it logarithm base 2? Logarithm base 100? Those are going to have very different numbers come out. With persistent immutable data structures, it’s logarithm base 32, which is a pretty big base for a logarithm. That means you can have something like 1,000 items in a list, and with 1,000 items, log base 32 gets you something like 2 or 3 or 4, as opposed to log base 2, which might get you 10 or 20. We’re talking about much, much faster.

Oftentimes when we’re talking about these data structures, we say, yeah, updating these things is about as fast as mutating a regular array. We say it’s really close to O(1); in practice it’s O(log N). Hopefully I’m starting to paint a picture of what these things look like. When you make a change, you’re not making a copy of everything, and the work you do is pretty fast. What these things do that’s really cool is give you a nice balance. Immutability gives you really awesome programming capabilities, but if you got it by just copying everything, naïvely, it would be really slow. By using these interesting data structures, you can use the same techniques and still have close to the level of performance that we’re used to in our mutative, imperative programming languages like JavaScript.

Joe: A couple of questions came up as you were describing that. First off, you were talking about the internal structures that are used: when you make a new copy, you don’t have to copy most of the elements, you only have to change one or two.

Lee: Actually, the new tree encodes all of the information about where each of the nodes is, and that’s all it contains.

Charles: That’s right.

Joe: Another question that came up: since it’s a tree, but you said it acts like an array, right? We’re talking about the immutable version of JavaScript’s array, and it’s implemented as a tree. Obviously arrays and trees have different performance characteristics. Does that ever matter?

Lee: Yeah, that’s what I was talking about before. Changing something in a list versus changing something in an array, those are going to have slightly different performance characteristics. In fact, Immutable.js structures, in terms of just raw speed on inserts, will just never beat mutating an array; it can’t be done. Mutating an array is the fastest possible way of inserting something into an array-like data structure.

Immutable.js gives you something that’s array-like in the sense that it has a length, it has index zero, it has index length minus one, and all the indexes in between. But the way the data structure is actually built under the hood is very different from just a block of memory. In fact, most programming languages that you use don’t tell you explicitly what they’re doing. When you say, “Give me this array,” under the hood they’re probably not just allocating a block of memory where “give me index three” means giving you the third byte in that block. They’re probably doing something a little more sophisticated. JavaScript engines in particular do all kinds of weird things. Say I have an array of size a billion: it’s like, alright, great, that was very fast, because it doesn’t actually create a billion items. It just says, “Here’s an object, and it’ll have a length property of a billion.” Alright, I’ll insert at index 1 million, and it’s not going to create slots 0 through 999,999 for you; it’s just going to create that one piece. That’s really different from what an array actually is if you have a C array, or an array in any language that truly gives you a block of memory. But the trick to all this is that when we talk about arrays, we don’t care about the memory underneath unless we’re truly bit-fiddling C programmers. In JavaScript or any of these high-level languages, we really just care that it has a length, it has index 0, it has index length minus 1, and it has all the indexes in between.
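The engine behavior Lee describes is easy to observe in Node or a browser console; a small illustrative snippet:

```javascript
// JavaScript arrays are not raw memory blocks: setting one far-out index
// does not allocate all the slots below it. Engines store sparse arrays as
// something closer to a map under the hood.
const a = [];
a[999999] = 'x'; // instant -- no 1,000,000-slot buffer is created

console.log(a.length);              // 1000000
console.log(Object.keys(a).length); // 1 -- only one slot actually exists
```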

And then how it’s actually implemented under the hood gives you lots of different kinds of performance profiles, the characteristics that we want. The JavaScript array lets us have really, really big arrays for free, and that’s a nice performance characteristic for JavaScript. Immutable.js collections give you immutable data structures: every time you make an edit, you get a new data structure with the change applied, the old ones don’t change, and you pay a little bit of performance cost for that. The hope is, since we’re talking about performance, that we back away from micro performance benchmarks. It’s really easy to say, okay, here’s what I’m going to do: I’m going to create an array and a for loop from 0 to 1,000, and go, index zero equals zero, one equals one, 999 equals 999, and I’m going to time that. And I’m going to do the same thing for Immutable.js: create a list, do a for loop from 0 to 1,000, set index zero to zero, set index one to one, and time the same thing. Immutable.js is going to be slower than the array, and it would be short-sighted to say, oh, that’s slower, I don’t understand this, I’m going to toss it out. Why would I use this to make my app slower?
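The micro-benchmark Lee describes can be sketched like this. Here a naïve copy-on-write loop stands in for an immutable structure, which exaggerates the gap that Immutable.js's trees narrow considerably, but the shape of the comparison is the same:

```javascript
// Micro-benchmark: in-place mutation vs. copy-on-write updates.
const N = 1000;

// Mutating in place: O(1) per write.
const mutable = [];
let t0 = Date.now();
for (let i = 0; i < N; i++) {
  mutable[i] = i;
}
const mutableMs = Date.now() - t0;

// Copy-on-write: a full copy before every write, so O(n) per write and
// O(n^2) for the whole loop.
let persistent = [];
t0 = Date.now();
for (let i = 0; i < N; i++) {
  const next = persistent.slice();
  next[i] = i;
  persistent = next;
}
const persistentMs = Date.now() - t0;

console.log({ mutableMs, persistentMs }); // timings vary by machine
```

As the conversation goes on to argue, losing this micro-benchmark is the wrong thing to measure; the win shows up in app-level change detection.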

Charles: Yeah. Let me interject. I’m going to be short-sighted: why in heaven’s name would I want to do this then?

Lee: Because when we’re building apps, we’re building big, complicated things with all kinds of really interesting interdependencies, where the performance characteristics of the whole don’t look like the performance characteristics of the micro at all. Making an array insert twice as fast or twice as slow is probably not going to have a meaningful effect on almost any JavaScript application out there. What we do care about is taking the computations in our apps that are the most heavyweight and trying to reduce those as much as possible. For UI applications, that’s primarily figuring out what changed in the application, which views we need to re-render, re-rendering those views, and figuring out which subviews also need to re-render. Those are the things that are actually slow about our apps, not inserting into and removing from arrays.

It turns out there are all kinds of techniques we can apply to make those kinds of problems less expensive, but one of my favorites is memoization, which I talked about before. The idea is basically: if the world hasn’t changed, don’t do anything. If you say, what changed? Something changed, I saw some update, but my particular piece of the world has been left unchanged, so I’m not going to bother redrawing it onto the screen. It turns out that if anything in your data could change at any time, and figuring out what has changed is very expensive, then you end up weighing things like: should I do the work of digging in and trying to figure out whether or not something changed? Or should I just [00:17:33] the torpedoes and re-render anyway? If you’re stuck weighing those, you don’t know which one’s going to be faster, so you just do the simpler thing. But what if I told you that figuring out if something changed is just using triple equals? Well, that’s easy. If the old one triple-equals the new one: return, do nothing, bail. Otherwise keep going and re-render. You can’t do that with JavaScript arrays and JavaScript objects. You can’t use triple equals to figure out if they’re the same or not, because someone could have inserted something into them in another part of your program. It’s really hard for the program to figure out, in a performant way, whether something has changed or not.
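The triple-equals bail-out described here looks something like the following in a React-style update check (the function and property names are illustrative, not a real React API):

```javascript
// Bail out of re-rendering with a single reference comparison. With
// immutable data, identical references guarantee identical contents, so
// this check is both O(1) and correct.
let renderCount = 0;

function shouldUpdate(prevProps, nextProps) {
  return prevProps.items !== nextProps.items;
}

function maybeRender(prev, next) {
  if (!shouldUpdate(prev, next)) return; // bail: nothing changed
  renderCount += 1;                      // otherwise re-render
}

const items = Object.freeze(['a', 'b']); // stand-in for an immutable list
maybeRender({ items }, { items });             // same reference: skipped
maybeRender({ items }, { items: ['a', 'b'] }); // new reference: re-renders

console.log(renderCount); // 1
```

With plain mutable arrays this check would be unsound, because the same reference could have been mutated elsewhere in the program.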

That is the crux of why immutable data is really interesting for building these sorts of applications, UI applications where the high-level stuff is expensive. What we hope is that in aggregate the two things more than balance out: immutable data structures are slightly slower than plain arrays and objects, but in exchange you get to skip a lot of the work that’s really expensive. What I’ve found in using this stuff to build real things is that it doesn’t just balance out; it swings widely in favor of much faster apps.

Aaron: I was just going to say, going back to your very first example of the micro performance measuring people do: the creation of the thousand-length array versus the thousand-length immutable list. The performance of creating the structure is slower with Immutable.js, but then when you need to check if anything’s changed, Immutable is just a triple equals, whereas with the array it’s an N operation to go through each one, right?

Lee: That’s exactly right.

Aaron: And you do that way more often than you create the data structure itself. That’s a massive performance gain.

Lee: That’s the thought. And it’s not obvious at first glance, because if you’re used to building JavaScript apps, then comparing two things isn’t something you do all the time. You’re not doing it, because you know that if you have to walk through two arrays and compare each item, that’s going to be expensive, so why would you even bother? But what’s really interesting is that when we change the performance characteristics underneath these techniques, all of a sudden people go, oh, that’s really cheap, so I’m going to do it everywhere. When something becomes easy and very fast, programmers will immediately recognize it as a technique that we should just be sprinkling over everything. Once you start building apps this way, you realize, oh wow, that part of my app got a little bit slower, I traded an O(1) for an O(log N). But this part of my app went from, I didn’t even know what technique to use, I was just repeating work all over the place, to an O(1) check, a very, very cheap check to figure out whether I should do anything at all before I actually do the work.

Aaron: My brain went to the place of, well, I could just put a changed property on my array or my map. And then I realized, oh yeah, the last time I tried to build a change detection system, that was no fun. If you get it automatically, then yes, I have to remember to use Immutable.Map or Immutable whatever, but then I do get that benefit for free.

Lee: Exactly. I’ve had to build one of those change tracking systems as well, with the dirty bit flag. It is not fun.

Charles: You forget it in one place. That’s the [00:21:29]

Lee: Or you end up with some race condition, where one thing is rendering something: okay, we rendered, let’s flip that dirty bit back to clean. Then meanwhile, another part of your app is updating that thing and setting the dirty bit again. That bug occurs a couple of times, and all of a sudden the browser starts hard-looping and the fans on your machine start spinning, and you’re like, what is going on? It’s this race condition between the two, bouncing back and forth between each other. That’s crazy. That’s no way to spend your time as a programmer, trying to debug those kinds of things.

Aaron: I’ve seen a couple of performance talks from people in different frameworks, I think in both React and Angular 2, where if you’re using Immutable, the operation for a change check actually goes down to O(1). Is that pretty much right?

Lee: Yeah. Exactly right.

Aaron: Which is pretty amazing.

Lee: Yup. That’s the trade that immutable data makes for you in operating on a collection. Let’s compare it to where we were before. The benefit of immutable data is not a new phenomenon; it’s as old as computer science. The new thing, or relatively new, is that a lot of these data structures were discovered and perfected over the last 10-15 years. Before, we were saying that change detection is so important that we’re willing to make updates [00:23:56]. That was academic computer science for decades, and it’s no surprise that it never made it out of the academic lab into building real stuff, because the performance trade-offs just weren’t balanced; they weren’t right in aggregate for the kinds of things we’re building in consumer applications. But then this academic research popped up, where we got these really interesting data structures that say, okay, we’re going to give you the benefit that we’ve noticed is beneficial in academia, but we’re going to keep as much of the original performance characteristics of inserts, updates, and deletes as we possibly can. I think that was the game changer. That’s when you saw these things jump out of academia and start making their way into consumer applications and production services.

Charles: This leads to another question we had, which is that when most people talk about immutability, they’re talking about functional programming. They’re saying, well, we have all these characteristics that make functional programming so awesome, and immutability is one of those things. But it seems like Immutable.js, if you’re talking about benefits, benefits people on the object-oriented and procedural end as well. Is there a connection there? Or do people just shove it into that box so they can ignore it?

Lee: The connection there is a connection from its history. I look at computer science as having a fork in the road that goes all the way back even before it was an academic field, where you have the difference between Turing and his machine: instruction by instruction, move left, move right, read, write, shift, and illustrating that via this set of instructions you could build anything. That was amazing, and that became the basis of the actual electrical engineering and hardware industry behind computer science. It’s no surprise that the basic set of assembly CPU instructions goes all the way back from the first CPUs up to the same ones we have in our machines today: sets of instructions read in linear order. First do this, then that, then this. That goes all the way back to Turing. But we have this other fork in the road, all the way back in, I’m not sure what the decade is, I’m going to get my history wrong, but it was in the early 20th century. I was trying to remember his name, I think it was Alonzo Church; I might be getting my early computer science screwed up, but the other fork in the road was lambda calculus. It didn’t come from a background of figuring out how to run these things in order, this desire to build a machine. That’s what the first branch of computer science was trying to get us to: let’s build real stuff. Lambda calculus came from the world of logic, saying logic and what we have there can help us compute anything; we can literally build anything from this. From that early fork, if you go look back at those original papers on lambda calculus from the 1930s or 40s, it looks like Lisp. And Lisp is one of the oldest programming languages, and it’s based on lambda calculus; it tries to be as direct an implementation of lambda calculus as possible. From Lisp came all these other fascinating functional languages and families of languages, like ML and Haskell, and those all find their basis in lambda calculus. Whereas over in Turing land, in our CPU instructions, we were trying to figure out, well, manually punching cards in the punch card machine sucks, what else can we do? You get the first real computer languages, and pretty quickly in the history of computing languages you get C, and C can compile down to that set of machine instructions. From C you get the family of programming languages that most practicing software engineers use today: everything from Java to JavaScript to Python, whatever.

But what’s happening now, I think, is truly fascinating. It’s been happening over the last 5, 10, maybe 15 years: these two worlds are coming back together, and not just in academia (of course academics are always looking at all these different pieces of computer science) but in our jobs building consumer applications. We’re seeing how pieces of the Alonzo Church, lambda calculus, functional programming world of computer science can be pulled into Turing land, the C, JavaScript, imperative, mutative programming model land, and how benefits from one can impact the other. They go both ways, because for a very long time there was just no good way to build a user interface with functional programming languages. That was true up until the last couple of years. I tried some of them; they were all kind of a pain in the butt to use. I think the first one that was truly exciting and fun to use was by this guy named David Nolen, who built Om, which is a wrapper around React that runs in ClojureScript. Clojure is a Lisp; it follows that long line of programming language history back to lambda calculus and functional programming, and it has immutable data structures, everything you’d expect to find in a functional programming language. But it pulled from the consumer application, user interface engineering world, and showed that these two things could actually line up very well.

That built a bridge between these two communities: functional programmers who want to build production stuff, and production people who want to use functional techniques. We’re now at a really interesting time where there are lots of ideas popping. To go back to the thing you originally mentioned, this sense [00:30:40] that people say immutable data structures belong only in functional programming languages: that’s where they started, because in lambda calculus there’s just no real concept of mutating a buffer of memory, because there was no concept of a buffer of memory in the first place to mutate. Instead, what you had were maps and sets, right? Okay, I have the set 1, 2, 3, plus the set 4, 5, 6, equals the set 1, 2, 3, 4, 5, 6. It didn’t change the previous sets; I just got a new set. That’s just how math works, and so it made sense that lambda calculus, which came out of the world of logic and math, would take on those properties as long as it could. The people building these functional programming languages, their impulse was: don’t break that, right? Figure out how you can make the data structures under the hood work such that the language retains the properties of lambda calculus, math, and logic. Whereas in Turing land, the very first thing we got was: okay, you have a tape, you have a buffer of memory. That was step one, and step two was: now imagine you have this thing that moves around and reads and writes and shifts left and right. The primitive pieces of the two worlds were so different from each other that they evolved with one being immutable data and one being mutable data, in those two different branches of history.

But the thing I think is interesting is that as we learn stuff, as we develop all kinds of new languages, new techniques, new principles and properties in each of the two branches of computer science history, we’ve found out this is not a modern thing. There’s actually a cool paper you can find that compares different points in computer science history where people in the field of computer science and the field of logic have stumbled upon the same idea and written papers about it under different names, and somebody else later goes and shows that this logic principle and this computer science principle are the same thing. The same thing is happening in the two branches of computer science history, where people are pulling ideas from one side and putting them into the other. Compiler theory is huge on the Turing side: how do you take a programming language and emit the perfect set of instructions for a CPU so the thing runs super fast? All that knowledge and history built up on that side of computer science helped the functional programming side figure out how to build really awesome compilers for functional programming languages, such that they could actually run in a reasonable amount of time and be useful for solving regular problems. On the functional side we get stuff like immutable data structures, and map and filter and reduce, all these really interesting concepts that make just as much sense in JavaScript, C, [00:33:50], and imperative, mutable environments; we can reap very similar benefits on that side as well.
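The set example from the lambda calculus side can be sketched with plain JavaScript Sets standing in for Immutable.Set (which behaves this way by construction):

```javascript
// Math-style set union: combining two sets produces a new set and leaves
// the originals untouched, just like the 1,2,3 + 4,5,6 example.
function union(a, b) {
  return new Set([...a, ...b]); // build a new set; mutate neither input
}

const s1 = new Set([1, 2, 3]);
const s2 = new Set([4, 5, 6]);
const s3 = union(s1, s2);

console.log([...s3].join(',')); // 1,2,3,4,5,6
console.log(s1.size);           // 3 -- the previous sets didn't change
```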

Really, what this boils down to is that we have two branches of history, and we just have to be really careful not to be fully enclosed in one side of them. Otherwise, there’s this whole half of the world of tools available to us that we’d be completely ignorant of; it would be invisible to us, and that would be sad.

Charles: The connection is basically pedigree. You talk for five minutes and I boil it into one sentence. Yeah.

Lee: That’s cool. You do what you do and I do what I do.

Charles: Are there any other payoffs, then, besides the change detection and the speed of comparisons that you get from immutability?

Lee: That's definitely the big one. That change detection being super, super fast is
definitely the big one. There is a handful of others that you get that are side
effects of that. One of them, the first thing that people bump into
when they use Immutable.js or experience immutable data structures in any
environment, is they realize that up until that point the only way they had of making
something change in their app was to mutate the thing, and all of a sudden they
realize that all of the ways they structured their apps before don't fit
anymore and they have to rethink things. As soon as they do that, they find
themselves pushing all of the mutation, all of the points where things can
change, to one place in their application, which can feel wrong from the sense of
decoupling and making sure that each part of your application is well isolated and
modular. There are definitely ways of maintaining modularity and still having this
property that there's one place things can change. But the really awesome
thing that comes from that, once you get there, is that if there's some bug in
your system where something is changing in a way that you didn't expect,
there's one place you go looking for that thing, right? And this entire class
of problem, the race condition. What's a race condition? A race condition is
where two effects on the world can happen out of order from each other,
right, and it's hard to figure out how that's happening. That entire class of
problems is dramatically reduced when you have a single place through which all edits
need to flow, because you can just say, okay, I'm just going to log every time
an edit comes through here, and I'm just going to look at the order of the things
that are coming out. You can't have two completely isolated parts of your
application mutating things at the same time [00:36:47] to one another. It
doesn't happen anymore. It can't happen, because with the nature of immutability,
you can't just change that thing. You've got to go figure out who owns it and then
say, here's the change I'd like to apply, to the part of the app that owns that
thing, and then let the app carry on from there. I've found that in the apps
where we've gone full force with this technique, the bug tracking and fixing
process ends up being much faster than in many of the apps I built before,
just because the structure of the application makes it easier to figure those
things out, and people step on each other's toes less often.
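
[A plain-JavaScript sketch of the two ideas Lee combines here, one place through which all edits flow, and change detection as a single reference comparison. The `applyEdit` function and the edit shape are hypothetical, invented for illustration; they are not an Immutable.js API.]

```javascript
// If every state update flows through one function that returns a *new*
// object when something changed (and the same object when nothing did),
// then "did anything change?" becomes a single === comparison.
function applyEdit(state, edit) {
  // Hypothetical edit shape: { key, value }.
  if (state[edit.key] === edit.value) return state; // no-op: same reference back
  return { ...state, [edit.key]: edit.value };      // change: fresh object
}

const s1 = { count: 0 };
const s2 = applyEdit(s1, { key: 'count', value: 1 });
const s3 = applyEdit(s2, { key: 'count', value: 1 }); // no-op edit

const changed12 = s2 !== s1; // true: something changed, re-render
const changed23 = s3 !== s2; // false: skip the work entirely
```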

Joe: Is that more because of the Flux/Redux structure? Because part of that is just using the immutable data. Say I completely had no idea what Flux or Redux were and I just decided to start using immutable data. Would I see the same payoff, or is it the combination of the two that's actually the payoff?

Lee: I think you found the right keywords for the JavaScript audience, Flux and Redux. That is the pattern that makes this possible, but the thing that you find out is that that pattern, at least a simplified version of the Flux or Redux pattern, is inevitable.

Joe: Sort of naturally emergent?

Lee: It is really emergent. Exactly. There is truly no other way to do it. You can't have multi-directional data flow, because immutable data only goes one way. Even if you had no idea what Flux or Redux were but did know what immutable data was and you went off to build an application, you would end up rebuilding some version of Redux. I think Redux is a really nice version of that idea, but there are definitely a thousand and one different ways that you could build something like Redux to work with immutable data. Actually, in some of the apps that we build internally, we don't use Redux, and we don't use the specific version of Flux that Facebook's talked about before. We've actually built something that's much
simpler that we know works super well with Immutable.js and that works
particularly well with the [00:39:04] of the particular apps that we are building.
They just kind of get built into the architecture of the app, but it is
that same pattern, exactly the same idea of one-way data flow, that kind of
Redux reducer model. That's the key insight in letting you modularize your code,
having different operations live in different files. They're not coupled
with each other, but there's still one place in the app where all those pieces
get knitted together in a nice bundle.
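
[The one-way data flow Lee describes can be sketched from scratch in a few lines; this is a toy illustration in the spirit of the pattern, not the real Redux API. `createStore`, the action shapes, and the reducer are all invented for the example.]

```javascript
// Minimal one-way data flow: every edit is a plain action object funnelled
// through one reducer, so logging every change is trivial.
function createStore(reducer, initialState) {
  let state = initialState;
  const log = []; // single choke point: every edit passes through dispatch
  return {
    getState: () => state,
    dispatch(action) {
      log.push(action);                // record the edit
      state = reducer(state, action);  // produce the next immutable state
    },
    log,
  };
}

const store = createStore(
  (state, action) =>
    action.type === 'add' ? { items: [...state.items, action.item] } : state,
  { items: [] }
);

store.dispatch({ type: 'add', item: 'a' });
store.dispatch({ type: 'add', item: 'b' });
```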

Charles: We're heading down that road where we're getting a little more concrete, right? You start doing this, you start seeing these effects, and it sort of converges on this kind of an implementation. My question is, on a new app, should people just pull in Immutable and something like Redux, or should they evaluate first whether they should put it in, and how do they do that? The second question has more to do with existing apps, where I see that this might fail.

Lee: Cool. I will tackle the first one first. Should you evaluate before you jump in? Absolutely. I happen to think that immutable data is a great solution for apps that have the problem of expensive render loops, where optimizing that render loop is something that you really want to do. If you're building an app and you're starting with a clean slate, new file, open it up, trying to figure out what to do next: if you envision in your mind that you're going to get to the point where that's going to be important to you, then maybe you start off with Immutable, especially since the application
structure is going to dramatically shift towards a more Flux/Redux style if
you do that in the first place. If you don't foresee hitting those performance
problems, then maybe don't buy yourself the additional mental overhead of
having to think that way first. However, I do think that it's worth considering the
architectural concepts on their [00:41:17] alone, even without pulling in the
library. Even without pulling in Immutable.js, or even without pulling in Redux,
I think it's still worth considering immutability, single-direction data flow,
a more functional style to your application building, from step one. Because
if you do that, then if you later find out that, oh, if this piece of my app had
been immutable then I could reap that change detection benefit here and
solve this massive performance problem, it's a good thing you made that
architectural decision early, because doing that should be pretty easy to do.
Whereas if you had built your app from day one with mutation and two-way data
binding at its core and then you later find out, oh man, if only I could memoize
here. Crap, I need to re-architect my entire app. That would be really painful. That kind of leads into the second question. What about my existing app? If I have an existing app, should I fold these ideas in, and how do I even do that?

Charles: Especially on larger apps, where there's more complexity and it's going to have more overhead to put it in. If you're early, your costs are less. If you're highly invested in the application and there's a lot of stuff, it is costly to fold it in.

Lee: That's right. I think especially for something that's big and complicated, the most important thing is to figure out what problem you're solving. If your application's happy and its performance is reasonable, go work on something else. Go add a feature, go fix a bug. But maybe there's some part of your application that's critically performance intensive and it's just really slow to render, or whatever; there's a deep-rooted performance problem, or there's some other architectural problem that you can boil down to the fact that mutability at its core is the problem. Seeing a lot of race conditions is another variant. Another one that I haven't
mentioned yet that's worth mentioning is wanting to be able to do an undo
stack. I did A, then I did B, then I did C, and whoops, I didn't want to do C, go
back to B. Whoops, I didn't want to do B, go back to A. Doing that with immutable
data is really easy, because the data didn't change; you just keep the pointer
to the version of the world before you made the change. If you didn't like what
you ended up with, just go back to where you were before. It's extremely
inexpensive to do that. If you've ever had to build an undo system for a piece of
software before, it's pretty crazy to think about everything, like inverse
operations, and you have to worry about all the dependent effects that something
could have and make sure you account for those. It can get really complicated
really fast, and the first time that I saw undo implemented with immutable data,
my mind kind of melted out of my ears a little bit just because it was so amazing.
So ask whether you're faced with one of these problems that are most easily or
most notably solved by immutability, like undo stacks. If you're building a big,
complicated app and you've identified one of these problems, the trick is to go in
with the scalpel first. Try to figure out the smallest part of the app that you can
change to start folding these ideas in, because re-building your app from the
ground up with a new architecture is a harrowing task.
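
[The undo stack Lee describes falls out almost for free once state is immutable: "undo" is just keeping pointers to previous versions. A minimal plain-JavaScript sketch, with names invented for illustration:]

```javascript
// Undo with immutable snapshots: old states are never mutated, so keeping
// them around costs only a pointer per edit, not a copy or an inverse operation.
const history = [];
let state = { text: '' };

function update(newState) {
  history.push(state); // the old object is safe to keep: nothing ever mutates it
  state = newState;
}

function undo() {
  if (history.length > 0) state = history.pop();
}

update({ text: 'A' });
update({ text: 'AB' });
update({ text: 'ABC' });
undo(); // back to 'AB'
undo(); // back to 'A'
```

Compare this with the mutable version, where undo means writing and replaying an inverse operation for every possible edit.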

Charles: It’s so much fun.

Lee: It is fun.

Aaron: I got a question for you.

Lee: Sure.

Aaron: There are immutable operations on the built in JavaScript array, right?

Lee: There are.

Aaron: Instead of using Immutable.js, why would I not want to just use the built-in array and its immutable operations?

Lee: Great question. We talked a little bit about this at the beginning of the talk. The main one is the performance cost. Think about one of the immutable operations on a JavaScript array: slice. If I want to take a chunk of that array, I can slice it. It doesn't do anything to the original array; I can then take my slice and change it without affecting the original array. That is in fact an immutable
operation, but slice costs you the creation of a new array, and then it
costs you copying in every item of that array. One of the techniques
that I've seen used before for implementing immutability on top of the JavaScript
array is to first call slice on the whole thing, so you get a slice of the whole
array, and that copies the whole thing, and then do whatever you wanted to do with it.
Whatever, I wanted to update index 3, go do that. Or I wanted to reverse it, go
do that. But of course every time you call slice, you're copying the entire
array: you're going through every item in that array and copying it
over from the original array. If you have an array of 10 things, that's
probably cheap. That's fine, that's going to be fast. But if you have an array of a
thousand things, or a million things, that's going to take a while. It's going
to add up, and that's the inherent difference between a naïve approach to
immutability with existing data structures versus a new data structure designed
to solve the problem.
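
[The naïve copy-on-write technique Lee describes looks like this in plain JavaScript; `setAt` is an illustrative helper name, not an array method:]

```javascript
// Naïve immutable update via slice(): correct, but every update copies the
// whole array, so a single change is O(n) in the array's length.
function setAt(arr, index, value) {
  const copy = arr.slice(); // copies every element: cheap for 10, costly for a million
  copy[index] = value;
  return copy;
}

const original = [1, 2, 3, 4];
const updated = setAt(original, 2, 99);
// `original` is untouched; `updated` is a full, independent copy.
```

Immutable.js avoids this full copy by sharing most of the internal tree between the old and new versions, which is the point of the earlier tree-structure discussion.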

Aaron: If it's an array of objects, isn't it the same thing? Does it clone each object, or does it simply create a second pointer to the same object?

Lee: It simply creates a second pointer to the same object, and the same goes for Immutable.js collections: when you get a new version of the list back after your change applies, every item in that list is just a reference to the same thing.

Aaron: Now with Immutable.js, you also have an immutable object that corresponds with the immutable list, right?

Lee: Yup.

Aaron: If you're using immutable objects, then say all I really wanted to do is change the name of one of the users in the list. Then I've created a new copy of that user object, in addition to a whole new list that contains the new copy of the user object.

Lee: That's correct. You go all the way down in your data structure to the thing that you want to change, you change that thing so you get a new version of it back, and then you've got a new version of that thing that you need to put into the containing list, or whatever, so you also have to make a copy of that containing list which contains the new thing. Now you have a new version of the list with the new version of your user object, whose name changed, within it.

Aaron: Does that end up being two operations using Immutable.js? Do you have to do those two things separately, or is it done together?

Lee: There is one operation that you can do, because that's a pretty common one. There's an operation called setIn, where you give it a key path. So you say: at index 3, and then at the key name, I would like this value, Fred. Then you get back a collection where the object at index 3 has the name Fred, your original collection is untouched, and that all happens in one call.

Aaron: Does that create a new list?

Lee: Yep. It creates a new version of everything that it touches along the way. So say you had an array of user objects, and the user object itself had an array of pets, and you wanted to change, say, my third user's second pet's name to something. Okay, we'd have to make a new pet, then a new list of pets, then a new user since it has new pets, and then a new list of users.
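
[Here is a hand-rolled sketch of what Immutable.js's `setIn(keyPath, value)` does conceptually, written against plain objects and arrays rather than Immutable collections. The `setIn` helper below is invented for illustration; the real library's version is more sophisticated and operates on its own tree-backed structures.]

```javascript
// Copy every container along the key path; share everything off the path
// by reference. That's the "new version of everything it touches" idea.
function setIn(obj, keyPath, value) {
  if (keyPath.length === 0) return value;
  const [key, ...rest] = keyPath;
  const copy = Array.isArray(obj) ? obj.slice() : { ...obj };
  copy[key] = setIn(obj[key], rest, value);
  return copy;
}

const users = [{ name: 'Ann' }, { name: 'Bob' }];
const renamed = setIn(users, [1, 'name'], 'Fred');
// `users` is untouched, and users[0] is shared between both versions.
```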

Aaron: But if I'm using immutable operations on the built-in array, then what? How does that get complicated?

Lee: If you're just using built-in objects and arrays in JavaScript, that's definitely not one line of code. First you've got to find the thing that you want, copy it, make the edit, then you have to loop around and figure out where that was originally, copy that containing thing, make the edit there, and loop all the way back up. You could probably write a function that did that for you to make it one line of code; that's all a library is, making complicated stuff turn into one line of code. You would end up copying
everything. Everything along that path ends up getting copied in the naïve
JavaScript approach.
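
[For comparison, the manual plain-JavaScript version of the nested "rename my second user's second pet" update from a moment ago. The data is made up; the point is that every container on the path gets copied by hand, which is why this is several lines instead of one key-path call:]

```javascript
const users = [
  { name: 'Ann', pets: [{ name: 'Rex' }] },
  { name: 'Bob', pets: [{ name: 'Mo' }, { name: 'Fifi' }] },
];

// Copy only what's on the path; everything else is shared by reference.
const newPet = { ...users[1].pets[1], name: 'Fluffy' };
const newPets = users[1].pets.map((p, i) => (i === 1 ? newPet : p));
const newUser = { ...users[1], pets: newPets };
const newUsers = users.map((u, i) => (i === 1 ? newUser : u));
// `users` is completely unchanged; newUsers[0] is the same object as users[0].
```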

Aaron: Immutable.js really gives you two big payoffs. One would be the performance of doing these immutable operations, since the built-in stuff just wasn't built with this in mind, and the second would be the niceness of the API.

Lee: Yup. You get a lot of really nice API tools that bring in a lot of ideas from the functional programming world, which has been dealing with immutable data structures for a very long time. It's come up with lots of really awesome tools and techniques for working with them, but they're rephrased in terms of method names that JavaScript engineers are more familiar with.

Aaron: Gotcha. Cool. I think that’s all my questions.

Lee: Awesome. Hope I didn’t ramble on for way too long.

Aaron: I was just [00:51:47] the whole time; those were really good points.

Joe: What would you say the learning curve is like for picking up Immutable.js? Say that you have decent, reasonable JavaScript skills, you understand immutable operations well enough, and you've decided, I want to try this whole immutable thing. What's the learning curve like?

Lee: I think there are two jumps on the learning curve. For the most part I've found it's pretty easy, because most people don't have to dig in under the hood and figure out what these things are actually doing; they're just using them. The first jump is just grappling with the idea of immutability in the first place, understanding, oh great, I just have to continue to check that initial instinct to mutate data every time I encounter this. Right, I can't do that. Okay, how do I restructure things again to make sure that this is happening immutably? After a little bit that becomes second nature; you get over that pretty easily.

                    The second one is more about thinking architecturally, making sure that your entire app has single-directional data flow. It's much easier to understand, oh right, I have a list, I want to push onto it, that returns a new list, I've got to make sure that I use that. That's pretty easy for people to wrap their heads around. Getting to the point where you're comfortable with not just Flux and Redux but
understanding why Flux and Redux work the way they do, and not just using
those libraries but using the architectural techniques that they provide
everywhere in your application, that's really the secret to getting the most
out of immutable data structures, and that's probably the second learning curve.

                    The third learning curve is not for the faint of heart, which is actually diving into the code base and contributing to it, making performance improvements, stuff like that. Very few people end up there, though. And that's fine; that's how it's supposed to be.

Aaron: Right. What about documentation? How good is the documentation that exists for Immutable.js, and are there other learning materials out there that people should know about?

Lee: There's a bunch of learning material out there that you can go googling for, most of it in the form of write-ups and blogs that I think are really helpful. The Immutable.js docs themselves are reference docs. They'll go through all the methods: what they do, how
to use them, the arguments you can provide, stuff like that. The
official docs have some instances of example usage, but not as much as I'd like,
so one area where I'm hoping to see our official docs improve is more of,
hey, I want to do ABC, how do I do that, with a lot of usage
examples. That's where I think blog examples have filled in the blanks, but
we'll get there.

Joe: Awesome. Finally, I do have one more question for you.

Lee: Sure.

Joe: My wife told me that I should wear more bow ties. Is that something you can help me out with?

Lee: Yeah, bow ties are cool. Anyone who's seen me do a conference talk before has probably noticed that I usually wear a bow tie when I do talks, which started almost as a joke. I actually love bow ties. I wear bow ties to every formal event, and one of my co-workers asked me before I did one of my first conference talks, back in 2014 or so, whether I would wear a bow tie at my talk, since he always saw me wear bow ties at events and parties. And I was like, ha! That would be funny. And he's like, yeah, haha, it'd be funny. Actually, it would be a cool idea, I'll just do it. And then I got so many reactions out of it. People were like, I love your bow tie, this is great, everybody wears hoodies at these conferences, it's cool to see someone dress up a little bit. So I was like, that was kind of cool. Now every time I do a talk, I wear one of my bow ties. And of course I really like bow ties; I collect a bunch of them to wear at more events than just conference talks. One of my favorite shops is called Knotty Co. They have lots and lots of really cool bow ties. And I have some others, but I have to keep them secret.

Charles: Got to keep them secret? Why?

Lee: Because all your listeners would buy them out.

Joe: It’s good. It’s like when you know where to catch the good fish, you don’t tell anyone. I get it, it makes sense.

Lee: But I gave you my best source, Knotty Co is definitely the best one.

Charles: Alright. Should we go and do some picks? I’m going to make Joe start.

Joe: You want me to start?

Charles: Yes.

Aaron: Can I go second?

Charles: Sure.

Joe: You go first, Chuck.

Charles: You want me to go first?

Aaron: I got a pick. My pick is a book called The Contractor. It's book one in the Contractor series, and it was a cool read. It's about Earth being invaded and there's just [00:57:13] around to save people, and it's kind of a weird magic story. It was really cool. I really liked it; I couldn't stop once I started. The Contractor, it's a good book. That's my pick.

Charles: Alright, I'll go ahead and jump in with some picks. I have a new
system I'm trying out, and if you're running a business and you're looking for a
CRM, so far it's pretty promising. It's called 17 Hats, and I'll put
a link in the show notes with a referral code. Basically, it allows you to
keep track of relationships with people, and workflows, and things like that.
For me it's particularly helpful because I'm currently using a Trello board to
keep track of progress and who I need to keep in touch with for sponsorships
and speaking at the remote conferences and stuff like that. With something like
this, I just have a workflow: hey, I reached out to this person about
speaking, I just have them in the system, and then the workflow reminds me,
okay, it's time to follow up again, or it's time to send an invoice, or it's time
to talk to [00:58:19], it's time to get their information so they can speak, or
what have you. Anyway, that's the kind of assistance I've been looking
for, so I'm definitely checking that out.

                    And then the other system that I've been using, including with some of my sub-contractors for some of the podcast stuff, is Asana, and there are a couple of things I really like about it. One is that I've actually created template projects for some of the checklists I have, and then I just copy the project and it's all set up for the next remote conference, so I can just work down the list; that's really, really handy. The other bit of info is that it integrates with Zapier, and Slack, so it does a whole bunch of stuff that way kind of automatically. It also integrates with things like
calendars and so on, but I haven't done much with that yet. However,
since it integrates with Zapier, I can tie it to pretty much anything else.
I'm really, really digging it and it's working really well. Joe, do you have
some picks for us?

Joe: So I would like to pick a video game I've been playing recently. I picked up Call of Duty: Infinite Warfare in the Steam Black Friday week sale, and it's been awesome. It's been an amazing game, but what's amazing is the single player. I understand the multiplayer is great, but I don't even bother playing multiplayer. For some reason, in the couple of these Call of Duty games I've played, the single-player campaigns are just so fantastic, like they should be movies. The story is so interesting and so well written. I've really enjoyed that, and I highly recommend it: Call of Duty: Infinite Warfare. And I also want to pick Lego Star Wars sets. Not a video game, I'm talking about actually physically going to the store, buying the set, and putting it together. My son's 12, he loves Star Wars, I love Star Wars, so recently I bought quite a few of these Lego Star Wars sets. I even bought one for my daughter because she was way [01:00:23]. We put them together, and it's been really fun; I do it with my kids and I've really enjoyed the time with them. So that's going to be my second and final pick: Lego
Star Wars sets.

Charles: Alrighty. Lee, what are your picks?

Lee: I already picked Knotty Co for your bow tie needs. Let's see, another one is Advent of Code. I don't know if you've seen this before; it's a puzzle site, but it's like an advent calendar, so it works its way up to Christmas Day, and every day there are a couple of puzzles to solve. It's actually hilarious: each puzzle looks like a tiny computer science puzzle that you should be able to solve in a half hour or so, but they all have this storyline where Santa's core technology has been stolen by the Easter Bunny and you're sent out on a mission to steal it back. It keeps throwing stuff in your way that you have to solve. It's very cute, it's funny, and the puzzles are fast enough to solve that they don't take very long, but long enough that they hold your interest for a while. I've been through a couple of them and they're a lot of fun.

Charles: If people want to follow up with you, see what you’re working on, follow you on Twitter, reach your blog, whatever. What do they do?

Lee: Definitely follow me on Twitter. I tweet reasonably often and I usually tweet about the stuff that I'm working on. Follow me on Twitter at @LeeBee.

Charles: Alright. We’ll go ahead and wrap this show up. Thank you for coming Lee.

Joe: Thanks Lee.

Lee: Yeah. My pleasure.