AJ: Yo, yo, yo, coming at you live from Pleasant Grove.
CHUCK: Aimee Knight.
CHUCK: I’m Charles Max Wood from DevChat.TV. I want to just quickly mention that I’m doing another remote conference on Angular. So if you’re interested in that, go to AngularRemoteConf.com. Call for Proposals is open and early bird tickets are also available until the 25th. So go check it out. We have two special guests this week. We have Forrest Norvell.
FORREST: Yep, that’s me. I’m coming at everybody from the outside lands of San Francisco.
CHUCK: And Rebecca Turner.
REBECCA: Hi there. I’m here from Cambridge, Massachusetts.
CHUCK: Awesome. So rumor is that npm3 is in danger of coming out or is out?
FORREST: Yeah, it is in Beta right now. We will get into the whys and the wherefores of why it’s not in general release in just a little bit, but the short version of that story is that we’re still working out some remaining blocking issues. And we have a lot of interesting new stuff in it to talk about, which my colleague Rebecca will mostly be addressing.
I thought I’d start by just talking about why npm3 exists really briefly. The npm installer is the core of the product, and it’s grown organically over the years, and it’s the source of a lot of the issues that people have with npm. Rebecca has a good story of how we changed it to fix that.
But the biggest change that we needed to make was that before, the installer was doing stuff as it went. So you’d pull in some package that needed to install, it would read the package.json files, and it’d be like, “Okay, I need to pull down some more stuff.” So it’d be doing this mixture of pulling data off the registry, unpacking tarballs, setting up symbolic links, all at the same time. So there were lots of weird race conditions. There was a lot of stuff going on.
Basically, what we needed was a way to build the whole dependency tree and figure out what needed to happen before doing it, so that we’d have a better idea of what needed to happen. The biggest feature around this is that before, when you ran the npm installer in a previously installed tree, all npm would basically do is look and see if the top level of all packages looked vaguely plausible according to the package.json file. And it’d be like, “Whoa! All done! I don’t have any more work to do.” Even if stuff had changed further down the tree, or there were other inconsistencies, like packages missing further down the tree, you’d basically have to blow away your node_modules folder and start over again if you got into an inconsistent state.
There’s a bunch more stuff that came up in the process of reworking things for npm3, and I’ll let Rebecca get to that. But the basic motivator was we wanted something that was more consistent and robust when it came to actually getting the things you want onto disk consistently every time. So I’ll pass it off to Rebecca now.
REBECCA: Yeah, so like Forrest says, moving away from npm2’s giant tangle of execution, npm3 is much more disciplined and it does one thing at a time. We were calling it the multi-stage installer for a long time, and the reason for that is because it scans your folder and figures out what you currently have. And then it figures out what it wants you to have. It goes through and resolves all your dependencies without actually installing anything. And it goes through each install step separately.
This is where one of the big wins is. We’ve always had the postinstall lifecycle that runs after a module installs, and previously that would run, and it usually worked, but there was no guarantee that all of your dependencies had completed installing by the time your postinstall lifecycle ran. And so if you needed those dependencies for your script to work, you couldn’t rely on it. It didn’t run consistently. Because npm3 runs everything in a specific order, and because it can do things like sort the order it’s going to run install lifecycles in based on dependencies, it can guarantee that unless you have circular dependencies, all of your dependencies will be installed before we run your lifecycles. And so that’s opened things up. It’s a feature that was always there, but it was a feature that wasn’t safe to use until we had this.
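To make that concrete, here’s a hypothetical package.json (the package and helper names are made up for illustration) that relies on its dependencies being present at postinstall time:

```json
{
  "name": "my-widget",
  "version": "1.0.0",
  "dependencies": {
    "some-build-helper": "^2.0.0"
  },
  "scripts": {
    "postinstall": "node build.js"
  }
}
```

If build.js requires some-build-helper, npm3 guarantees the dependency is fully installed before the postinstall script runs (absent circular dependencies); under npm2 that ordering was a matter of luck.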
CHUCK: So can I just clarify really quickly? So essentially, what it’s doing is it’s building a dependency tree and then figuring out the best way to install it and then installing it that way?
REBECCA: That’s right. I mean, it builds up a tree of what you’ve got, and then it clones that and mutates it based on the package.json or shrinkwrap, and so it produces this version of what it wants your tree to look like. Then it produces a list of actions to mutate the first tree into the second tree. Then it just runs through those actions in a deterministic order.
CHUCK: That’s interesting.
REBECCA: So that makes it much more predictable, and it’s vastly easier to debug now. It can print out its state. If you turn on the silly level of debugging, which is the level that we use, you get things like: here’s the tree before, here’s the tree after, and here’s all the actions we’re going to take to change it from one to the other. So for us, there are whole classes of bugs that before we could only attack by trial and error that we can now attack methodically.
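The idea of diffing the current tree against the ideal tree can be sketched roughly like this. This is a simplified illustration, not npm’s actual code; real trees are nested and versions are SemVer ranges, but the shape of the computation is the same:

```javascript
// Sketch: compare the tree you have with the tree you want,
// and emit a deterministic list of actions to get from one to the other.
// Trees here are flattened to simple name -> version maps for clarity.
function diffTrees(currentTree, idealTree) {
  const actions = [];
  for (const name of Object.keys(idealTree)) {
    if (!(name in currentTree)) {
      // In the ideal tree but not on disk: install it.
      actions.push({ action: 'add', name, version: idealTree[name] });
    } else if (currentTree[name] !== idealTree[name]) {
      // On disk at the wrong version: replace it.
      actions.push({ action: 'update', name, version: idealTree[name] });
    }
  }
  for (const name of Object.keys(currentTree)) {
    if (!(name in idealTree)) {
      // On disk but no longer wanted: remove it.
      actions.push({ action: 'remove', name });
    }
  }
  // Sort so the action list is deterministic regardless of input order.
  actions.sort((a, b) => a.name.localeCompare(b.name));
  return actions;
}

const current = { lodash: '3.0.0', mkdirp: '0.5.0' };
const ideal = { lodash: '3.10.0', rimraf: '2.4.0' };
console.log(diffTrees(current, ideal));
```

Because the whole plan exists before anything touches the disk, it can be logged up front, which is what makes the silly-level debugging output Rebecca describes possible.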
CHUCK: So besides your own debugging, are there user issues this solves?
REBECCA: Well, like I said, the lifecycle ordering is really important, because before, you could use that until it stopped working for you, and then there was nothing we could do for you.
REBECCA: So I think that was really important. Interestingly, this building-it-out-first approach is how we ended up producing a flat tree, or a mostly flat tree, now. That came out of saying we’d really like to deduplicate modules when we’re building this tree. We both wanted to have a maximally flat tree and we wanted deduplication, and it turns out they go really well together. It’s actually much easier to do the deduplication if you already have a maximally flat tree. So those didn’t come out of the multi-stage part of it, but they’ve been a key part of npm3 as long as we’ve been working on it.
AJ: So to the point of flattening, I’ve noticed that there are some people that do bad things.
AJ: They expose modules and singletons with variables that can change internally. I imagine there aren’t too many things out there that depend on these modules but passport is one of them and lots of stuff depends on that, right?
AJ: What do you see happening with stuff like that? Is there going to be breakage or are people going to clean it up? What’s going to happen?
REBECCA: That’s one of the main reasons that it’s in Beta is that we want to get people an opportunity to clean these things up and to reach out to projects that are affected by the changes that this brings.
The thing is that any module that was relying on that singleton pattern could already break, because if you ran npm dedupe, it could produce the same kinds of scenarios that a flat tree can produce. It’s just that the flat tree is going to produce them a lot more often. And there are ways of doing a singleton that don’t have that problem, so it’s actually not a difficult change to introduce into a project like that. So I think they’ll get fixed.
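For illustration (module and function names here are hypothetical, not from passport): the fragile version keeps mutable state at a module’s top level and assumes the require cache will hand every caller the same copy. If two copies of the module end up in the tree, callers can silently talk to different copies of that state. One safer shape is to make the state explicit with a factory, so it doesn’t matter how many copies of the module are installed:

```javascript
// Fragile shape (shown as comments): ambient module-level state.
//   const strategies = {};
//   exports.use = (name, fn) => { strategies[name] = fn; };
//   exports.get = (name) => strategies[name];

// Safer shape: the state lives in an object the application creates
// once and passes around explicitly, instead of in the module itself.
function createRegistry() {
  const strategies = {};
  return {
    use(name, fn) { strategies[name] = fn; },
    get(name) { return strategies[name]; }
  };
}

// The application owns the single instance.
const registry = createRegistry();
registry.use('local', () => 'authenticated');
console.log(registry.get('local')());
```

With the factory shape, duplicate copies of the module in node_modules are harmless, because nobody depends on the module object itself being unique.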
FORREST: Oh, that touches on another one of the major changes that comes in with npm3 which is a change that we made after an epically long issue thread about a year and a quarter ago about the ongoing problem of Peer Dependencies.
Peer Dependencies were originally envisioned as a way to create a plugin hosting mechanism, which raises the question of why they were ever called Peer Dependencies, because the name doesn’t really describe what they do very well. It’s just part of the enduring mystery that is Peer Dependencies altogether. But we basically realized that Peer Dependencies cause the same kind of dependency hell problems that npm was originally intended to address by allowing each application to have its own package tree.
So we made two key decisions. One of which was that Peer Dependencies will never cause a package to be installed with npm3. It also takes the error message you used to get when you had an irreconcilable set of Peer Dependency conflicts, because you had 18 Grunt plugins and 16 of them were all mostly compatible but two of them wanted things that just couldn’t be satisfied, and turns that error into a warning. Instead of putting the burden on the people who just want to use these plugin hosts and their plugins, it makes that the problem of the person who is actually maintaining the plugin or the plugin host.
CHUCK: Can I stop you real quick?
CHUCK: So Peer Dependencies, what exactly are they because you said they’re kind of plugins but …
FORREST: So Peer Dependencies – the feature is merely a versioning constraint, right? So say I am grunt-optipng; I want to have this thing to fix up the PNG images in my website, and it needs Grunt to run. So the plugin says I have a Peer Dependency on Grunt, which is to say I need Grunt installed in my package tree in order for this plugin to run, right? It’s not self-sufficient on its own. And the idea is that you have a whole bunch of these plugins, and they’re all like, I need Grunt version 0.4 in order to do my job.
Then previously what would happen is you didn’t need to depend on Grunt directly. npm would be like, oh, there’s a Peer Dependency relationship for this plugin, so I need to go ahead and install Grunt at the same level as the plugin. And like I said, that caused problems, because you would have a bunch of these plugins, and just due to the different ways that people use SemVer, or things getting out of date, you’d get into situations where you as a user would need, let’s just say, three plugins, and the way that the Peer Dependencies were expressed using SemVer meant that there was no way to actually satisfy the constraints.
FORREST: So again, the feature is used so that people can have a bunch of plugins for things like Grunt or Gulp or Karma. The way that it was implemented, it was this version check that also did an implicit install. And because of the really awkward combination of having the thing automatically installed sometimes, and then also grinding to a halt when all the plugin Peer Dependency versions didn’t match up, what ended up happening was that you just couldn’t do anything about this as a user. You got stuck, and that created a lot of support problems, not just for us on the npm team but also for Grunt and Gulp and the other people who are maintaining all of these plugins.
So by changing that to a warning, what we’re really saying is two things. One is it makes it easier for you, as a person using this stuff, to figure out the various SemVer conflicts and try to come up with the version of Grunt or whatever that will actually satisfy them. Also, more often than not, the likelihood is that the SemVer of the plugins is probably overly strict, and nothing bad is going to happen if you actually go outside the envelope a little bit here. So you’ll still get the warning saying, hey, we can’t satisfy this constraint, but the odds are good that you will still be able to run whatever lifecycle scripts or do whatever work you need to do with that system and that plugin architecture. Does that make sense?
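As a concrete sketch (the plugin name is hypothetical), a plugin declares the relationship like this in its package.json:

```json
{
  "name": "grunt-some-plugin",
  "version": "0.1.0",
  "peerDependencies": {
    "grunt": "~0.4.0"
  }
}
```

Under npm2, this would implicitly install Grunt next to the plugin and hard-fail on irreconcilable conflicts; under npm3, it installs nothing and only warns if the peer constraint isn’t satisfied by what you’ve installed yourself.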
CHUCK: Yes, so essentially, instead of saying, “I can’t give you Grunt versions 1, 2, and 3,” it now just says, “We installed version 3. Some of these said they want version 1, and it might not work.”
FORREST: Yeah. And to go back to AJ’s original question, when you have this kind of singleton access pattern, quite frequently that’s for things like Ember or Angular or React, which actually quite frequently have their own kind of dependency management pattern. Either they’re using something like Babel to transpile, or, like Angular, they basically have their own dependency injection pattern that uses globals. It doesn’t really touch on the CommonJS pattern used by node modules at all.
So really, all they’re doing is expressing those constraints between plugins altogether. I think more often than not, that kind of singleton pattern, even though it’s an antipattern, will work just as well now that we have this mostly flat tree. In the cases where it doesn’t, the same thing obtains for both Peer Dependencies and the larger ecosystems that are using this kind of singleton pattern: what we want to do is find out about that now and see if they can come up with a mitigation strategy on their end, or if there’s just something that we have to do to enable those ecosystems to continue to work without breaking all the users.
REBECCA: I’m not quite sure what you mean by the first point. The rewrite changed it structurally, but the tools that we’re using haven’t changed. But there were architectural changes that came from making it step by step.
Probably the best example of this is deduplication, which then in turn led to the maximally flat trees, because that was the easiest way to implement it. That was something that had been sitting around as an issue: Windows users would really like shorter paths, since Windows can’t deal with paths beyond 255 characters. We knew we wanted to address it, but it wasn’t a part of the original plan. It came out as we were working on it that it was just going to be the most straightforward way to approach the project.
Forrest, can you think of anything else that has come out of the architectural changes?
FORREST: Not so much architectural changes but we did change our coding style a little bit. To go back to your original question Aimee, npm is pretty much inextricably tied to the development of node as a platform. There is a fun story that I think we’ve told actually on this podcast before that npm has never actually stood for node package manager and in fact it doesn’t really stand for package manager at all. But the two projects grew up together.
So there’s kind of two pieces to that. Rebecca’s put a lot of work, as she mentioned, into making the code base easier to understand and debug. But we also took that opportunity to solve a major procedural point for the project, which was that code reviews were spending a lot of time focusing on coding style and other things that are frankly not a super great use of the team’s time.
So we made a decision to switch to using the probably at least slightly ironically named Standard style checker. So that means that basically we just ensure that all the code makes it through there without raising any issues, and we made that part of our process. Partially this was done to make things easier for the CLI team, which is Rebecca, me, and also Kat Marchan, who’s the most recent person to join. But part of it was also this feeling I had that it would be easier for people to contribute to the project, which is one of our big priorities, making it as easy as possible for outsiders to get involved, if they knew what to expect in terms of coding style and coding standards.
Architecturally, the most interesting thing to know is I think originally Rebecca and I both thought that this project was probably going to take about two or three months.
REBECCA: That is true.
FORREST: Isaac had some ideas about how the multi-stage installer was supposed to work. I had spent some time thinking about what was going to happen. We very purposefully did not dump all that information on Rebecca, because she was fairly new to the company at the time. We wanted to give her an opportunity to figure this project out on her own.
I think we realized along the way that there were a lot of codified assumptions in there, and there was more to it than we thought. I think the entire process has been us figuring out that the installer’s actually a really complicated thing.
I see that Aimee just dropped a link in the chat about the presentation that our colleague, CJ Silverio who runs the registry team did. And that is actually a very interesting story.
The registry team and the CLI team work more or less independently from each other. The back end work is more or less isolated from what Rebecca deals with on a day to day basis. The only communication layer is an API that is under-documented but well understood, which works through a package that npm uses called npm-registry-client.
So yes, the registry itself has gone through enormous changes in the last year. In fact, I think basically the entire architecture has now been at least radically modified, if not completely replaced, twice. I am not the best placed person to talk about those changes, but the interesting thing there is that the CLI has had to change very little. And in fact, several of the times when there have been changes made to the CLI, they’ve either been just to add support for new features on the registry, or they have been made to deal with behaviors that we all came to agree, after talking it over, were more or less regressions.
So I think that’s actually a pretty interesting engineering success for the team, that we can be making these huge changes in the backend without having to affect the frontend and vice-versa. But that’s actually how it’s gone. I have not had to do too much in managing the process to make sure that we don’t step on each other. The two teams have worked pretty smoothly and more or less independently.
Although there is some work going on, which I’ll touch on at the end of the conversation, about where we go next with the CLI that does impact that. But yeah, the work that CJ has done has actually had very little effect on Rebecca’s work.
REBECCA: One of the interesting things about how the scope of npm3 changed is that it started as “let’s rewrite the installer.” It turns out, when you dig into the deep internals of npm, it is largely built on itself. So changing one piece of it internally ended up changing a bunch of other things. And so a bunch of other commands got rewritten as part of the installer rewrite.
Like, the npm shrinkwrap command is an entirely new set of code. npm ls had to be changed substantially, npm rm of course, and anything that modifies the tree; that was obvious from the start. But a lot of this came out because flattening the tree seemed like the right approach to take, and since we knew we needed it for Windows, we went with it. That impacted npm itself, I think, more than it impacted our users.
CHUCK: I want to bring up another thing that you mentioned, both in the emails that we sent back and forth before the show and today during the show, and that’s shrinkwrapping. Now, when I use npm, I generally just do the npm install, magic ensues, I get what I need, and it just goes on. What is shrinkwrapping and what changed?
REBECCA: So shrinkwrapping allows you to lock in specific versions. Your package.json would continue to have a SemVer constraint, but you’d have an additional file called npm-shrinkwrap.json, and you don’t have to edit that file. That file is generated by the npm shrinkwrap command. And it has the exact versions not only of your dependencies but of all of the things that your dependencies rely on, all the way down the tree. In that way you can get a repeatable install. Somebody else running npm install with a shrinkwrap file will get what you had. A lot of teams like this for collaboration. Sometimes it’s used for distribution.
The big change with npm3 is that if you have a shrinkwrap file and you run npm install --save, it’ll update the shrinkwrap file. Previously, it would only update your package.json, which was confusing, because the way npm implements shrinkwrap, it uses the shrinkwrap in preference to the package.json. So if you did an npm install --save, it would add the package to package.json, and then that would be ignored for anyone else using your package.
So having it update the shrinkwrap with --save was important. It also does it when you do update, and when you do rm. And another change is that npm dedupe --save will now update your shrinkwrap as well.
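For reference, a generated npm-shrinkwrap.json looks roughly like this (package name and versions are made up for illustration):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": {
      "version": "3.10.1",
      "from": "lodash@^3.0.0",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz"
    }
  }
}
```

Each entry pins the exact resolved version (and where it came from), and nested dependencies get their own "dependencies" blocks all the way down, which is what makes installs repeatable.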
FORREST: There’s a tiny little elephant in the room like a miniature elephant in the room here which is that if you’re coming from another ecosystem like Rails, this is a feature that you are accustomed to having out of the box, right? If you are coming out of Rails, you’ve got a Gemfile.lock file that is created as part of setting up your vendor plugins and the various other components of your Rails app.
So you don’t need to deal with all the cumbersome manual bookkeeping of updating your shrinkwrap file. It basically ends up working, in effect, very similarly to how a lockfile works with Rails. So if you bring down the dependency, either through installation or through cloning it from a Git repo, and you run npm install, it just notices that the shrinkwrap file is there and makes sure that the exact versions of all those dependencies are installed just the way you want.
That said, to keep addressing the small elephant in the room, I am determined to let npm find its own path through this stuff. The way people use SemVer is different in npm than in the Rails community; the way that these tools have developed has been parallel. There is definitely some convergence happening here, but I think that it’s important that we look at the usage patterns.
A lot of people are like you, Charles. A lot of people are just using the SemVer in their package.json files to determine what gets installed, and that’s kind of the way npm was designed to work. So we’re moving towards having it work more the way people expect when they do opt for that feature. But we don’t want to, say, make it the default or require you to use shrinkwrap all the time.
CHUCK: Yeah, that said most of the code I write is Rails so.
FORREST: Yeah, well I’m surprised you’re not more on top of shrinkwrap then.
CHUCK: I know. Well, you explained it and I was like, yeah, I like that feature. [Laughing]
FORREST: Well it works better in npm3 so give it a shot.
CHUCK: All right, will do. Are there other things that you’re putting in to npm3 that are going to change the way that people use it or the way the package authors write stuff?
REBECCA: There’s one other breaking change. It’s an obscure enough feature that I would be surprised if anyone has heard of it, which is that you could lock down what versions of node your package would be installable on, by setting engineStrict in your package.json and then setting your engines to, say, node 0.12. It’s no longer a strict requirement. It now warns if you don’t have a matching engine, but it won’t refuse to install.
The way it was implemented was confusing, because npm would simply pretend that versions of a module that didn’t have a matching engine requirement didn’t exist. And so if none of them matched, it would be like, yeah, there are no versions released of that module.
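The feature in question looks like this in a package.json (name and version are hypothetical):

```json
{
  "name": "my-package",
  "version": "1.0.0",
  "engines": {
    "node": ">=0.12"
  },
  "engineStrict": true
}
```

In npm3, installing this package on a non-matching node version produces a warning instead of making the package effectively invisible to the resolver.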
FORREST: There was really one very important anti-breaking change in npm3. There’s this really common situation, especially on OS X, where if you used the installer that you got from nodejs.org and then later tried to upgrade npm yourself, it would fail because of the way permissions were set up. If you didn’t remember to use sudo, it would make it look like npm had just deleted itself in the process of trying to upgrade itself. We fixed that. That doesn’t happen anymore. We have changed how the permissions check works so that it is now far less likely that you will inadvertently cause npm to disappear by trying to upgrade it. I know that it probably seems fairly comical that we even have to mention that, but it’s something that has been on my personal list of clown-shoes things that needed to change in npm pretty much since before I got here.
npm the installer does a lot of stuff and has a lot of special purpose logic for handling things like permissions that is not necessarily immediately obvious to users. So making this change actually did require some architectural changes around how and when it looked up the permissions of the places that it was installing to. If you don’t do that correctly, you can basically cause some really strange race conditions that are super unpleasant. So it was actually not a simple fix, which is cold comfort to all the people asking, why did it take you so long to fix this? But it’s fixed now, so let’s accentuate the positive.
AJ: Well, I don’t know if that’s an open bug in node, but it seems like they need to fix that there, because I really hate it when I install a new version of node or io.js and it tramples my permissions in /usr/local.
FORREST: Yeah, I mean, I hear you. The issue there is it’s not really clear what the expectations are, right? Those of us who have ever installed Homebrew or MacPorts or something similar know that one of the first things they have you do is change the permissions of /usr/local so that it’s owned by or writable by your regular user and not the root account. That’s not really a POSIXy thing, right?
The way that Apple’s installer framework… I hear what you’re saying. If you use a version manager like nave or nvm, you don’t have this issue, because it sets the permissions up for you correctly. And I’m sure there are at least two or three issues about this already open on both node and io.js. But we’ve done what we can to fix it on our end.
AJ: Well that’s good.
AIMEE: Is it worth mentioning anything in the Tiny Jewels section? I was a little bit excited to see about the temp folder because that’s been a problem that we’ve had at work.
FORREST: You want to talk about that, Rebecca? There’s a bunch of those little features that are pretty interesting.
REBECCA: So the temp folder one was something that came out while I was working on it. I was running the installer a lot, of course, while developing this. I’d run into problems and go look in the temp folder, and there’d be hundreds of folders there, because I don’t reboot my laptop. When would I reboot my laptop? Only when OS updates require it. So it’s not perfect yet, but it does exert a lot more effort to clean up the things that it puts in temp. The permissions thing is actually a fix that came out of the architecture, because now there’s an install stage, before it actually tries to install anything, where it checks all of your modules’ permissions in advance.
There’s npm dry run, which is something that people have been asking for, and that we’ve had a small level of interest in, for quite a while. It turns out it’s probably more useful to the npm developers than anyone else, but you can pass --dry-run to npm install or uninstall or any of the other installer-related commands, and it will go through the entire process and print out results as if it had installed things, without ever actually touching your file system to make changes.
One of the things that actually started in npm3 and got backported to npm2 was adding more repository shortcuts. Previously, you’ve always been able to do a shortcut of organization/repo to install stuff from GitHub, because that’s what most people use. One thing that npm3 brought was making it so that GitHub was no longer the only hosted repository service that npm knew about. So that made it so that you could do bitbucket:org/repo or gitlab:org/repo. And this extends further than just setting that as your dependency.
One of the other things that npm would do is intuit what your issues URL and your human-visible repo URL were from these URLs. It would look at the URL for GitHub and be able to figure out where the issues page was. And now it can do that for GitLab and Bitbucket as well. So I think that covers our tiny jewels.
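In practice, the shortcuts look like this in a dependencies block (the org, repo, and package names are placeholders):

```json
{
  "dependencies": {
    "pkg-from-github": "someorg/somerepo",
    "pkg-from-bitbucket": "bitbucket:someorg/somerepo",
    "pkg-from-gitlab": "gitlab:someorg/somerepo"
  }
}
```

The bare org/repo form continues to mean GitHub; the prefixed forms point npm at the other hosted services.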
AIMEE: Yep. I thought some of those things were worth mentioning.
CHUCK: I want to talk briefly about the rewrite that happened. I recognize that there’s a breaking change, so you needed to bump the version number. But why rewrite instead of rework the sections of npm? Or did I misunderstand the process that you went through with this?
REBECCA: The problem that we ran into is that it was such a deep change to the architecture of how npm installs run. The function that we could swap out was install. Deeper than that, the previous npm, as Forrest alluded to, had this self-recursive process where it essentially ran the installer at your top level, went through and found all your dependencies, and started downloads on all of them. Whenever one of those finished, it would conceptually change into that directory and then run the installer again for that new directory. It just kept on doing this until everything finished.
This was a straightforward way to write it, but of course that meant that everything was happening at once. There was no way to know when things would happen. It wasn’t something that could be halfway turned into a multi-stage process; it needed to stop calling back into itself. That was why we approached it as a rewrite. I mean, it was done as a rewrite with the old code next to the new code, copying across business logic where we found it. So it’s not like we threw the old code out, forgot about it, and went back to spec. It was deeply based on the old code.
FORREST: A lot of the work that Rebecca has done has been to find the architecture that was emergent, that wasn’t really explicit before, and break that down into a set of components that eventually we’re probably going to try to extract out into standalone components. It was an incremental process. This wasn’t a traditional ground-up rewrite; it was very much moving things from one place to the other. But there was a lot there that basically needed to be pulled down and built up again from very close to the ground.
AIMEE: So one final question that I was going to ask about, and I think it sounds like you’re interested in talking about it too, is this push where npm is looking at the front end now. It’s been, I think, an unwritten rule that we use npm for our node packages and Bower for our front end packages. But I know a lot of people, myself included, are moving to npm for that.
So is it more that this has been the goal of npm all along, to look at this space, or is it more something that’s being driven by the community?
FORREST: Both. The emphasis for this is definitely a lot of frustration that we see from people around how complicated their frontend tooling flows are. There are a lot of emerging standards around some of this stuff that just didn’t exist before. And they’re not really standards, because there are a lot of different ways to solve these problems, but there are definitely a few conventions.
Bower’s a valuable tool. A lot of people are using it. We aren’t really interested in crushing Bower beneath our iron heels. But we are interested in looking at the features that people really like about Bower and at least making it possible to use them with npm. So the first steps of that are in npm3. The mostly flat tree is actually good enough for a lot of cases for people using it to build and manage web apps but it’s not the end of the story.
We have a roadmap. I’ve dropped the link to it into our chat, but you can just go to the npm wiki on GitHub and look for the roadmap; it’s fairly easy to find. I’m trying to put into a more structured form our goals as far as making this a better tool for frontend developers. There are a lot of pieces to this.
One of the first ones that we’re going to tackle is making it easier to treat packages hosted on GitHub the same as packages on the registry. Bower users, and actually I think a surprisingly large number of npm users, don’t have a really clear understanding of the distinction between a package that’s hosted on GitHub and a package that’s been published to the npm registry. I am actually regularly surprised by the number of people who are themselves surprised by the fact that when you run npm install, it’s not just pulling everything down from GitHub. In fact, there is a chunk of infrastructure out there that’s responsible for handling 80-some million downloads of package tarballs a day.
So that said, a lot of people have workflows where they want to be able to publish something to GitHub. Maybe that’s how they work with their contract customers and they want to make sure they have private packages. We have added support for private packages on our own registry, and we’re also in the process of updating our organization and team support. But if you’re mostly a frontend developer, you may not be publishing that many packages to the registry; you’re going to be publishing your frontend work to GitHub, and you want to be able to use that.
One thing that we have decided is that we’re not interested in picking a winner. So if you are using webpack or Browserify to deal with your frontend dependencies now, then you should be able to continue to do so going forward. And it’s not just those two: if you’re using parcelify or atomify or any of the whole bunch of solutions that have sprouted up in this space, you can continue to do so. That also of course extends to tools like [inaudible] and Grunt and Yeoman. But we do want to make a smooth transition path out there. We do want to provide affordances to make it easier for you to basically just run an npm install and have not only whatever backend components you have ready for use, but also an installed package that has prepared whatever frontend assets you need to serve up for your application.
There are going to be a lot of pieces in between and around them, but those are basically the two biggest pieces of that: making sure that your workflow can move seamlessly from Bower to npm, and also making it much easier for you to use your frontend tooling with npm.
CHUCK: So I guess the other question I have is let’s say we’re using npm2 in projects currently, how do we transition to npm3?
REBECCA: There shouldn’t be anything that you have to do other than start using npm3. If you have an existing install, you can run npm dedupe with npm3 and it will give you as maximally flat a tree as it can make from what you had. It’s not guaranteed to be exactly what you would have gotten if you’d run npm install with npm3 without a node_modules folder, because npm dedupe doesn’t install new modules; it just moves around the stuff that you currently have in ways that remain compatible. But yeah, starting with npm3 is as easy as installing npm3 and giving it a try.
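A minimal sketch of that transition, assuming npm 3 is installed globally (the project directory here is hypothetical):

```shell
# Upgrade the global npm to the 3.x line
npm install -g npm@3

# Flatten an existing npm 2 tree in place; dedupe only rearranges
# what is already installed, it does not fetch anything new
cd my-project
npm dedupe

# For a tree identical to a fresh npm 3 install, start clean instead:
rm -rf node_modules && npm install
```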
CHUCK: That’s what I was hoping you were going to say.
FORREST: Yeah, another point to make there is installing npm3 is pretty easy. Just running npm install -g npm@3.x-latest will get you the most recent version; that one is a version behind the bleeding edge, so it’s had a chance to bake and get some eyes on it. And then you can also just do npm install -g npm@3.x-next if you want to get the absolutely most recently published version and shake that down.
We’ve gotten fantastic feedback from the community. I’ve actually been really gratified by the level of feedback that we’ve gotten from everybody. It’s been really helpful; even just the bug reports are very useful. There are definitely still some outstanding issues, which is why it is still in Beta, beyond allowing all the people whose cheese has been moved by the changes to peerDependencies and engineStrict to adjust. There’s also just the fact that periodically, for reasons that we’re still analyzing, the installer just kind of goes off to the moon and blows up with a RangeError because of some recursion thing that we haven’t quite nailed down yet. There are a few other issues like that, but at least the list is getting shorter over time. It’s largely getting shorter as quickly as it is because of the feedback from people who are kicking the tires and checking it out.
The flipside of all that is we’re still working out how to get npm3 packaged with io.js and Node.js. We’re talking to the new Node LTS project; there’s a lot of work going on to converge the fork back into the main version of Node. We’re also trying to figure out how we offer long-term stable support to users who need more robust life cycle management and can’t just be keeping up with the latest and greatest all the time. At this point, we’ve actually committed to supporting npm2 for at least another six to twelve months.
So what that means in practice is that some new features for working with the registry are going to continue to get rolled out to npm2 as well as npm3. Then of course bug fixes and any security-related issues will be fixed in npm2 and npm3 simultaneously, probably for the next year.
CHUCK: You said that it takes off and goes to the moon sometimes. Is that why it’s still in Beta?
FORREST: Yeah, that plus giving people an opportunity to validate the changes. We made a bunch of architectural changes in the process of sorting out the transition from 2 to 3, which almost certainly means that there are some breaking changes in there, but the change to peer dependencies is the biggest reason why there is a bump from 2 to 3.
Those bits of tooling are in very heavy use. Gulp and Grunt are some of the most popular packages on npm, and it’s very important that we make sure their maintainers are comfortable with the changes. Rebecca, I think you had said that by and large they’ve been pretty mellow about these changes. Do you want to talk about that?
REBECCA: Yeah, well the community has really been fantastic. I think that our cautious approach to npm3 with the Betas has helped a lot. The quality and tone of the issues people raised have been exceptionally good. So, very happy about all of that.
FORREST: And to answer the next question, as far as when it will leave Beta, that is actually fairly simple to answer: it’s when we are confident that the list of blockers that we have is either zero or low enough that we are ready to pull the trigger anyway. We have a weekly standing meeting where we talk about the state of the build. When we are confident that the actual serious blockers, the things that could cause serious problems for people running npm in CI or doing production work with it, have been addressed, then at that point we will change the [inaudible]. At that point, when people run their updates, they will get npm3, and we will start the process of getting npm3 incorporated into io.js and Node.
CHUCK: All right. We’ll let’s go ahead and do some picks. Before we do that I want to give a quick shout out to our Silver sponsors.
CHUCK: All right. Aimee, you have some picks for us?
AIMEE: I do. My first one is going to be a site called slacklist.info. There are a ton of different Slack channels out there that you can join, and this lists some of them; then you can go looking for others as well. Besides the one that we have at work, I’ve gone and joined a bunch of others. There’s one called CodeNewbie that is associated with the CodeNewbie podcast, and there’s a bunch of others. So that is my first pick.
Then my second two picks: I was getting ready for another podcast where we were talking about performance and also this concept of perceived performance. There are two different videos from Fluent Conf: one was the keynote from 2015, and the other was a talk given in 2014. I just thought both of them gave a really interesting perspective and covered some thought processes that we don’t usually think about. So those are my other two picks.
That is it for me this week.
CHUCK: All right. AJ, you have some picks?
AJ: I want to pick Subsistence Farming because today is one of those days when I just wish computers didn’t exist.
We all have these days, don’t we?
I’ve got a couple of picks. The first one was I was interviewed on a podcast called Developer on Fire and I had a great discussion there with Dave Rael. So I’m going to pick that. It was a lot of fun.
So yeah, so those are my picks.
CHUCK: Forrest, what are your picks?
FORREST: My other pick came up when I was thinking about tools that I use at npm a lot that are maybe not really obvious to people. I spend a lot of time reproducing issues and a lot of time just trying to quickly sketch something out with some packages. I need a way to basically create a folder and get the structure in place to just make it work. To do that, I use the npm init command. When used on its own, it will run you through a little wizard that will have you name the project, choose its license, say how you run the test command; it basically just prompts you for a bunch of stuff.
But if you just want to do something right now really quickly, you can run npm init -y. It will generate a package.json for you with the package name based on the folder that it’s in. It will even do things like look at any existing node modules and write them into the package.json for you. It’s even got a tiny (not tremendous, but a tiny) bit of intelligence to figure out which of those dependencies are tools that are only used in development and which are used in the actual package itself, and it will sort them out for you accordingly.
So if you haven’t ever played around with that, give it a shot. It’s a great way to do rapid prototyping and it’s pretty effortless. You may want to take a look at the documentation to figure out what properties you need to set in your configuration so that it gets the right license and your preferences for a lot of other things, but give it a shot.
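A quick sketch of that no-prompt flow (the directory name here is just an example):

```shell
# Scaffold a package without answering any prompts
mkdir demo-app && cd demo-app
npm init -y            # same as: npm init --yes

# npm writes a package.json whose "name" comes from the directory
cat package.json
```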
CHUCK: Awesome. Rebecca, what are your picks?
REBECCA: I’ve got a few. Last May I gave a talk at my local EmberJS meetup on what was coming in npm3, so if you want to hear more and see some slides, I have a link to the video of that.
My real picks are the Open Source and Feelings Conference is coming up beginning of October. I for one am super excited about it. Npm is sponsoring it and a bunch of us will be there so I hope to see lots of people there. Looks fantastic.
Like Forrest, I have a couple of obscure npm commands that I use all the time, and that’s the trio of npm docs, npm bugs, and npm repo. For a named module, they get you easy access to the documentation, the issue tracker, or a web view of the Git repository, respectively. I find myself doing this all the time: if I’m working in a module and I see it’s using something and I’m like, okay, but how does that actually work, I can just type npm docs and the name of the module, and it opens up my web browser on its documentation page.
By default that’s the GitHub readme, but for larger projects it will be wherever they’ve set their homepage to point. So that’s a thing I use daily. Those are my picks.
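A sketch of that trio (lodash is just an example module name; each command opens your default browser):

```shell
npm docs lodash   # opens the module's documentation page (homepage or readme)
npm bugs lodash   # opens its issue tracker
npm repo lodash   # opens a web view of its Git repository
```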
CHUCK: Awesome. All right. Well if people want to follow the npm project or anything with what you folks are working on, what should they do?
FORREST: Two things. One is to follow me, othiym23 on Twitter, and Rebecca, who is ReBeccaOrg, both words jammed together, no hyphen or underscore. Also our colleague Kat, who talks about npm stuff a lot; she’s maybekatz on Twitter, again no punctuation in there.
Also, if you want to keep up to date with recent changes, it turns out that we have a releases page on GitHub. It’s a piece of GitHub functionality that I think a lot of projects don’t know about, but we make very heavy use of it, and you can actually subscribe to it as an Atom or RSS feed. I’ll drop a link to that into the show notes.
We also have our roadmap, which I mentioned earlier. That’s a good way to keep an eye on things; we always update it with what we’re working on that week and what our immediate and not-so-immediate horizons are as an organization. So we have a lot of ways to keep track of us, and you should check some of them out. Maybe one of them will become part of your regular workflow. We put a lot of work into making our release notes at least approachable, if not always entertaining or amusing. So give them a shot.
CHUCK: All right. Well, thanks very much. Well, I guess we’ll wrap up the show and we’ll talk to everyone next week.
[Hosting and bandwidth is provided by the Blue Box Group. Check them out at BlueBox.net.]
[Bandwidth for this segment is provided by CacheFly, the world’s fastest CDN. Deliver your content fast with CacheFly. Visit CacheFly.com to learn more.]