018 RR What Not to Test


When not to test:

  • In a new startup trying to get funding
  • It's too hard to write the test
  • It'll take too long

What is a test?

  • Verifies code functionality
  • Automated testing
  • Manual tests

Only run the tests you need to run.

Tools mentioned: Guard, Autotest, Timecop, Redis

Testing myths:

  • "It's too hard to test this."
    • This is usually a design flaw.
    • Forked processes are hard to test.
    • Complicated user interface stuff.
  • "It's better/easier to fix it first."
  • "You can't test code in rake tasks."
  • "Testing is hard," or "Writing code and tests is harder than writing code."
    • This is code for "testing requires discipline."
  • "Don't test one-shot code." You're only going to run it once, until you need to run it again.
  • "You don't have to test migrations."

TDD gives you a regression test suite and better design.

Do you not test because of ROI?

  • Routine tasks offer diminished design ROI.
  • Duplicate code
  • Low-risk code

Testing JavaScript from Rails:

  • Jasmine
  • Vows
  • Jasmine Headless Webkit
  • Zombie

The clock is a global variable.

Code coverage:

  • Ryan likes to have 100% code confidence.
  • Coverage means the code was exercised, not that it works correctly.
  • Rails' migration code didn't get covered and then got crufty.
  • Code that is difficult to test is difficult to refactor, reuse, extract, and modularize. It's a "meta-code smell."


CHUCK: Hey everybody, and welcome back to the Ruby Rogues podcast. This week, we have five developers on our panel. We are going to start out with our guest; he is visiting us from RailsCasts.com, Ryan Bates. RYAN: Hello everybody! CHUCK: Now Ryan, we usually just do some kind of quick introduction, so if there's anything else you wanna let us know. RYAN: No, that's it. CHUCK: All right, we also have James Edward Gray. JAMES: That's 'Edward,' not 'Earl.' I'm not a tea. CHUCK: [Chuckles] David Brady. DAVID: Team Edward, just like James Edward Gray. CHUCK: [Chuckles] And Josh Susser. JOSH: Thoroughly Team Jake. CHUCK: [Chuckles] And I'm Charles Max Wood. If I fall asleep, it's because my wife just had our fourth child. And yeah, so I didn't sleep last night. JAMES: Woohoo! JOSH: Congratulations, Chuck! DAVID: Congratulations! CHUCK: Thanks guys. All right, so this week, we are going to be talking about some stuff around testing. We have a couple of topics that we are going to cover. I think the first thing that we were going to talk about is What Not to Test. And just to humor Josh, I think we need to ask, "What's the definition of 'testing'?" [Laughter] JOSH: That is so predictable. [Chuckles] CHUCK: I love it when Dave gives the definitions. JAMES: Yes. DAVID: You need to give me more time to prepare for these. So the definition of testing… oh, gosh. It's that crap you do at the testing center, when you get the number two pencil and you fill in the circles. Testing is when you assert that the program does what it is supposed to do, and that means that you have to say what the program is supposed to do. CHUCK: All right. RYAN: It doesn't involve hitting 'reload' in the web browser? CHUCK: [Laughs] JAMES: That's one kind of testing. DAVID: It can -- in a horrible, horrible world. CHUCK: As a QA person, I actually lived in that horrible, horrible world. They wouldn't let us script any of our tests.
JAMES: You know, as much as we make fun of that, I actually have known some people that never hit reload in the browser, and they believe that you can just tell everything from a test suite, and that doesn't work either. DAVID: I have been bitten by that in the last two weeks; I had everything green across the board, all my Cucumber features were passing, I had 100% code coverage, and I obviously… and I launched the app at 10 o'clock at night, and I went to bed, I got up the next morning and yeah, the critical path didn't actually work, because there was something that I never bothered to write or test. CHUCK: Oh wow. JOSH: Okay, so that's not one of the things you shouldn't have tested. DAVID: Right. That was a bad call on my part. CHUCK: [Chuckles] Test the critical path, is that the moral of the story? JAMES: [Chuckles] That's awesome. DAVID: Yes. CHUCK: [Chuckles] JOSH: Okay, we are done for this week. [Laughter] CHUCK: Yeah, if it's important, then test cover it. JAMES: How about if we come at this one from a slightly different angle? It's probably worth pointing out, Ryan reminded us after we invited him to be on the show that we've actually discussed this topic before. Testing is probably going to be one of the topics we kind of come back to again and again, just because it's huge and we get new insights, and we hear new things and stuff like that. So, one of the reasons I was kind of interested in talking about this one again is I just got back from Lonestar, and at Lonestar, Obie Fernandez, in his keynote, made a comment about his belief that startups in the early stages, like for example if you are just trying to put together a prototype or something to get VC funding, maybe.
Then he actually argued that he thinks it's wrong to test, because in that situation, you are going to be probably formulating a lot of ideas, trying some things, and the extra time you spent making it solid is just wasted time -- if you are just playing around with an idea. And then of course, after you've secured funding and you know what path you are going forward on or something like that, then it's acceptable, or not even acceptable but desirable, I guess. DAVID: We only have 53 minutes left in this episode, so I'm just going to say, I don't know where to begin saying what's wrong with that, and I'll let other people talk. JAMES: [Chuckles] Awesome. Well, let's come at it from the, "Is it okay not to test?" angle. How about we go around the horn on that one? RYAN: I can start on that. I think I do that on a smaller scale than the way you are talking about, what Obie said, where sometimes, I'm not sure exactly how to design something, and I will just take the current git checkout, just try something out, experiment with it a bit, and then just do a git reset and start over with test-first development, once I get an idea of, "Okay, how do I want this to work?" Sometimes I do that, but that's sort of a smaller scale than what Obie was talking about. I don't know if I would take an entire project and just not do any testing. CHUCK: So I wanna chime in here, because it sounds like what you are saying is similar to what Obie is saying, in the sense that he's saying essentially, "As a startup, you don't know what's throwaway code. It's all a big experiment." So you know, you effectively are doing that. And then when you are ready to go, you do a quick reset or reboot, and then start testing in earnest because you know where you are going, and what you are building. RYAN: I guess it's a way to define the spec, maybe? CHUCK: Yeah, better. JAMES: Yeah, I believe that's where he was going with it. I did a bad job of relaying the story.
But basically, he had a scenario where he and his partner came together to work on a project, both being from a big Rails consultancy, they were very used to doing it the professional way, and they went forward, building a lot of well-tested, well-developed code, and got to a point where they realized a subset of their problem was the interesting part, which made them end up throwing away a lot of that effort. CHUCK: Right. I do know that some people -- and I'm not one of them -- claim that they can actually write code faster doing TDD. JOSH: So that's interesting, because there were some numbers being tossed around in the last day or two about a study, and I'm trying to find the citation, but I can't find it at the moment. But they are saying that the quality or the defect rate of code done by TDD was something like up to 90% higher quality code or 90% fewer bugs, but it took something like 15 to 30% longer when you're doing TDD. So that's just one set of numbers. RYAN: I think the time it takes is a little bit hard to measure, because we also have to take into account the time that it will save you later on, in the future, assuming, I mean, the specs, you know, don't change drastically. JOSH: Sure, that's a great point, but I think that's what Obie was getting at is that, if you are kicking off a startup company and your number one goal is to get to the point where you can get funding to create the product that's going to ship, then it's more important to get that done quickly, rather than have a high-quality, long-term, maintainable code base. DAVID: So there's a balance, right? I mean, I don't wanna test my … on my mock ups. I don't wanna waste a whole ton of time testing a prototype to make sure that it's airtight. And I can see that.
And my head kind of itches when people say that testing is, "It's twice as many lines of code, therefore it is twice as hard; therefore if you go without testing, that's half as easy," or twice as easy, and I don't know, I think those people are just coming at it wrong; their whole approach to testing is they haven't honed the testing to a point where it's their go-to tool to develop code the first time around. Having said that, whipping up a quick prototype and showing it to the customer, that's a perfectly valid test. If I can go back to my own definition, that asserts that the code does what it should do; it should look like this, "What do you think? Will you give us money?" "Yes, we will." "Great. We win." So it passed the test. However, if you stay in that mode; if you statically compile your brain into that mode, you are now in "Don't test ever" mode and you are now writing legacy code. And you have to recognize that if you are not ever going to test, you are an idiot. And if you start off with this goal of, "Hey, let's stop testing because then we'll move faster," that is a very seductive statement and it's going to tempt a lot of people over the line into writing legacy code. JAMES: I do like that kind of broader definition of a test. I don't know about you guys, but if I wanna destroy something I've written in about ten seconds, I just go get my wife and sit her down at a keyboard -- and I'm not saying that my wife is evil or has gremlins or anything like that -- what I am saying is that she doesn't have the assumptions I have, so when she sits down at the keyboard, the first thing she tries is the thing that makes me think, "Well, I didn't think of that." And it all falls apart very quickly. JOSH: So does our default definition of testing mean "automated testing?" DAVID: Not necessarily. JAMES: That's a good question. I see what you are saying.
I'm glad you're going there, because I was actually going to ask if using Autotest or Guard are good practices. RYAN: Before we get into that, James, I had a question to kind of extend what we were talking about earlier, and that is: do you think testing -- automated testing in this case -- makes changes more difficult later on, if you aren't sure of the user specs? Sort of going back to what Obie was talking about, if you aren't sure exactly what features you want in the application, do you think having tests written out completely makes it more difficult to change those features in the future? You [inaudible] behave a little bit differently. CHUCK: I think there is a cost, mainly because you then have to not only invalidate code, but you also have to invalidate tests, and you may have to rewrite other tests that relied on that, but still provide coverage on other functionality that you wanted to keep. So I mean, there's definitely added cost in that. I don't know if that necessarily makes it good or bad; I think what we're really discussing here is what the tradeoffs are. DAVID: I'll weigh in as well and say that every time I have heard somebody make this argument, there has been an excruciating amount of pain in poorly factored tests, to the point that if somebody says, "Well, if changing the code means I've got to change my tests, let's not write tests," I just immediately assume that they are writing poorly factored tests. I would love to be proven wrong on that. I would love for somebody to come back to me and say, "Well, I've got a really good non-duplicated test suite, but we've spent all this time cleaning up the test suite, and we spent all this time, blah, blah, blah." And should you do that while doing exploratory testing… okay, you shouldn't… if you've got to move fast, you shouldn't. But at some point, you've got to realize that you are mortgaging your future at an exorbitant interest rate.
JAMES: I was thinking about the whole, "Is it hard to change with a good test suite?" In my experience, I haven't really run into that a lot. I've heard people use that argument quite a bit, but I find usually if I'm changing something, then at worst, I have to yank out a few tests that I had before. My delete key is pretty fast; it goes down real quick and comes back up real quick. So I think actually one thing that kind of bothers a lot of people -- I have met multiple people who are very allergic to the delete key. Like once they've put it in there, they are invested in it and they care about it. And it makes them feel bad to say, "Oh, I was wrong. And I need to just yank these ten tests." And that doesn't really bother me. If I've learned something and gained new ground, then I don't have a problem pushing delete. DAVID: Mh-hm. I'm with you. JOSH: So everybody knows the red-green-refactor cycle for TDD. There's another step, where you go red, green, refactor -- and then you refactor your tests. DAVID: I like that. JAMES: I like that too. I think I don't do that fourth step often enough. My test code does tend to be a little bit messier than my main code. JOSH: I think that's a pretty common occurrence, if you look at the typical project. CHUCK: That's okay, your tests don't actually do anything important, so… DAVID: They're not really code. CHUCK: Yeah, exactly. JOSH: So I think it's interesting. To get back to the topic of what not to test, and when you shouldn't test, there's a couple of common things; like one, "It's too hard to write the test," and the other one is, "The test would take too long." And I saw a presentation last night about a big, distributed, parallel testing system that they put together at Square. They had test runs that were taking many hours in CI, and they came up with a way to distribute and take advantage of parallelism to deal with that.
But if you are trying to run tests locally, and it takes you twenty minutes to run your test suite, then it's really hard to get into the red-green cycle. And to respond to James, I think things like Autotest or Guard are great to have stuff running in the background, but I don't like them when you make a small change -- if I make a change, I know exactly what tests I wanna run when I'm test driving. And that might take me only a few seconds to run those tests, but if I have to wait for Autotest or Guard to catch up by running all of the dependencies, then that might be an issue. And I know that those things have gotten smarter about trying to focus on the tests that actually matter. CHUCK: I wanna jump in here and ask another question then, because sometimes you change something and inadvertently break a test that isn't on your list of things that you need to run. And I think that's where Autotest and Guard and some of these things come in handy, because they'll actually run your entire test suite, and so, if you fix something up in your model, and you break one of your tests in your controller, then even if it wasn't something that you knew you needed to run the tests on, it will catch that. JAMES: Let me just make a comment about how Guard actually works. I've been using it quite a bit lately, and the way Guard works is when you fire it up, it runs the entire test suite, and assuming that passes, you are good. And then going forward, it runs based on these patterns you write; so you basically tell it, "If I change this model file, then go run this file in Test::Unit, blah, blah." And so you can kind of link those models to the tests that cover them, and it runs those files. As soon as something fails, then it goes into a failure state, where it just keeps rerunning the failing stuff as you make changes. And then once you've succeeded in passing that thing that was failing, it resets and runs the entire test suite again.
So that is basically to verify, "Okay, you fixed that problem. Let's make sure you didn't screw up anything else," is kind of what I think it is. And so for the most part, it is a very intelligent pattern, I think. And it does do a pretty good job of zeroing in on the thing. I do have to agree with Josh though; when I'm just doing it myself and driving, I am faster. Sometimes when I'm waiting on a run, I'll go back and work on something else I know needs to happen, and so I'll get that file saved before Guard is done checking whatever it is going to check and stuff like that. So when I am just moving around myself, I can get ahead of it -- and that bothers me a little bit. However, I am a bit like, if I'm going to try to dig in… if I'm going to be here a while, working on something, then I'd like to go ahead and set up Guard, and let it walk me through the process. And I love Guard's ability to do things like bring up Redis for me and stuff like that. So, I've found it helpful to use, but I do agree with Josh that I can move faster. RYAN: I love Guard. I agree with James there. It's definitely an awesome tool. But I do agree with Josh too; I find that if it takes over 30 seconds to run, I generally fall back to running the tests manually, because I don't wanna be stuck in that case where I am waiting a minute after I make a quick change, and that breaks my red-green-refactor cycle, or I'm stuck waiting on all the tests to run. I like to isolate, "Okay, just this test file," and then I can go much faster that way. CHUCK: So you are not a fan of red, bathroom break, green, refactor? RYAN: [Chuckles] Exactly. DAVID: I have what I call the "Twitter limit," which is seven seconds. From the time I hit cmd+s or ctrl+s and save the file, if I don't have a green or a red within seven seconds, I'm looking at Twitter; my ADD kicks in. And so when people talk to me about 30-second test suites, I really start to cringe.
When people talk about 30-minute test suites, I'm like, "Okay, we've got a problem here." I don't know, maybe I'm a weirdo, but certainly, your focused tests should be in and out in just single digits of seconds -- which is why those [inaudible] is really painful right now. RYAN: For the most part, Guard, I find, solves that problem pretty well for me. Just a couple of exceptions in my case, where I have a fairly large test suite. In a couple of apps, I have to fall back to something else, but for the majority of projects, Guard works really well. DAVID: I'm okay if your integration suite takes two hours. I mean, that's something the CI machine can do; it can come find you when things go red. James, I find myself getting half a step ahead of Guard a lot, where I save the file and I start writing the next thing, and then Guard comes back and gives me the result of the previous one. And as long as it's green, green, green, green, I'm okay with it. It's when it goes red and I'm a step ahead that I have to be willing to back up and go, "Okay, I've got to do baby steps. I've got to make things [inaudible]." Does that make sense? JAMES: Yeah, I run into some issues where I find a bug and I'm like, "Oh, I'm going to need to change like three different things in different places for that." And I find myself actually getting to where I've changed all three files and then I'm debating with myself about where I hit save first… [Laughter] I realize when it gets me into those scenarios, it's not a good thing, because it's actually counterproductive. I have really liked using Guard and working with it, but I do think it's good to notice that it's a trade-off, and there are times when it's okay to go without it. CHUCK: How much time do you spend configuring Guard, like when you create a new model or something? JAMES: None.
When you set up Guard, you put in these regexes, basically, that say, "When this file changes, you'll find the matching test file here." So the only time you really end up changing it is when you do something like introduce a new dependency, like Redis or Resque or something like that, and then you want Guard to automatically start that up for you and … back down. It will watch your bundle file and redo your bundle when you make changes there -- your gem file, I mean -- and stuff. And so that's a pretty neat aspect of it: it's kind of all in one. In fact, I find that I often go to the terminal to do it myself, but then realize that Guard has already done it. And it's like, "Oh yeah, I forgot." It's kind of neat. CHUCK: That makes sense. All right, is there anything else we wanna say about what not to test? JOSH: Do we wanna talk about some of the "myths" of what not to test? CHUCK: Hercules to the rescue. JOSH: [Chuckles] I prefer Xena. So, I bet James has a good myth that he can bust. JAMES: Me? Nice. [Chuckles] Why do I always get in trouble here? DAVID: You keep starting it. [Laughter] JAMES: Good point. He started it. Let's see. A myth on what not to test… Oh geez, that's a good question. JOSH: How about, "It's too hard to test this?" JAMES: Yeah. That one usually turns out to be bunk. Although there are two cases I'm sympathetic to. But I guess I'd better be careful here. I do agree that "it's too hard to test" is usually bunk. If it is too hard to test, that's usually a design flaw, in my experience; that you haven't factored the code, you haven't busted it out enough and separated things. Avdi has some great ideas on stuff like that. I was running into a time problem not too long ago, where I had some tests that would only fail in the morning, but they wouldn't fail in the afternoon. I've used Timecop and put it in there.
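As a concrete illustration of the pattern setup James describes, here is a minimal Guardfile sketch. The file paths and the choice of the guard-rspec runner are assumptions for illustration, not taken from the episode:

```ruby
# Guardfile -- a minimal sketch; paths and the rspec runner are assumptions.
guard :rspec, cmd: "bundle exec rspec" do
  # A changed spec file reruns itself.
  watch(%r{^spec/.+_spec\.rb$})
  # A changed model reruns its matching spec file.
  watch(%r{^app/models/(.+)\.rb$}) { |m| "spec/models/#{m[1]}_spec.rb" }
  # A change to the helper that everything loads reruns the whole suite.
  watch("spec/spec_helper.rb") { "spec" }
end
```

The block form of `watch` is what links a model to the tests that cover it: the regex captures the file's base name, and the block maps it to the corresponding spec path.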
And if you guys don't know that gem, it's a great one to look up; it lets you control the time so that you can say, "For right now, the time is such and such…" so that you can make it be morning in the afternoon, and thus your tests will start failing and you can figure out what's going on there. CHUCK: Now I'm confused. JAMES: [Chuckles] Right. Now it's confusing. But the cool thing actually, Avdi even gave me a better idea that I like better. And he said, "When I'm doing time calculations, I make sure I'm always passing my clock into the method." So basically, a method should not be generating its own clock. Like, it shouldn't say "Time.now," because then it's using its own idea of a clock. DAVID: People forget that the system clock is a global variable. JAMES: That's right. So if I'm passing my own clock in, then the case of my test is trivial, because I just set it to whatever I want it to be and pass it in. And in Ruby, that's really trivial, because a time is just some big struct object with some methods you call on it; you can create a little struct yourself with a year or day, whatever methods you're going to use, and set them to whatever you want, and you're good to go. So, I thought that was a great tip from Avdi. So I do agree that usually, "it's too hard to test" is kind of an excuse. Another thing, external web services, I think, being the perfect example of when you need to introduce a little mocking, and maybe we can talk about that a little later, but there's ways to test them. There are two cases I'm sympathetic with, though. When you have like forked processes, and you are checking their interprocess communication or something like that -- I've tried to write tests for that in the past and found it very difficult. And I've really tried it in a lot of different ways, so I tend to just try to make sure that one guy is doing everything I expect him to do, and the other guy is doing everything I expect him to do.
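The clock-injection tip from Avdi, as relayed above, can be sketched like this. The `Invoice` class and its names are hypothetical, invented purely for illustration:

```ruby
# A sketch of passing the clock in rather than calling Time.now inside
# the method. The Invoice class is a made-up example.
class Invoice
  def initialize(due_on)
    @due_on = due_on
  end

  # Defaults to the real clock; tests pass in a fake one.
  def overdue?(clock = Time)
    clock.now > @due_on
  end
end

# In a test, the "clock" can be any object that responds to .now:
FakeClock = Struct.new(:now)

invoice = Invoice.new(Time.new(2011, 10, 1))
invoice.overdue?(FakeClock.new(Time.new(2011, 9, 1)))  # => false
invoice.overdue?(FakeClock.new(Time.new(2011, 11, 1))) # => true
```

This is the "little struct yourself" James mentions: `FakeClock` only has to answer `.now`, so the test controls time without freezing the global clock the way Timecop does.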
But it's very difficult for me to test them both in tandem. I mean, at that point, you are basically dealing with three processes, because you have the two that are trying to talk to each other, and then the test suite that's trying to referee. And I find the game just gets out of control. But then the other thing being complicated user interface stuff, though we have some pretty good tools for that now. RYAN: Do you ever find a case where you know exactly how to solve the problem in the implementation -- you just need to change a couple of lines -- but testing it is really difficult? Do you ever find a case where you say, "I just wish I could change the implementation and not have to go to testing"? JOSH: Oh, yes. RYAN: I run into that once in a while. And it takes a lot of discipline to go, "Okay, let's start with the test. I know it's an easy fix, but start there." JAMES: I'd say that especially comes up with bugs. I always prefer to write the test first, to protect against regressions, but it's like you say: as soon as I've seen the email come in from the user, I know exactly what line of code it's in. I just wanna go fix it. JOSH: Right. I have a myth, and that's, "You can't test code in rake tasks." JAMES: Yeah, I don't like that one. JOSH: There is no built-in test harness anywhere for, like, if you are writing a Rails application and you wanna add some rake tasks, there's no folder to put your rake task tests in. But it's actually trivial to write all the functionality of what you do in the body of your rake task as a library class, which you can test totally fine in your Test::Unit or RSpec. And then the entirety of your rake task body is just a one-liner that calls that class somewhere. DAVID: And a one-liner, you can test by inspection. JAMES: Yeah, I think that's the right way to write any rake task: the only thing it should do is call into some method.
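Josh's rake-task advice can be sketched as follows. The class name, task name, and file paths are hypothetical examples, not from the episode:

```ruby
# lib/session_sweeper.rb -- the real work lives in a plain class (a made-up
# example), so Test::Unit or RSpec can exercise it directly.
class SessionSweeper
  def self.run(older_than_days: 30)
    # ... delete stale session records here ...
    "swept sessions older than #{older_than_days} days"
  end
end

# lib/tasks/sessions.rake -- the task body is then a one-liner you can
# "test by inspection," as David puts it:
#
#   task :sweep_sessions do
#     SessionSweeper.run
#   end
```

Because the rake task only delegates, there is nothing in it left untested: the class carries all the logic and all the tests, and the task is trivially correct by inspection.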
A lot of times for me, it's just a model method -- a class method on a model, usually -- or something like that. But I think that's the right way to write a rake task. DAVID: So I've got a dual myth. CHUCK: Okay. JAMES: Impress us. DAVID: All right, so one side of the myth is, "testing is hard." And the other side of the myth, a little more refined, is, "writing code plus writing tests is always, by definition, harder than just writing code, because of the additive property." Both of these are based on the assumption of a static skill set. I just wanna respond to people when they say, "testing is hard" -- the answer is not, "don't test;" the answer is, "get better at it." And when people say testing is harder than not testing, test plus code is greater than just code, there's this thing that started happening. Maybe it's just early onset Alzheimer's, I'm not sure, but there's these… JAMES: It is. DAVID: Okay, good. Do you have my pills? CHUCK: [Laughs] DAVID: I've been meaning to ask somebody. There's this weird thing that started happening with me and my code recently, where I've started pair programming with my test suite. And I'm talking like the good kind of pair programming, where you don't just have one person driving and one person watching, but you are actually having an active conversation, and there's this gestalt that emerges of two brains working together. And I've got this test suite that's got my back, that's checking everything, and I stopped watching my own back. I just focused on what's next, what's next, what's next, and this test suite is coming along behind me, locking everything down and holding it all in place. And then I get to the end of an hour, with no idea where the time has gone, and I turn back and I go, "Holy crap, I wrote all this code." This is the same guy who, at the top of the hour, said that he built a whole site, shipped it, and then woke up the next morning and said he missed a step in his critical path.
So take it with a grain of salt. But my point is that while I was doing that type of testing, or that type of development, the testing plus coding was a different type of coding; it was not the same kind of coding. It was not simple; it is not simply reduced to ten lines of code plus ten lines of tests equals twice as many lines of total code produced at the end of the day. "Just get better at testing" is how I'm trying to bust this myth. Don't assume a static skillset; don't assume that testing is always hard. And if your tests are always tangling up around your ankles, tripping you up, be sensitive to that; that's a code smell. That means your tests are not well-factored; you need to go back and refactor them. JAMES: I definitely agree with what you said. I actually have been thinking about what people mean when they say, "Testing is hard." And what I believe is that what they are actually trying to say is, "Testing requires a lot of discipline." Usually, when I talk to them and get to the meat of the problem, the kinds of things they say are, "Well, I whip up this controller and I run through it in a browser and it's working, and then -- oh, I need to go back and write the test for that." And that's hard for them. Of course, eventually you learn that it's actually easier if you start with the tests and come forward, because you can push those things forward together at the same time, instead of going back and playing big catch-up every few moments. But I do agree a little bit that it does require a lot of discipline. In a way, I think that's one of its strengths: it does require discipline. You know, let's face it, when we sit down and write 600 lines of code unchecked, we make a mess, just because we are dumping from our brain. But when we sit down and force ourselves to be disciplined about it, we tend to do a better job and think things through better. DAVID: I like to think of testing as a way of scaling your complexity.
Of course, anything that is web scale, right? "Whatever they do to get those kick-ass benchmarks, that's what I'll do." But testing is complexity scaling. And that means you have to invest upfront; you have to learn something new; you have to try a different way of cross-checking yourself. But yeah, you are right. At 60 lines, that testing feels like nothing but overhead; at 6,000 lines of code, that testing is a godsend. RYAN: I think for me, the hardest test to write [inaudible] is the first test, because that sort of sets the foundation of, "How am I going to really design this?" And then once I get the first test going, then the second test starts rolling, and tests become much easier to write after that, because I'm sort of on this ball that I'm modifying slightly. But the first test is always the hardest, and I think that's what a lot of people get hung up on: "Okay, how do I start this?" JAMES: I've got to bring up this funny thing that I do sometimes, that I notice is definitely wrong; sometimes I'll write a little method that has this… it takes in some argument, does three steps on it, and hands it back out or something like that. And then, I'll go in and I'll write the test, and on one side of the assert equals, I just call that method; and on the other side of the assert equals, I just do the exact same three steps [chuckles], so I'm like testing it with a copy of itself. And I've found myself doing that a couple of times, and that's kind of embarrassing. JOSH: That drives me a little nuts as well, and that's one of the cases where I wonder, "Is it even worth writing the test?" JAMES: Right, if it was so simple, yeah. CHUCK: That's one thing that I wanted to jump in on too: we are talking about when not to test; are there instances where you don't test because it's just not worth it? JOSH: Yes, yes. And there's a couple of angles on it.
One of my angles on that is, when I'm doing TDD, there's two main things that you get out of it; one is you have a regression suite. When you are done, you have test coverage, and you don't have to worry about whether the code is working or not. But the other thing that you get out of TDD is that it drives the design of the software. And you end up with software that is, by its nature, better factored, because to be testable, you can't have all these gigantic methods and have the big mishmash and stuff. So if I'm doing something that I've done twenty times already, and I know exactly how it's supposed to work, then the benefits that I get from TDD… I still get the benefits of the test coverage, but driving the design isn't going to be as helpful for me. So if I'm doing a typical kind of RESTful controller in Rails -- which I've done so many times now, I can't even count -- I don't need to have that help me flesh out the design of it. And I probably don't need to worry too much about the test coverage on it if I have five or six controllers that are all operating very similarly. So I will sometimes slack off a little bit on there if I'm feeling a little bit of schedule pressure, and if I have to make some tradeoffs and figure out where to cut corners, that's a typical place where I will cut a little bit. DAVID: So I have a question, Josh. There's a pattern that I see in my code; I'm curious to know if you see it as well, where I'll have some really in-depth tests just drilling into this controller -- drill and drill and drill and drill -- and over the top, there's an acceptance test. And then, there's another controller that works kind of the same way, and all it will have is the acceptance test. And a little note that says, "If you really wanna get into the details, go look at this other one here." But it's the same stuff, so there's duplication. JOSH: Yeah, absolutely. DAVID: So I'll have one drill-in and a coverage; and then I'll have three coverages.
And I’m content with that, because I’m not really worried… it’s all happy path. I don’t need to test happy-path implementation if there’s a covering acceptance test. JOSH: Yes, that’s a great example. JAMES: I wanted to comment on what you guys just said. I always say that tests are risk management, right? And if it’s something like you said, that you’ve done ten thousand times, you know it like the back of your hand, you are not going to screw it up, then the risk is very low; so you need very little risk management, right? You are not going to screw it up. But if it’s something that is new and you are feeling your way in the dark, then… I think it was Uncle Bob Martin that said in one of his Rails keynotes that when your brain surgeon gets in there, and the operation gets more complicated, you don’t want him to freak out; you want him to take a step back, calm himself down, and bring every tool he has to bear on the problem; to become more professional, to take more baby steps, to be more careful, because it’s your brain on the line, you know? And I think that’s exactly how testing works. When we’re cruising along, when we know what we are doing, when we are comfortable, it’s okay to flow. And then when you are not, then you need to be a professional, take a step back, and start baby-stepping. DAVID: A professional falls back on his training, not on his instincts, when he wants to panic. CHUCK: So I have to ask then, we’re talking about ROI, and another instance that I wanna explore a little bit is code that maybe changes often or… I don’t know, where it basically makes the tests brittle; not so much that you get a low ROI because you know the code will work, but maybe you get a low ROI because it’s going to take a lot to maintain it. [silence] Are you guys still there? JAMES: [Chuckles] Yes. We are thinking. DAVID: Yeah. JAMES: But I definitely see what you are saying about if there’s code that changes often.
I think my main concern there is, if there is code that changes often, then why is that happening? Is that happening because that section is being rebuilt all the time, or is that happening because that code is designed badly, and as you put more stresses on it -- like, from the outside -- you are finding that it’s not very capable of what you needed it to do? In that case, my prescription would be, up the tests, not lower the tests, right? CHUCK: Right, because it’s high risk. JAMES: Right. DAVID: I would actually flip what you said as well. If there’s a piece of code that’s static… not important, I think it’s a very powerful statement to say, “I refuse to test this. I am relegating this to the bucket of stuff I’m not going to test. And if this blows up in my face, the correct behavior is for the server to 500 and fall down and die.” That’s a powerful statement. CHUCK: So rather than a high rate of change, maybe there’s just a problem that’s way too complex, and will require a lot of maintenance. Are there situations like that? DAVID: I like to back off on those. I like to get a level above them, or two levels away, and start writing acceptance tests around what it’s supposed to do at a higher level. If I’m churning too much in implementation, and my tests are churning with the implementation, about the second or third time I say, “Okay, this is dumb, I’m doing the same three steps in both the code and in the test. I’ve got duplication. How can I back off and say, ‘What’s really wanted here?’” And now the implementation can churn a little bit, like if you replace a query with a cached query and that kind of stuff, and your acceptance test doesn’t change. CHUCK: Mh-hm. That makes sense. RYAN: I have got a question for you guys. It might change the subject a little bit here, but I know this is called the Ruby Rogues podcast, but what about JavaScript? That happens to be an area where I suffer in testing a lot, the JavaScript code.
I was wondering if you guys sort of had the same problem. I’ve been working on that more recently, but I want to get you guys’ feedback on it. Sorry to cut you off there, James. JAMES: No problem. DAVID: I love Jasmine and Vows for testing JavaScript and CoffeeScript. CHUCK: Jasmine and Vows? I’ve heard of Jasmine. What’s the other one? DAVID: Vows. So Jasmine is RSpec, and Vows is Cucumber. CHUCK: Okay. DAVID: And Cucumber is good for you. [Laughter] RYAN: Something I’ve been looking at recently is Jasmine Headless Webkit. That’s what the project is called. It works great with Guard, and it’s sort of a nice way to run Jasmine without having to open a browser. DAVID: Zombie is that way with Vows as well. Zombie is a headless… and it’s the new George Romero zombie, it’s supposed to be fast and headless. [Laughter] JOSH: Don’t headless zombies crumble into dust or something? JAMES: It’s complicated. DAVID: I don’t know. All I know is I’m writing all my code from the rooftop of a shopping mall now. JOSH: Okay, great. CHUCK: [Chuckles] JOSH: Let’s see, I had another myth-type thing to address, and that’s “one-shot code.” CHUCK: One-shot code? JOSH: Yeah, like if you have something… let’s say you’ve got a big CSV file that you need to import, and you only ever have to import it once. So, “Oh, I don’t need to test-drive that. I don’t need tests for that. I just need to write the code and make sure it works.” DAVID: Can I be your audience [inaudible], Josh? JOSH: Yeah, please do. DAVID: That’s wrong! You should test that. Tell them why! [Laughter] JOSH: So first off, the myth of the one-shot code is that you are only ever going to run it once. Because first off, you run it in development, and then you run it in staging, run it in production, and then you have to run it in production again, because somebody finds another file that you have to import. So I think that one-shot code is very rarely ever one-shot. JAMES: I have a rule of thumb for that.
If I can do it in IRB, then it’s fine. So if somebody writes in and they’ve forgotten their password and they just want me to reset it to something, I go into IRB -- or script/console, I guess, in this case -- and call update_attribute and set it to something stupid, you know, or whatever. Then I’m fine with that. If it’s something I would be uncomfortable doing in IRB, then it’s wrong. Take the example you give where you have to do that CSV file. I actually had a client that beat this out of me, because they would write me and say, “I need x statistics from our database.” And they would explain what they wanted. And I would go in and I would generate it, and it would spit it out. I would do it in IRB and, you know, they were long and messy, but that’s okay. I’d spit it out, put it in a file, download it, send it off to them -- and then I’d break my connection. And I swear, five minutes after I disconnected from the server, he would write me back and say, “You know what, now that I’ve seen that, I would like to see this, which is almost like that, but with this.” CHUCK: “Can you tweak it a little?” JOSH: [Chuckles] Yeah. CHUCK: One-shot code: you are only going to run it once, until you need to run it again. DAVID: I have the one-shot story from hell. And Chuck, you were there for this. Do you remember the Migratrix? CHUCK: Uh-huh. DAVID: So we had the legacy site and the beta site, and the database format changed. And this is another myth, by the way: “you don’t have to test migrations.” That’s a myth if the migration is weird and crazy and can probably break. And we had a whole bunch of legacy data that had to be seriously transformed and restitched together on the beta site. And so I knew going in that it was too complicated to write without a test, so I wrote all these tests.
I’ll make a long story short and basically say, by the time the smoke cleared, this thing was running every hour, pulling data, because the beta site and the legacy site -- they changed their minds, there was no cutover, the two sites ran in parallel for over six months -- and this Migratrix thing had to run like every hour, pulling new data out of the legacy database, migrating it, and putting it into the new database. And yeah, I still wake up screaming. CHUCK: Yeah, it was beyond complex. I don’t think you need to test things like create_table or whatever, but yeah, if you are dealing with anything that goes beyond just the standard four or five things you are going to do in a migration, you definitely need to test it. And if you are doing any kind of data manipulation, you need to test that too. DAVID: Or normalizing, denormalizing, yeah. JOSH: So one little tip that I’ll give -- a little pro tip -- is that if you do a migration, even if it’s really simple stuff, the best thing to do after you do rake db:migrate is to do rake db:migrate:redo, because that is typically the only time the down migration will ever get run, unless you actually need to run it somewhere. And that’s the only time you’ll ever get to exercise it before you actually need it in production somewhere. So that will catch typos and all that stuff. DAVID: I very briefly forced myself to do rake db:migrate:redo -- or rake db:redo -- and then rake db:migrate VERSION=0 and tear the whole thing all the way down, and then rake db:migrate back up. I have since stopped doing that. It taught me some really good lessons about various things, but this was back in the Rails 1 days, when we really didn’t have a good concept of seeding data, and I don’t know if I would still do that anymore, but it’s a fun exercise; you should try it and see how it changes your migrations.
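The “one-shot” CSV import Josh describes becomes easy to regression-test once the parsing is pulled out into a plain method. A minimal sketch using Ruby’s standard CSV library -- the method name and column names here are hypothetical, not from the episode:

```ruby
require "csv"

# Hypothetical importer for a "one-shot" CSV load: the parsing is a pure
# method with no database access, so it can be unit-tested on its own
# before the script ever runs against production.
def parse_users(csv_text)
  CSV.parse(csv_text, headers: true).map do |row|
    { name: row["name"].strip, email: row["email"].downcase }
  end
end

# The "one shot" inevitably runs again, so keep an assertion around:
rows = parse_users("name,email\n Alice ,ALICE@EXAMPLE.COM\n")
raise "bad import" unless rows == [{ name: "Alice", email: "alice@example.com" }]
```

Because the method takes a string and returns plain hashes, the same check runs against development, staging, and the inevitable second production file.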
CHUCK: It’s interesting, because most of the time you are going to migrate up, and so you are just like, you know, it works; I run db:migrate and then, you know, all my tests pass and the database is in the same state. But the one time that you have to roll it back -- or if you have to roll it back several versions and make sure that all of the data is lined up the way it was, and that everything is behaving the way you expect it to -- you can really get into some ugly places, because your down migrations don’t do exactly what you thought they would do. DAVID: Yeah, and if you only test one side of it, make sure that you can go from version zero all the way up to up-and-running, because for the next developer on your team, that’s going to be part of his setup practice. CHUCK: Yeah. JAMES: Although, just to say, a lot of people have gotten away from bringing new databases up like that; they tend to use rake db:schema:load to put the schema back in and go forward from there. JOSH: Yeah, the argument against that that I’ve heard is that, in CI, you always wanna run all your migrations, because that’s often the only time they get exercised -- so that they still work if you ever need to run them like that. But I like your approach of just loading the schema. CHUCK: Okay, Dave has to leave in like ten minutes, so I’m going to let him do his picks real quick, because it sounds like people still have stuff to talk about. DAVID: I wish I could stay longer. This has been an awesome conversation. CHUCK: So we’ll let him go; we’ll kind of chat for a little bit longer, and then we’ll have everyone else do their picks. DAVID: Okay, so my pick for this week is a wonderful new gem called Pry. I’m kidding. Because in the backchannel, all of us wanted to pick Pry this week, didn’t we? Somebody else can pick that. Somebody else called dibs on that. I have a really weird pick -- as usual.
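The migration workflow the panel is describing boils down to a handful of standard Rails Rake tasks -- a command sketch from the Rails 3-era CLI of the episode, not a full recipe:

```
rake db:migrate              # run pending migrations
rake db:migrate:redo         # Josh's tip: roll back one step, then migrate up again
rake db:migrate VERSION=0    # David's old exercise: tear all the way down
rake db:schema:load          # James's alternative: load the schema directly
```

The redo task is what exercises the down migration before production ever needs it; schema:load skips the migration history entirely, which is why Josh notes CI should still run the migrations themselves.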
I may have picked this before, but Moleskine notebooks. These are incredibly hipster… well, they are not hipster, because you probably have heard of them, but they are very pretentious; they are very expensive -- $20 for like a 200-page notebook. It’s very, very expensive. But the paper is extremely heavyweight; you can write on it with magic markers and it doesn’t bleed through to the other side. And they are small enough that you can stick them in, like, the back pocket of your jeans or your cargo pants and carry them around. And I’m finding that out of all the journals that I have ever kept over the years, this is the easiest journal for me to just always have with me, and always just pull it out and just write something down. It’s very easy to capture. I find more of my thoughts are being captured in this notebook than in any other type before. So there you go. I guess it’s a product plug. It doesn’t need to be. You don’t have to pick Moleskine, especially if you are anti-hipster or anti-trend. But in that case, my pick is a notebook. You need some kind of external RAM. A short pencil is better than a long memory. And anything that will get you writing your stuff down is good for you. And it doesn’t have to be high ceremony; you should pull out your notebook and write scribbles in it, do stupid doodles, have arguments with people in it. Don’t just save it for very precise engineering drawings and your long, drawn-out musings and whatnot. No, just something that, if you are willing to whip it out and write something down, is perfect. JOSH: Didn’t they used to call these things day books? This is actually a fairly common practice, and some of the great thinkers of many ages kept these sorts of daily journals. DAVID: Yes. JOSH: And it’s really great to go back and read, just like, their daily thoughts about stuff.
DAVID: Yeah, the marketing blurb around Moleskine is that they were based off a type of journal that was very popular in Europe in the late 1800s, early 1900s, that was hand-stitched, and only artists carried them and used them, yeah. CHUCK: I like the implication that David is a great thinker, because on the other episode, he said that all of his best coworkers were the ones that had engineering degrees, and that made me feel really good. If he’s a great thinker and his best coworkers had engineering degrees, then I’m up there. DAVID: There you go, exactly. CHUCK: [Chuckles] DAVID: The most important thing, the most astonishing thing for me, is I scribbled a page completely black using a Sharpie magic marker, and it did not bleed through to the next page. It mottled the back side of the paper, but it did not bleed through to the next page. The paper… you just want to write on this paper. It’s so creamy and smooth. It’s very nice. And you get a nice pen, too. CHUCK: [Chuckles] DAVID: That’s my pick. CHUCK: “Creamy and smooth,” that made me think of something else. DAVID: Mh-hm. I hope it was good. CHUCK: I hope so too. DAVID: Yeah, you’ve got a new baby in the house, so. You know what, I probably ought to bow out, you guys; I’m going to lose this room. Ryan, it’s been fantastic being on the podcast with you. RYAN: Thank you, Dave. DAVID: You guys are great. JAMES: Good to see you, Dave. Thanks for joining us. CHUCK: All right, well we’ve got about five more minutes to talk, and then we are going to go into the picks. JAMES: So can I bring up one controversial question? CHUCK: Sure. JAMES: How about, “Is 100% coverage even a good idea?” CHUCK: Well, I’ll jump in and tackle part of this. I think again it comes back to the discussion of return on investment and tradeoffs.
I mean, if you are spending hours and hours testing code that you don’t need to test, to get that last 2%, then no, it’s not a good idea, because you are wasting time that you could spend putting in features or this or that. I mean, if you can get there, and your code is well enough factored that writing that last percent of the tests is not a big deal, then yeah, it could definitely be a good idea -- because then, while you don’t necessarily know that 100% works, at least 100% gets exercised. But that’s the other thing too: we are talking about 100% exercised, not necessarily 100% verification of features or functionality. RYAN: Right. I never liked the word “coverage,” when it just means that the code happened to be executed, because it may not be tested at all. The way I like to think of it is: can I deploy with confidence? If I have an amount of coverage in my test suite that I can run the tests, and as long as they pass, I can just deploy the application with confidence, then I think that’s good enough coverage for me. And that usually means I have at least close to 100% coverage in most of the lower-level code -- models and controllers, of course, for sure. Just to make sure that it will be able to function properly, I like to make the lower-level code more fully tested, because it [inaudible] in the application. If I’m making a gem and distributing it out to many different applications, then I wanna make that very well tested; make the model layer very well tested, because it’s used by many different controllers. The view layer is usually only executed in one specific place, so sometimes I can get by just testing that manually. But the lower-level it is, the more I like to test it. CHUCK: So you like to have 100% code confidence, not necessarily code coverage? RYAN: Yeah. That’s a good way to put it. JOSH: So I wanna build on that, Ryan. Because I think that’s a great point.
There are some great examples in… I hate to pick on Rails -- well, actually I don’t hate to pick on Rails [Chuckles] -- but if you look at, say, the migrations code in Active Record, there’s some pretty crufty code in there, although Aaron has been doing a great job of cleaning that stuff up. But it got that way because there wasn’t good test coverage of it. And somebody made one of those calls early on, saying, “Oh, it’s too hard to test this stuff. I’m not going to test it.” And so it started accreting all of this crufty code that, because it was built on a foundation that wasn’t tested, kept getting harder and harder to test. And as time went on, it turned into something that became incredibly difficult to test. Everything was on the class side of things; class reloading was a huge problem. So if you start off with a foundation that’s not tested and you are trying to build on it, you are just asking for trouble. I feel like I’m channeling David now. [Laughter] JAMES: He’s still here. [Laughter] JOSH: So if you have a piece of code that, for one reason or another, you skimped on the testing of, and now you are going to come back to it and work on and expand it, I think you really need to reevaluate the amount of testing that you have done on it. And maybe you want to stop and write some tests before you go and start expanding on the code. CHUCK: So, is untestability then a code smell? Or is it an indication of a problem? Not always? Usually? JOSH: I think code that is difficult to test ends up being code that is difficult to refactor, code that is difficult to expand on, to extract and modularize. So it is kind of a meta code smell. JAMES: Yeah, I think it is. I think if you have something that’s difficult to test, you have to ask yourself why. I mean, testing is just using code, right? So if you are saying something is difficult to test, you are saying it’s difficult to use, right? And that’s probably not a good thing.
It probably means that there is some extracting that can be done there. It can be made more modular, and stuff like that. JOSH: James, you should really give classes in boiling everything down to a simple explanation. JAMES: Thank you. [Chuckles] JOSH: It sounds great. RYAN: On the topic of coverage, I really like tools like SimpleCov, though, because it makes me more aware, I guess, of what exactly my tests are hitting. And sometimes I do find that, “Wow, this code is not executed at all.” I could just mistype this method completely, make an obviously large bug here, and my tests will still pass. And so, for that reason, I do like to use SimpleCov. I don’t rely on it completely, but I think checking your code coverage in that way is a good thing, just at least to be aware of it. JAMES: I think that’s a good point. The only problem I have with tools like that: there have been some discussions lately about how we talk the big test game in Rails, and the truth is that not as many of us are testing as we think. And it may just be that the testers are kind of a vocal minority, but the point is, I think we make it kind of intimidating to non-testers. And I think some aspect of that is when we hold up tools like SimpleCov and say, “See, you’ve got to get this all the way there.” And I think that’s part of where we go wrong. I agree with Ryan; it’s a totally useful tool, and we use it and get feedback from it. I like to think of 100% coverage as the lighthouse on the horizon: I like to be able to see it; I like to know the light’s there and which way I’m headed, but I don’t ever really expect to get there. As long as I can still see the light, I’m close enough. JOSH: A developer’s reach should exceed his grasp, else what’s a test suite for? JAMES: Exactly. RYAN: Nicely put, Josh. CHUCK: I’m going to mistype that, so it’s not going in the show notes. JAMES: [Chuckles] CHUCK: All right, well I think we’re just about out of time.
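The SimpleCov setup Ryan refers to is typically just a couple of lines at the top of the test helper, before any application code loads -- a sketch assuming the simplecov gem; the file path and filter are illustrative:

```ruby
# spec/spec_helper.rb (or test/test_helper.rb) -- must run before app code loads
require "simplecov"
SimpleCov.start "rails" do
  add_filter "/vendor/"   # keep third-party code out of the report
  # no minimum_coverage gate here: per the discussion, 100% is the
  # lighthouse on the horizon, not a hard requirement
end
```

Running the suite then writes an HTML report showing exactly which lines the tests exercised -- which, as the panel notes, means executed, not verified.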
The funny thing I wanna point out to the audience is that we said, “Well, what not to test is probably not a big enough topic to take up an entire episode.” And so we were going to talk about what not to test, and then factories, and then mocking, because that would fill up the hour. So we’ll have to talk about factories and mocking another time. You know, I just wanna thank you guys for coming on, and we’ll get into the picks. For people who are new to the podcast, a pick is just something that we’ve used that we like, or something that we wanna recommend that makes life better. A lot of times they are code related, but not always. They can be toys, or games, or whatever. So we’re going to go ahead and do that, and then we’ll wrap this up. I’ll go ahead and let Josh go first. JOSH: Okay, thank you. So David, on his way out, sort of [Chuckles] got a little dig in at my pick. My pick is a gem called Pry. And everybody loves this gem, so you probably already… so the listeners have probably already heard about this thing by now. But I have a couple of things to say about it, because it’s so cool. At one level, it’s a replacement for IRB. So it’s a read-evaluate-print-loop program. It looks like IRB to some extent; you can type Ruby expressions in it and it will evaluate them and print the results. But it’s also kind of like its own shell or console, where it has a command set that it comes with, and it’s extensible; you can add plugins and add new types of commands to it. It does things like: you can cd into a class or an object, and then future commands that you type will execute within the context of that class or object. And then you’d type ls to find methods and things like that. So that’s pretty cool. I think it’s going to end up being an important addition to the Ruby ecosystem.
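The cd/ls workflow Josh describes looks roughly like this in a Pry session -- a sketch of real Pry commands with output elided; note that show-method on C-implemented methods additionally needs the pry-doc plugin:

```
[1] pry(main)> cd Array               # change the evaluation context to the Array class
[2] pry(Array):1> ls -m               # list the methods visible in this context
[3] pry(Array):1> show-method sort    # display a method's source
[4] pry(Array):1> exit                # pop back out to main
```

Because the context really is Array, plain Ruby typed at the `pry(Array)` prompt -- like `instance_methods.grep(/sort/)` -- evaluates against that class directly.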
And it’s a shame David is not here to get all excited about Smalltalk again, but I’m going to say: Smalltalk had this really great feature called the workspace, where you would just type in Smalltalk code, select it, and “do it” or “print it” -- “do it” would execute it, and “print it” would execute it and then print the results. And you could just do anything in there. And a lot of code was developed that way. Like James said, if you can type it in IRB, it’s pretty cool; you don’t need to test it or whatever. But that ended up being our test-suite side of things. And the Smalltalk debugger actually included a workspace that was set in the context of the stack frame where you were stopped. So I think that the foundation that Pry has right now is really awesome. You should go to pry.github.com and take a look at it; there’s a really good screencast that’s about 15 minutes or so. It’s well worth watching. So what’s there in the app today is awesome. I’m really looking forward to seeing people build some stuff on top of it, or innovate with it, both through the extensible plugin command system, but also by building new contexts around it. I can imagine a tool in TextMate that, rather than being a command-line interface to the Pry functionality, has it be something that just sits in a text window, and you can select things and do the equivalent of “do it” or “print it” from the Smalltalk workspace. So that’s my first pick. CHUCK: All right. JOSH: My second pick is completely off the wall, and really kind of sappy or smarmy, but my pick is showing appreciation for your friends. [Chuckles] I was having a really crappy day yesterday. I had some really bad interactions with people that left me very frustrated and in a pissed-off, aggressive mood.
And I just kind of looked on my screen and I saw some friends who were logged on in IM, and I just picked one and just kind of randomly said, “Hey, I just wanna let you know how much I appreciate that you are my friend.” And he was like, “Oh wow, that’s really awesome.” And it completely changed the character of the rest of my day. And I ended up having one of the best days I’ve had in a really long time, just because I got out of my “God, I hate the world!” frame of mind and, you know, said something nice to somebody, made his day, and then the rest of my day just kept getting better. RYAN: That’s awesome. JOSH: One of the things I’ve learned is that, if you are having a bad day, the best way to improve your day is to make somebody else’s day better. CHUCK: Nice. All right, I wanna pick on Pry for a minute there. Didn’t you do a RailsCast on that recently, Ryan? RYAN: I did. The last one, actually. The most recent one. CHUCK: All right. RYAN: It’s awesome, I love it. CHUCK: Yeah, so go to pry.github.com, but also go check out the RailsCast on it. All right, James, go ahead. JAMES: So Chuck kind of stole my thunder, but I was just going to say that there are two ways people find out about things in the Ruby community, generally speaking; one is through Peter’s Ruby Weekly, which I’m sure we’ve discussed with Peter a bunch in the past, and I know we’ve plugged it before. It’s a great newsletter. The other way people find out about things in the Ruby community is watching Ryan’s weekly RailsCasts episodes. And both of those this week focus on Pry, so that’s why the Ruby community is currently obsessed with Pry. And I think it’s great. RYAN: I just steal my ideas from Ruby Weekly. CHUCK: [Chuckles] JAMES: [Chuckles] That’s awesome. Actually, I think yours came out first, so maybe he has to claim it the other way around. But I wanted to say that Josh is telling us to show appreciation for our friends, so I’m actually going to show some of my appreciation for Ryan Bates.
He has 280 RailsCasts episodes, and I wrote a small script today -- it wasn’t tested, I’m going to confess -- but… CHUCK: [Gasps] Noooo. JAMES: I know. But it combed through his site to calculate how much time there is in the RailsCasts episodes available. And it turns out it’s over 41 hours now -- well over. And so, you are talking about two days you could just sit there and watch Ryan Bates teach you things about Rails. That’s awesome. It’s a massive knowledge collection in our community, and like a go-to source. He’s really got it all. He shows different libraries and things that are handy; he shows just plain techniques. And while it’s all Rails focused, you get to see him program and you get to see plenty of things. I actually recommended it to somebody I’ve been working with recently, that’s still coming up and trying to get good in Ruby: “Just go watch some RailsCasts.” I mean, you can watch Ryan do the right thing for like 40+ hours. It’s really great. So I have to give massive props to RailsCasts. If you haven’t watched them, you need to go back and do it. Every time there’s a new release of Rails, he goes back and shows all the massive new features in a series of episodes. And so it’s like, if you watch RailsCasts, you are always up on what’s going on and what tools people are using. And he’ll often show multiple tools for doing the same thing -- like, I’m thinking of background processing; he showed several ways to do that using all kinds of different tools. So, RailsCasts as a resource. I can’t recommend it enough. That’s my pick. RYAN: Thank you, James. I appreciate it. CHUCK: Yeah, I think we all appreciate it, Ryan, to be honest. All right, so I’m going to get into my picks. There are a couple of things that I’ve been doing stuff with lately. Most of it, I just spent at the hospital the past couple of days, because we had that baby. There are a few apps that I’ve been using that I’ve really been benefiting from and enjoying.
One of them is harvestapp.com, and it’s what I use now for all of my time tracking and invoicing. And it’s so easy to use that it literally has saved me a couple of hours every week, just keeping track of things and putting everything together, and copying the time numbers over to QuickBooks so that I can do the invoicing through there. And it’s just been really nice. So that’s the first one, just harvestapp.com. Another one -- while I was at the hospital, actually, there were two things that were kind of lifesavers. My wife wanted me to be there, because she gets bored in the hospital, and since I’m self-employed, I can just do that. I can just be at the hospital. But while she was asleep -- because having a baby is hard work -- I spent a lot of time watching movies on my iPad with the Netflix app, and so another pick is just the Netflix app on the iPad. And the last one, the one that helped me kind of relax when I was tired and needed to sleep, is actually audible.com. And I just downloaded The Hunger Games and listened to the first six or seven chapters. I’ve read them before, but it was nice because I could just lay there; I didn’t have to look at the book. I could just close my eyes and kind of drift in and out as the book was read to me. So I have three picks, and that’s what they are. And we’ll let Ryan go ahead and do his picks. RYAN: All right, my pick is the board game Go -- not to be confused with the programming language Go. This is an ancient board game. It’s thousands of years old, but it’s pretty easy to learn. It’s hard to master. It’s a little bit tricky to get into it, because the first few hours are just spent trying to learn to play the game, which is probably not very enjoyable. But the more you play it, the more you realize how much depth there is in it. And one reason I recommend it and like it so much is because I see a lot of parallels between this and writing code and programming.
Because for one thing, there’s a certain beauty in the moves that you play in Go, and once you learn the game and understand it, you can sort of recognize a good move by its beauty. And that’s something that’s sort of hard to scientifically put a label on -- “Okay, this is a beautiful move” -- but you recognize it, and I see that in coding a lot too. You know there’s good code here because it’s beautiful, and understandable, and easy to read. And another reason I like it is because it exercises my brain in a way that benefits programming as well, and that is just looking at all the possible moves that I can do on every single move, and finding the best solution every time. And doing that brain exercise over and over again, I find, helps me in coding a lot, because it is just [inaudible] problem solving. So that’s my first suggestion, the game of Go. If you wanna find out more about it, I have a site called goversusgo.com. Actually, James helped me out on it in the Rails Rumble. If you go to goversusgo.com/resources, I have a whole list of resources there that you can use to get started on it. And if you wanna play with me, I’m on dragongoserver.net; my name is Ryan Bates, or rbates is my username. That’s my pick. And for my more technical pick, I’d like to suggest the gem Foreman. I recently asked on Twitter, “Okay, I have a lot of background processes that I need to start up for my Rails app to get it running in development,” and the gem Foreman makes that really easy to do. You just list your processes, and then run the command, and it starts them all up for you. It makes it really easy if you have a lot of background processes, like some workers, or maybe a Sphinx server or a Sunspot server, and so on. So, really nice. CHUCK: All right, well thanks so much for those. I’ve heard a little bit about Foreman. Sounds like fun. And you know, I remember -- I was competing in that Rails Rumble.
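The Foreman setup Ryan mentions is driven by a Procfile that lists one process per line; `foreman start` then launches and manages all of them. A sketch with hypothetical process names -- the actual commands depend on which background tools your app uses:

```
# Procfile (project root) -- `foreman start` runs every entry below
web:    bundle exec rails server
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
```

Each line is just `name: command`, so adding another background process for development is one more line rather than one more terminal window.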
You and James were, like, in one of the top ten apps for the Rails Rumble or something like that. I don’t remember exactly how you ranked, but I was impressed. All right, I’m going to go ahead and start the music, and thank everybody for coming on to the show. David had to leave early, but we’ll thank him anyway. And also on our panel, we had, in no particular order, James Edward Gray. JAMES: Thanks, everybody. Josh, we love you too. And Ryan, thanks for joining us this week. CHUCK: Josh Susser. JOSH: Hey guys, thanks, you are the best. CHUCK: Ryan Bates. RYAN: I appreciate it, guys. Thank you so much for inviting me. It’s been a blast. CHUCK: And I’m Charles Max Wood. And I just wanna remind you, you can get the show notes by going to rubyrogues.com. You can also subscribe to the podcast in iTunes by just looking us up -- Ruby Rogues -- and it will pull it right up. I’ve been getting a lot of feedback from people saying that they like the show and giving me suggestions. I wanna let everybody know that I am reading those, and I’m trying to do what I can to make the show into what you guys want. So keep them coming; the feedback, or even just “Hey, I like it,” is really nice. And one other thing is that if you have some idea, go and suggest a topic, or you can be a little bit more public about it and suggest something on Twitter, so people can vote on it there. And that’s kind of how this came about; Ryan suggested a topic, and he kind of got his wish, and he got invited to join us. So if you have some ideas, we’re definitely open to those, and we are considering things both on the forum and things that you send to us. I think that’s about it. Go ahead and leave us a review on iTunes, and we will catch you next week. Thanks!
