The Ruby Freelancers Show 058 – Convincing Clients of the Value of Testing, Refactoring, Documentation, etc.

Download MP3

Panel

Eric Davis (twitter github blog)
Charles Max Wood (twitter github Teach Me To Code Rails Ramp Up)

Discussion

01:13 - Is there value in testing and refactoring?
  Client Value vs Developer Value
  Unit Tests
  Acceptance Tests
09:22 - Saving time and money
  Better Code
  Maintainability
13:45 - When not doing tests hurts
14:39 - Refactoring
  "Changing Shit"/Restructuring
19:48 - Approaching restructuring
  "Leave it better than you found it"
  Use metrics
  Coding "taste" and style
30:36 - Software exists to provide value
32:33 - Documentation
34:45 - Getting clients to see the value in tests
  Characterization Tests
41:04 - Deployment
42:57 - Client Dictatorship
  Using specific libraries or tools or databases
  The Trust Factor
  054 – Red Flags with Potential or Current Clients with Ashe Dryden
  Draconian technology

Picks

Software Engineers Spend Lots of Time Not Building Software (Eric)
Status Board (Eric)
Backbone.js (Chuck)
Toy Story: Smash It! (Chuck)

Next Week

Overcoming Burnout

Transcript

[Hosting and bandwidth provided by the Blue Box Group. Check them out at bluebox.net] CHUCK: Hey everybody and welcome to Episode 58 of the Ruby Freelancers Show! This week on our panel, we have Eric Davis. ERIC: Hello! CHUCK: I'm Charles Max Wood from devchat.tv. It looks like everybody else is busy, so it's just going to be us this week! So how's it going, Eric? ERIC: I'm not that busy. I had a busy week last week; and basically, I actually took yesterday off. So, I'm just starting to kind of get back into things again, so I actually have a bunch of time. CHUCK: Yeah. I just picked up another contract so things are starting to ramp up for me again. And then I'm trying to get all this stuff together for a few other things I've got going on. But yeah, there's always something to do. ERIC: Yeah. The hamster wheel never stops spinning. CHUCK: Yup. I'm seriously thinking about going and taking a nap after this instead of working, though. So this week, we're going to be talking about "How to Convince Clients of the Value of Testing and Refactoring" and things like that. I guess we should talk about the premise really quickly of "Is there value in testing and refactoring and stuff?" ERIC: Yeah. I don't know, I don't agree with a lot of the kind of popular opinion bits about how you test everything and all that stuff. I'm very -- pragmatic is a good way of phrasing it, I'm not sure -- I think there's value there, but I don't think there's as much value as a lot of people place in it. And especially if you consider it from the client's perspective, there might not be as much value in testing and refactoring and all that stuff as developers put into it. CHUCK: Okay. What value do you see in tests? ERIC: For a client, the value that they're going to get out of tests is going to be "Regression Type Tests" - like, this bug occurred, it was fixed; there's a test to prove it's never going to come back. There's also value in kind of like "End-to-End Tests" - some people call those "Acceptance Tests" or "Integration Tests" - where the system works from point A to point C going through point B. That's where the clients are hopefully really involved with, like, writing the test, how it's going to flow, and all that stuff. In those cases, I think there's a lot of value for the client. Depending on the client and the type of software you're building, they might not want to put a lot of time into it. I've had a couple of clients where they are too busy to actually write these acceptance tests, and so I would write them based on what the client would tell me, and then as we iterate on it, like the workflow, we would tweak our understanding of the system and I would take that understanding and write the tests for them. On the other hand, "Unit Tests" - I think for a client, there's very little value for them directly in unit tests. I think almost all of the value in unit tests is for the developer - and that's for thinking through the design and coming out with a good design. I'm not saying unit tests are worthless, because they're actually really great for starting with a simple design, iterating on it, and getting to a working result, and that has value. I just think the actual unit tests themselves as a deliverable are almost worthless. CHUCK: That's really interesting. So, I'm going to take kind of a different approach. Most of the tests that I write are unit tests. I sometimes write the "Acceptance Tests", but it really has to be something that's kind of complicated from top to bottom.
I don't test everything front to back in the sense that they don't write unit test. For example, on Ruby on Rails app, I typically don't write unit test for my controllers or my views; I just don't see the value there. For the most part, the view is just a matter of "Is it showing the correct data in the right place?" and you can get that out of a unit test by unit testing the method that's going to give it the data. The controllers, I tried to keep that as simple as possible so that it really is just a matter of "Does it get the right values?" And if you're doing that, then again you can cover that by unit test on your models. I like to do the unit test and the reason is because as I move forward, then I could validate all of the business logic, which is typically the more complicated part, and I feel like that's really where the value is. And then as I'm working through things later on, if I either come in the conflict with some things -- I'm adding another feature and I can't finish that feature without breaking an existing spec or test -- then I can take that back to the client and say "Hey look, we specified that this will work this way, but I can't do this and this other thing and make them work the same way...I could give you some options". The other thing is this, sometimes I'm touching stuff and they don't realize that I'm breaking stuff and so the unit test come in handy there as well. And so it's really just a maintenance thing for that as far as maintaining the working system. I guess that is kind of a "Regression Suite", but it's not a set of acceptance test that tell you that it works front to back. ERIC: I would actually consider the way you're talking about unit tests; that's basically going to help you design. I mean you have this (I'm going to call it) your legacy unit test that talks about how the system works, that's helping you figure out how to design this new piece of functionality. When you're using unit test, like foo or just a TDD cycle where you're writing unit test on foo widget and you're writing a code for foo widget, so basically, you're iterating your design back and forth between the two. Well, foo widget integrates with bar widget and baz widget. If you have unit test for that, those unit test should also help drive the design of foo widget. So, it's a cycle. I think -- I'm not saying you want to throw away your unit test or that you shouldn't do it as a developer, I'm saying, the value for that doesn't really trickle-up to the client directly and it's very indirect. I mean you're talking about the value of the software design. Another thing is, I want to mention because Rails -- I'm not saying Rails get this wrong, but it's very hard to get it right -- you're acceptance test aren't basically like always "I'm in this UI, and I'm doing something" or full-stack high level stuff. Sometimes, the business rules and business logic are a user [inaudible] provided password with digits and characters and a special character. That's a business rule; you might write that in a unit test, but to me, I actually consider that an acceptance test. Now, the problem with the way most Ruby testing frameworks and stuff is that, that acceptance test is usually clumped alongside unit test and so it's very hard to differentiate the two. 
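Eric's password example is the kind of rule that usually ends up in the unit-test directory even though it encodes a business requirement. One way to keep the two apart is RSpec tag metadata; here is a minimal sketch, where the PasswordPolicy class is a hypothetical stand-in rather than anything from the episode:

```ruby
# spec/password_policy_spec.rb -- run with `rspec`
# PasswordPolicy is a made-up class used only to illustrate the idea.
class PasswordPolicy
  def self.acceptable?(password)
    password.length >= 8 &&
      password =~ /\d/ &&           # business rule: at least one digit
      password =~ /[^A-Za-z0-9]/    # business rule: at least one special character
  end
end

# Tagged :acceptance so these client-facing rules can be listed and run
# separately from developer-only unit specs (e.g. `rspec --tag acceptance`).
RSpec.describe PasswordPolicy, :acceptance do
  it "accepts a password with a digit and a special character" do
    expect(PasswordPolicy.acceptable?("s3cret-pa55word")).to be_truthy
  end

  it "rejects a password made of letters only" do
    expect(PasswordPolicy.acceptable?("justletters")).to be_falsey
  end
end
```

Running `rspec --tag acceptance` then reports just the business-rule specs, which is one way to hand a client something readable without wading through the developer-facing tests.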
But, I consider that acceptance test because it comes from the business side; it actually dictates how the system has to work, and if you break that, it basically breaks the business requirement versus regular unit test are kind of the more internal like how stuff is working. CHUCK: Yeah, I can kind of see that. Most of my -- I try as much as I can to make my unit test more black-box unless white-box. So I'm usually not stubbing things out or saying "I'm calling these different things". All I'm really saying is "When I put this value in, I get these values back and I get whatever side-effects I expect". Yeah, it's kind of the same thing. It's just validating, basically what you said, validating a business rule that I've received from the client. But yeah, I don't usually see a lot of value with testing further app in the models, unless I'm doing something really complicated or different from the way I usually approach things. And even that, I'll usually extract that out into its own class; it's not canonical model. In other words, it doesn't inherit from the Rails libraries that provide models and it'll still encapsulate logic and I can still test it. And so, I can kind of see that we're sort of talking about the same thing. ERIC: Yeah. And I mean I just want to make distinction clear between the 'value to the business', the 'value to the developer' especially because I've done a lot of recent work on kind of like API stuff like there's a JavaScript side of the app and then there's a Rails side of the app. So on the Rails side, we kind of have an API that were saying "We're going to agree to respond in certain ways to different inputs so the JavaScript can be developed at the same time". In that case, there's a lot of value in testing the API. So we actually have, not a lot, but we have pretty significant amount of unit test for our controllers because the controllers what's interacting with that and it's not actually like model behavior. And so it's not, like you said, you can't extract it out to another service object or some other class that's not a model. But even in that case, like those are still unit test, the only thing the business is getting out of this is, an API that behaves nicely, which is good! It's a good thing and it's a good thing to have, but we have to have an acceptance test to kind of take to the client to say "Here's what the API can do, here's the capabilities of it, and that's a higher level test that we have". CHUCK: One thing that I want to talk about really quickly because you keep bringing up the difference between the business value and the value to the developer, and to some degree, I think the two overlap. ERIC: Oh, yeah! CHUCK: So, having those unit test that do benefit me, in a lot of cases mean that I can just blaze add, and know that I'm going to get feedback if I break some. So, it can wind up saving them some time and money - some of my time, some of their money if I'm billing hourly. Things like that, or if another developer comes in and there's a functional test suite on there, then they can come in, they can run over test, and they can just plow ahead. And so, I think there really is some value there beyond just giving feedback improving that the business or that the application works the way the business dictated that it should. ERIC: Yeah. And what I tell my clients -- because the whole topic of this is 'how you convince the client of the value', and in this case, of test -- I don't! 
I say "Look, I'm going to have basically your business requirements coded into some kind of acceptance test or acceptance test (some suite or something) that you can run the proof that the software works in the way you want it to work". And I say "As part of the way I develop, I have to write unit test for my code". And I tell them my experience: I've done it without writing test before, I've been on projects for the test or [inaudible] or they're not that good, and every time, the project is derailed because you don't have regressions or the designs is not as good as it could be, or they bring in new developers and it takes 3 weeks for them to get up to speed, or various other reasons that I've had in my experience. And so I tell them like "Look, I'm going to write this unit test, but these aren't a deliverable; these aren't something which is going to give to you. You can be confident that they're going to happen concurrently with the actual software, but this is not like something I put in the contract as deliverable file of unit test." And I tell them like "That's not up for negotiation". I've had a client that said like "Well, you don't need to write tests, it's going to be simple", and I said "Then, I will not do this project with you because that's how I develop". And eventually, we've talked through it and figured it out that they were more concerned about me wasting time on it and then so I actually educated them of "Look, I can write it without test and it'll take me 5 hours; I can write it with test and it'll take me 2 hours and it'll be better". Once they understood that the way my development goes and the way how I write test as part of it, they were like "Okay, that's a better deal for us". But it's still wasn't an actual deliverable; it was kind of just almost a byproduct of how I work. CHUCK: Yeah, that's more or less where I'm at. I don't know if you weren't explicit about whether or not you do test first, but I do and it's the same thing. It's just "Look, if I write the test first, then you get 2 things out of it. One is that, I'm working on a process that I'm used to, it won't take any longer than me writing it normally, the code will come out better (and all of the benefits that you outlined). The other benefit is that, as I move ahead, I know that everything works because the test continue to pass and I don't have to worry about the complexities of whether or not I'm breaking things as I move ahead. And so maintainability works out better for you. And then down the road, if somebody else has to pick this code up because I get hit by a bus or what have you, then they can just do that. They can just come in; they can pick up where I left off. All they have to do is run the test to make sure everything still passes, they know that my code does what it says it does, and they can move ahead." That's usually it. But yeah, the concern is over the time almost always. "I don't want you wasting time writing tests". And yeah, I just have to explain to them "Look, that's the way I work and these are the benefits I think you're going to get from it”. But ultimately, it's going to take me twice as long to figure out how to do it if I'm not using the test as a mechanism for doing the design and verifying my code. ERIC: Yeah. And I found as I've gotten more experienced -- I've been doing TDD or testing of maybe after on some projects, I've been doing that (Oh God! 
I don't know how many years), I mean as long as I've been doing Ruby, every time I don't write test, it always hurts like it's either hurt and painful while I'm doing it or when I come back to it, and I look at the design and like "Why didn't I [inaudible] these design decisions?" But, I found it hurts way more if you don't have kind of the high level acceptance stuff. I've had some where I, I don't know why, I got really heavy into mocking and mock everything up, I've had a bunch of unit test, but I never wrote high level test. So when I came back to the project 6 months later, it was a mess. Thankfully, it was my own internal project and so it was my own mess to clean up; it wasn't my client's mess. But after that, I basically sat down [inaudible] how I do testing, why I do testing? And that's when I kind of came to this, the high level acceptance test for the client and then you got your lower level, which is just kind of your internal work stuff. CHUCK: Yup. What about refactoring? How do you convince the client that refactoring is worth them paying to do? ERIC: It's kind of the same thing like I look at those hand in hand. I mean refactoring is, you're basically cleaning up a bit of code either before you work on it or right after you worked on it, and that's just part of how I work. It's not a task, but it's kind of like basically another step in the development process. I'm actually talking about the real term of refactoring, not basically what some people call refactoring, which is actually pre-writing and throwing out and starting from scratch. But actually, little things here and there not changing the way something behaves externally, but changing its internals. And that it's just, I just kind of -- whenever I give estimates, I always group in time for refactoring, time to do any acceptance test, time to do unit test, and then like we talk about on other should - my best guess plus padding and what [inaudible]. So refactoring, it's just whatever step five of my process. And so I always have some in there; sometimes I can do a little more, sometimes I can do a little less. But, nice about refactoring is, if you can't do it all at once, you can always come back later and work on it when you're in the same area. And so I just tell my client, "That's how I work, it's not really an optional component that we can just take away and make the time estimate shorter. It has to get done." CHUCK: Yeah. One thing that I tend to do is, most of the refactoring, I don't even tell the client about, to be perfectly honest. It's usually something small, I'm in there working on that code anyway, I re-arrange it so that it works better for me in the current context. Even if I'm taking 5 or 10 minutes to extract the class or extract the method or just make it so that it's generally more readable or more workable one way or the other and getting some characterization test around it in some cases, I usually don't spend more than 10 or 15 minutes at a time doing any given refactoring, and it's usually just enough to make the code reasonable to work with. And then if there is some kind of major refactoring where I'm looking at, I'm going, "Look, there is some serious structural concerns here". And I've seen a few projects where I went to them and said "We're not going to change the way things work me external side, but if we make these changes, we'd just spend 2 or 3 hours making these changes, we'll make it up in a couple of weeks". 
And explaining to them how, when you're working on code that's reasonable, that's clean, that is elegant in that way really just make sense when you look at it. Working on that code is much more pleasant, but it's also much more efficient. All you really have to do is explain to them where it's going to save the money down the line. Most of the time, the clients are reasonable about that. Some of them don't always understand that they're basically saying, "So you want to basically take some working code and rewrite it? Or, rearrange it?" and they just don't understand the concept there. But in the long run, if you can convince them of the value there and help them understand how it will save them money down the road, most of them will just take your word for it and go forward. ERIC: Yeah. I had the same experience, but to me, that is not refactoring; that's changing things or 'changing shit' is actually the technical term. [Chuck laughs] ERIC: And, it's good! Almost every project can take that. And that's usually because there's been bad decisions or maybe an incorrect decision that was made in the past that caused some kind of trickle through the system and caught stuff to accumulate. I don't consider that refactoring. I'll say like "Look, we need to change this because the design we have is sub-optimal for what we want to do". And I kick it up to them as like "This is (if you use Agile or whatever) the story or this is a new feature or some task", and I lead it up to them like "Look, you can make this change, it'll take 4 hours, but it's going to save us 2 hours on this other thing I have to do next week for sure, and it'll probably save us more time throughout the next few months". And I don't call it refactoring because whenever I see that called refactoring even if I think it's considered different, the client starts to equate that; they're changing everything to refactoring. And then they think like the little 2 or 3 minute like extract method type thing, refactoring are in the same idea, and they're not - they're two different concepts. And so, that's how to kind of get that immediate reaction to "Oh, I don't want you to refactor anything, just write a write the first time". CHUCK: So it's refactoring and then maybe restructuring or something? I don't know -- ERIC: Yeah, I think restructuring is the PC way of saying it, but yeah, I like changing shit; it kind of gets that good emotional response in there. CHUCK: [laughs] Yeah. Many of my clients have been professional enough to where I don't know if I would use that term, but it's definitely interesting. One thing that I do want to ask you about because I'm kind of looking at this kind of situation a little bit and I'm not 100% sure how I want to handle it and so I'm going to ask you what you think and then I'll kind of go off of what I'm thinking of what I'm going to do. Let's say you have a project where you can see that there are some pretty major structural things that you don't think are great idea. For example, maybe you have a really really large core file that contains most of the classes and things that you're dealing with and it's just not well-factored, object-oriented code. I mean if it were just me, I would just go to the client and say "Hey client, I don't think that the way this is structured is a great idea, and here's why..." But, let's say that there's another developer on the project who wrote most of that code. How do you approach that and approach maybe some egos and things that may or may not be there? 
I'm not really sure if this is the situation on the project that I'm talking about, but how do you do that in such a way to where it's not a personal thing for the other developers who have been working on this? ERIC: That's really hard because you have, at least in your opinion, you have some kind of technical problem -- this object that has a whole bunch of stuff on it, it's not very object-oriented, whatever -- that's a technical problem, and you're bringing into the fact that if there's egos, other developers, then you now have kind of the interpersonal politics. And if you're trying to go to the client, then depending on how the development team set up; you might have the problem of going over someone's head. So it depends on your situation - "This project's different, yada, yada, yada..." Let's see, typically, if I start a project, I try to be very passive; I don't try to push for things like that. I'll notice them and I'll write them down in like a file for the project -- that's sub-private files. So I was like "Okay, what is this on the 15th or whatever; I noticed this class is way too big", and all that stuff. What I'll try to do is I'll try to avoid proposing like a big rewrite of it and I'll just, whenever I have to work in the class, I'll do the Boy Scout role of "Leave it better than you found it". And so, if I have to work in the method, I'll leave the method better; and over time, I'll start to like make kind of helper classes or helper things and start to extract stuff out, too, there. So, you get kind of some of the bad code out of that one into another one, and then hopefully like -- and this is what I basically say -- "I'm very passive". I try to make it so that I'm doing it in such a way that the other developers can see it and can say 'Oh, that's a good idea! Maybe we should do that', and try to let someone else kind of come up with the idea who's been on the project for a while to do the refactoring. CHUCK: Okay. ERIC: Because I found, a lot of times, sometimes the developer just doesn't know better. They might have just -- like the classic example -- they might have put everything in the user class just because the user is the core part of the system, the user does everything, makes sense! And, give it a couple of months and that user class is 5000 lines of code; they might not understand that you can take stuff out, put into service objects, put it into new classes. I've met some developers that didn't know that you can have a class in app models that is not an active record class. And, part of you doing this stuff passively on the side, it will kind of show them like "Look, you can do these things", and it's like education by example. And eventually, sometimes the developers will see it and they'll start to kind of like start to do it on their own and you don't have to even bring it up to the client; it keeps it kind of developer centric. But sometimes, and this happens if you are more on the consultant level -- you can go to the business owner and say like "Look, I'm trying to clean up some of the stuff here, and there's a lot of old decisions in this code. For example, this user class, it's actually causing a lot of pain (and then if you have metrics to back it up)". 
So of every change, 75% of them have to edit this file, which means that we've had 6 merge conflicts which caused 18 bugs in production. Come at the business owner with how it's actually hurting the business; that can actually get you a good bit of the way to them saying "Okay, we're going to redo this". The problem is, if you're jumping over people's heads and you tell the business owner, who then dictates to the developers how to develop, you get a lot of political kickback. So that's why I start passively; and then if it doesn't seem like the developers are getting the clue, I might go up a little bit and say like "Business owner, we need to be faster on this; it's just not working". CHUCK: Yeah. The only real concern...So I was thinking that I would probably -- because it seems like the person that I'm working with on this project, I haven't talked to the other developer yet, but the person that I am dealing with on this project, he seems pretty reasonable, pretty open to my opinion -- but like I said, I really want to be able to work with the other developers on the project. So I've considered actually doing what you suggested; we're working on completely different parts of the code, and so there's not a whole lot of chance that he's going to come into my code and say "Why did you do it that way?" But at the same time, I don't really want to create like two different approaches to solving the same -- ERIC: Like siloing it... CHUCK: Yeah. And so "All the code that Chuck wrote works this way and it's organized this way; and all the code that Joe developer over here wrote works in a different way", and so anyone coming in to maintain the project now has to understand both paradigms; even though they both work and more or less do the same thing, they've been arranged in a different way. And to be honest, the code is actually really good; it makes sense the way that it flows. But, the organization is very procedural and it doesn't split up the responsibilities well so that you can actually really understand "Okay, here's what we're doing, here's where all this logic is encapsulated, and here are some of the expectations". ERIC: Well if that's the case, you might want to actually just take a little bit and think about it. Is it actually a problem, or is it that you have your own taste and this code is not the same taste as yours? It might be perfectly good code, it might be procedural, but you might have a stronger object-oriented taste. It's the standard vi versus [inaudible], tabs versus spaces type idea, like "Could you leave the code alone and would it be functional? Could you work in it?" because it might just be specific to you as a person. CHUCK: Yeah, I thought about that, too. ERIC: Yeah. And one thing -- I don't remember who I got this from -- but, it's similar to the Boy Scout rule. Whenever you make a contribution to open source code, especially if it's like your first one or your second one for a project, a third party should be able to read the code that was there and your code, and they should not be able to distinguish that two people wrote it. That's kind of the idea. If you have your own style of writing Ruby, you don't want to push that on to the open source project; you want to match their style as much as possible so that whoever's working on the project has a consistent style. And myself, I have adopted that as I've gone through different codebases. Like, I like to put my accessors on one line each because I know that's going to show way better in version control.
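Version control is also where the kind of metrics Eric mentions earlier -- what share of changes touch one suspect file, for example -- can come from. A rough sketch, assuming a git checkout; app/models/user.rb is a hypothetical hotspot standing in for whatever file you suspect:

```ruby
#!/usr/bin/env ruby
# churn.rb -- rough file-churn report from git history (run at the repo root).
# SUSPECT is a hypothetical hotspot; point it at whatever file you suspect.
SUSPECT = "app/models/user.rb"

total_commits = `git rev-list --count HEAD`.to_i

# `git log --name-only --pretty=format:` prints only the changed file names,
# one per line, with blank lines between commits; count how often each appears.
touched = Hash.new(0)
`git log --name-only --pretty=format:`.each_line do |line|
  file = line.strip
  touched[file] += 1 unless file.empty?
end

share = 100.0 * touched[SUSPECT] / total_commits
puts format("%s was touched in %d of %d commits (%.0f%%)",
            SUSPECT, touched[SUSPECT], total_commits, share)

puts "Top 10 most-touched files:"
touched.sort_by { |_, count| -count }.first(10).each do |file, count|
  puts format("%5d  %s", count, file)
end
```

Numbers like that, paired with a count of recent merge conflicts or production bugs traced to the same file, are a lot easier to take to a business owner than "this class is too big".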
On one of my projects, though, I've seen somewhere they have attr_accessor and then they list 50 of them, and it's one long, 200-character line. Personally, I think that's disgusting. But, that's how the project did it, and for me to change it, it was such a -- not a culture shock, but it was enough of a taste change that they wouldn't have liked it. So, I adapted my own style to be how they were and I just moved on; it wasn't a battle worth fighting. And so, that's something you have to think about, too, especially if it's a team that's gelled or a team that has been going for a while. It could be that they've talked about the stuff and that's the style that they like the most and that works the best for them. So, coming in to try to change it might not be the best thing for the project. CHUCK: Yeah. I told the project manager -- I'm not really sure what his role is, but he's kind of acting as the project manager anyway -- I basically told him that I wanted to work in the code as it was for a week. So, I'm going to follow the patterns that they are using for a week or so just so that I really understand what they're doing and why they're doing it. Hopefully, that'll also give me an opportunity to work with their developers a little bit. From there, then I can actually start making these calls. But, I brought it up because it was on topic for our discussion here, and I thought that it would be interesting to go into. It should be interesting to see how it all turns out; and I am trying to figure out how much of it is [inaudible] I would have written it completely differently versus whether or not it will actually improve the code both for readability and functionality. ERIC: Yeah. And that kind of goes back to the standard programmer thing, like if you look at anyone's code, you inevitably hate it, and it's disgusting, and then you realize it was your code from a year ago. I've looked at a bunch of Ruby code from a bunch of people and found that some people have a certain style, and that style is so distinctive that you can -- for a while there, I was able to look at any code and I could say, of the 6 developers that I was looking at, which one it was, just based on like "this guy is from a very Unix C type background", "this guy is very object-oriented", and "this guy is very much about writing scripts; just bang it out and get it working". All of them are great developers; all of them produced really great, functional software. That's kind of, if you take it back to the business, like that's the end result: the software functions, it does its job. Part of your thing is to make sure that it can keep functioning, it keeps doing its job over time, which is the maintainability. That's where some style things could actually make it better, but depending on who's working on it, it might make it worse. So like in your project, if you want to make it your style and say it appears to be better -- it could be that when you're done with the project, the people who are left can't maintain your style, and it might actually decay and become worse, too. CHUCK: Yeah, that is always possible. And it's interesting that this is all coming right back around to what value you're giving to your customer. ERIC: I'm of the idea that software is there to provide value; it's not there to be elegant or pretty or to sit on a pedestal. It's there to do something for someone. At the same time, it can still be elegant and pretty, but that is a byproduct of doing something.
You don't design software, at least as a freelancer for a client -- you don't design software so you can say "Oh, look how elegant this class is!" because a class is worthless if it's not in something that does something. CHUCK: Yup, absolutely. It's kind of funny that you brought up that whenever you look at somebody else's code, it's like "Oh, this is crap!" In fact, a friend of mine and I, several years ago, we kind of had this idea where we were going to put up a code analysis tool, and basically, it would say "You can submit any code from any language and we will analyze it for you." You'd put the code in, you'd hit the button, and it would immediately return with "This code is crap". That's all it was; it would just come back with "This code is crap". ERIC: Yeah, and I mean that's the thing. Code, it's written by humans; and humans are not perfect. [Chuck laughs] ERIC: So, you're going to get something that's not perfect, and there's nothing wrong with that. And, the fact that we're not perfect means that you're going to have two pieces of code that are completely different, but they do the same thing. And so, it's completely subjective; I mean there's nothing objective about it. The only objective part of it is "Does the code run and do what it's supposed to do?" And even that could be subjective based on who you're talking to. CHUCK: Yup, it's a very interesting conversation. Are there any other things, similar to tests or refactoring, where clients tend to have some problem accepting the value that you're giving them? ERIC: I've had a little bit of pushback a couple of times on documentation, and I'm not talking about like the specs and requirements of the system, but more along the lines of like a couple of pages in the readme file, or going back through after you've done some heavy changes and actually adding doc comments and stuff to kind of help people understand the decisions made. I had a couple of clients that said like "Oh, it's not worth the time. We need to plow ahead; we have all this stuff to do". I think a lot of that just stems from the fact that they've never had to go into like a C codebase and try to understand something, and haven't seen that you can stare at a method or a class for hours and still not get how it works. CHUCK: Yeah, that makes some sense. And documentation is another one of those fuzzy areas, to be sure. I mean some clients, they really care about maintainability and visibility and so they like seeing documentation; and then others, not so much. ERIC: Yeah. Luckily with Ruby, you don't have to do it a lot; I mean a readme of like getting started on the project and then the high level concepts. I like to have kind of like an index of like "Hey, are you messing with authentication? Look in this class. Or, are you messing with how...(I'm trying to think of ideas that are in the app)". But it's basically the high level, like "If you're looking for this type of stuff, these are the parts of the code where you want to go look", instead of having to dig through it all. I've found that, plus like really deep algorithm stuff of like "why you're using inject versus [inaudible]", can get you probably 90% of the way on documentation. A lot of that I actually write as I'm coding; like, a couple of times on the hardest stuff, I'll actually write the documentation of what this next line is for before I actually write and start iterating on the next line.
And if I'm lucky, by the time I finished iterating on it, it's struck that to a method that actually defines like the method is the same as the documentation I wrote and so I can even get rid of the documentation. But, you're not writing very much in Ruby; it's very very (I don't know) self-described, and I guess that's the best way to describe it. CHUCK: Yeah. I've picked up several projects that actually include a fair bit of JavaScript as well, and a lot this applies there, too. It's rather interesting -- I need to do better about testing my JavaScript, to be perfectly honest, but when you pick up a project that doesn't have tests, you tend to get a little bit more push back because, again, they're not really keen on you writing test because they haven't valued it in the past. Is there a way that you approach that? Because for me, it really tends to be kind of a hard thing to do and I give them a lot of the same reasons, and most of the time, they'll agree grudgingly just because it's "Well, if that's your workflow, then fine". But if they don't have test, if they haven't really valued that, they don't really see it, and it's already existing code, do you approach that any differently? ERIC: So, you're talking about existing code that mostly functions in going back in writing test for? Or, are you talking about writing test for new code? CHUCK: Both, but in the sense of 'on an existing project with this existing code' that has no test. ERIC: Okay, so the easier case is, like new features and stuff, for that case it come back to my previous statement, "I cannot work on this without writing test. If you want, I can write the test as I do my development, my normal way, and then throw the test away and so no one else will get the benefit of those test; I can do that. I think that's a bad decision on your part on the project. But, that's just how I work. If I don't write test, I'm going to basically be ad hoc testing, going on with the debugger, puts lines, all that stuff anyways, I might as well have a well-structured test written for this new feature". And that leads into the existing code. If there's existing code, there's not a ton of value to go back and add test for it, unless there's problems with that code or you're going into that code as it is. If there's no problems with that code, you can kind of quarantine it and isolate it and hope something doesn't break. To be honest, 99% of the time, something in there will probably break and leak out. When that happens or when you have to go and modify the old code, I'll usually come in with a debugger on one pass and just kind of poke at things. And then on the second pass, I'll actually go like this is like one TDD cycle, or whatever. So, I'll do debugger, see what some of the inputs are, and see how stuff is kind of flowing, and then I'll write a test around it. It's probably the hardest test you can write, when you're going into existing code with new test. But, I use it as kind of an understanding. I don't know how much, but I probably throw away a quarter or maybe even half of those test because I came in with certain assumptions that as I start a test unit and running the code kind of more in isolated and controlled conditions, I found that a lot of the assumptions were false. And they could have been false because maybe I misread the code, maybe the code like a methodname is called 'add' but it's actually 'subtracting', which I have seen... [Chuck laughs] ERIC: Yeah. 
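A spec that pins down that kind of discovery without touching the code can be as small as the sketch below; the Calculator class and its misleadingly named add are made up for illustration, standing in for the real method Eric describes:

```ruby
# spec/calculator_characterization_spec.rb -- run with `rspec`
# Hypothetical legacy class standing in for the real code under test.
class Calculator
  # Misleadingly named: this method actually subtracts.
  def add(a, b)
    a - b
  end
end

RSpec.describe Calculator do
  # Characterization spec: it records what the code does today,
  # not what the method name suggests it should do.
  # TODO: #add actually subtracts -- probable bug, flagged but not fixed here.
  it "returns the difference of its arguments when #add is called" do
    expect(Calculator.new.add(5, 3)).to eq(2)
  end
end
```

The production code is left alone; the spec and the TODO just give later work something proven to latch onto.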
But the thing is, that's what I meant about when you're doing unit tests, they help the developer because they're helping your understanding; helping you get a grasp of how the stuff works. Once you have that grasp, you can kind of put that into unit tests, and so that's going to put a little bit of value into the project. But, you need to come in with a high level acceptance test to actually say like "Okay, we have an acceptance test of: given this big ball of code, for this one input, we get this output". And that might be one of a hundred different inputs it can take, but now, the business has value: documenting and showing that with that one input, it works the way it should. And so, I don't remember the term for it, but you kind of want to try to poke at that thing, get some tests here and there as you're working in it. And in the ideal case, over time, you're going to have half of that thing covered, and you can either get rid of it and replace it with something that's fully tested, you can just not use the other half, or you could just kind of let it go and, over time, the coverage and the testing of that will go up and get better. CHUCK: Yeah, that's generally my experience, too. I call them "characterization tests", I don't know what the technical term is; they're unit tests where you're basically just asserting that it behaves in a certain way based on your observations. And so you characterize the code, but you haven't actually tested it to verify that it does what it's supposed to do. You're just saying, "I can count on this doing this with this input -- " ERIC: Oh! Yeah, I call it -- I don't have a good name for it; I might call it [inaudible], where you wrote tests to conform to the existing implementation of X. CHUCK: Yeah, I don't change the code. So even if it's a method called 'add' and it actually subtracts, I don't change it, I don't refactor it, unless it's being a problem for me. It's just to get something around it that says "You can latch on to this and know that it's proven to at least do this much". ERIC: Yeah. CHUCK: And then from there, then I can put assertions around anything that I add. ERIC: Yeah, and that's the same thing I do; I won't change the code. I've actually written a spec kind of like what I talked about. The spec says "When you call the add method, stuff is subtracted", and in the spec, I say "TODO: This is a buggy implementation". I guess the only change I make to the code side is, I might throw a comment in there with a TODO saying "This is a bug" or "You cannot reach this line" -- I mean, it's always false, like you're never going to get into this branch. And so it's kind of, especially in Rails because you have the (what is it?) rails notes Rake task or whatever, it will flag all those things, and (because most of my projects are in Jenkins and stuff) you can see the graph go skyrocketing when I start exploring new code, with all the TODOs and all the potential problem areas that I flag like that. But that doesn't actually change how the code evaluates, so it's really safe to do; and it kind of shows and documents the understanding that I have. CHUCK: Yup, makes sense. Are there any other things -- I know I asked this before -- are there any other things around tests or refactoring that you have people push back on? We talked about documentation. ERIC: Deployment stuff. I tend to get a little bit of kickback because I want to deploy as soon as I can. If I can deploy on the first day, I'm happy.
That's because I've done system administration for about as long as I've written code and I know all the moving parts and calls can break. And so, I want to be able to get it so deployments moves still at 2 am in the morning when there's a critical thing, you don't have to worry about "Oh, I need to copy these files, zip it up, send it to the server as [inaudible] extracted. I wanted deployment to be either cap or pushed to Heroku or whatever. And so, I try to make the clients understand that there's business value even putting time upfront to get in the servers and deployment stuff ready. Even if it's not to the production side, if it's to kind of a station side of production, it's like "I know as a developer, I can get a hotfix into production quickly". The biggest case I say is "Look, if there's a security and vulnerability that comes out this morning, we need to have that fixed on the servers by noon", that's almost too blatant for a lot of the vulnerabilities I can come out. And if you kind of describe some of them to them like "Would you want all of your customers' credit card details exposed? Would you want their social security number exposed?" all that stuff. You can kind of prove the business case without really getting into much depth with it. And so on the project, I try to make sure deployment is very automated and, with my own stuff, I use Puppet for a lot of things. So it's like, everything is written down; everything has code that can be run by anyone in the organization that semi-DevOps or developer minded. CHUCK: Yup! ERIC: Well, what about you? Is there other things you have to convince your clients about technical wise? CHUCK: The only ones that I ever get push back as far as technology or technical stuff is, a lot of times, they have it in their head that they want to use this specific library or tool or database or whatever, and they really haven't talked to anybody who has any clue or expertise in how that all works. So -- ERIC: So, like a buzzword factor. CHUCK: Yes, exactly! So they come in and they say they want NoSQL, they don't really understand what tradeoffs are between NoSQL and something like PostgreSQL, even SQL-like, depending on what they need and what the expectations are. So, they want NoSQL or they've heard about EmberJS, and thay want something like a simple to do app. It's like "Well, EmberJS is sort of overkill for that, and if we went with a simpler framework, they'd just give us the barebones; we could do it in half the time and things like that". And so, usually talking through that kind of thing with the client and just explaining what the tradeoffs are, once it becomes apparent that you really do know what you're talking about, most of the time, they'll capitulate; they'll just say "Okay, well obviously you're the expert, that' why I'm hiring you", and then they'll go with it. And sometimes, they don't! 
When they don't, then depending on how obnoxious it is, if I think I'm going to have to be fighting them over and over again -- not on that particular topic, because once I agree to do whatever it is, I'm not going to fight them on it; I will try and help them make the best decision for them -- anyway, if it looks like something that's going to come up over and over again, "Well, first we were fighting about the JavaScript, and now we're fighting about Rails versus Sinatra, and now we're arguing over the database, now we're arguing over this and that," eventually, I'll just be like "I really don't think this is going to work because we're getting to the point where you're going to start telling me how the code has to be written. It's not worth it!" But a lot of folks, you start talking to them and they'll figure out pretty fast "Okay, he knows what he's talking about; he's trying to make the right call for me. I trust him", and then it just works out. ERIC: Yeah, that's a big thing - it's the trust factor. If the client is wanting a specific technology just because they've heard about it, then -- a lot of times, I can sit down and educate them and show them like "Okay, if you go with EmberJS, here are the advantages, and that's probably what you're looking at and why you're bringing it up. But, here are the disadvantages. Why do you want this? If you want to put EmberJS onto an app that connects to a COBOL database, that probably is not going to be a good idea. Maybe PHP might be better to connect to the COBOL database, I don't know!" It depends on how they react to it; if they want to have an actual discussion and go through it, and at the end of it, like you said, they're still like "I want EmberJS", then yeah, I'll probably move on and do the project with that. But if you get the feeling like "Okay, this week, it's EmberJS; next week, it's going to be NoSQL. The week after that, what's it going to be?" -- they're basically trying to micromanage and trying to run the technical side of the project without having the knowledge to run it, and that's just a recipe for disaster. CHUCK: Yeah. For me, where it really comes down to is, it comes down to "Hey look, obviously, you don't trust me to make these decisions," or "You don't trust me to give you all the information to make these decisions. And so, this isn't going to work out, and it really has nothing to do with how annoyed I'm going to be the 6th time that we discuss a technology decision and you overrule me". And that's obnoxious, don't get me wrong. It drives me crazy. ERIC: Yeah. CHUCK: But ultimately, it's "Look, you need somebody who can come in here and agree with you on all this stuff and just do it - that you feel like you can trust their opinion, because obviously, you don't trust mine". ERIC: The clients I work with, I don't work with them because they're like the best people in the world and all that; I work with them because I feel I have skills that I can use for them, to try to give them an advantage in their business, and they can give an advantage to my business -- whether it's, the first part, I can get money from them so it's revenue for my business, but also I can learn, I can see things, there's a whole bunch of other things I can get. And if it's going to come down the pike where they're using me just as someone who can type on the keyboard and fix syntax, it's not a good fit. Then the only benefit I'm getting is money, and you have to pay a lot to kind of make up for all of the other factors that I'd normally get from a client.
And so, if they're going to be -- I've fired a client that was like that; they were almost dictating the changes. It was like "Okay, you need to go to this line and put in this JavaScript code". I'm like "Really? You're going to tell me what code I need to write now?" CHUCK: Yeah, and ultimately then I'm not providing you any value because you can open up a text editor, you can go in, and you can stick it in. Git isn't that hard, all of this stuff. I mean, really, if I'm not going to provide you the value that I feel like I can offer, then I don't want to work with you. And really, it comes down to "That's my thing". I want everybody to come away feeling like they got way more value from me than I charged them. ERIC: Yeah. And that comes back to our previous show where we talked about, I think it was the "Red Flags with Clients" one -- because I had it reversed -- like the good thing is, I come to a project where I have a lot of the technical experience and I can help them on the technical matters. They come to the project with specific knowledge of their business and how their business works. And only by combining the two can the project have that whole synergy that's better than either of our knowledge alone. If they're going to come with the business side and also tell me how to do the technical side, they're not able to take advantage of what I can give them. So, it's going to be a worse project, it's going to be a worse result, and it's not going to make me happy. CHUCK: Yup. Yeah, I don't want my name on it if that's the way it is. And, I don't want them coming away saying "It didn't work out because he didn't do his job", where in reality, it was just that things didn't mesh quite the way that they needed to in order to make it a success. ERIC: Yeah, and this is like a real quick tangent. A couple of times, when the client is really draconian with their technology stack, you might want to talk to them and kind of dig in -- "sit down on the couch and tell me about your past" type stuff. Because I've found a lot of times that the client might have just been burned by a freelancer that they told "Use whatever stack you want", and that freelancer ran off and did the shiniest new thing that they wanted to learn, and actually couldn't solve the problem for the business. So basically, the business side thought "I gave them all this rein and this guy ran off. I got rid of that guy; this next one, I'm going to really hold on to the reins tight and not let them do anything, and basically dictate from above what happens". So if you can actually talk to the client and figure that out, you might be able to realize that's the pain they have. And then just by the way you communicate with them, you might go and say like "Look, I kind of understand you need something a lot more stable than the new hotness. So, let's use some of this older technology; it might not be as fast and efficient, but it's going to get you what you need and it's going to be stable". And so, that's something to think about, too. Every client has a past history, and if they have a past history of being burned by new tech, that could be influencing how they act now. CHUCK: Yeah. And the other thing is, I've found with some of my clients, they actually were getting advice from somebody else who had various levels of expertise in different things. ERIC: The nephew, right? CHUCK: Or, the brother-in-law.
And, the guy's been in the industry for 30 years, but may not be completely up on what's going on with these different technologies. So, he's telling him to do some things, and then I'm telling him to do some things, and the decisions aren't being made in a way that really does help the client. ERIC: They're probably going to go with the person that they have the most trust in, especially if it's family; it's going to be, well, in most families -- CHUCK: Well, it's not me...yeah... ERIC: Yeah. Most of the time, it's going to be their brother-in-law's friend or whatever. And the brother-in-law's friend might not know anything; they might know a lot, but they might not know enough of the business to do it - like it could have just been a 5-minute conversation over coffee. CHUCK: Yeah. In the situation that I'm specifically thinking of, the brother-in-law actually worked for a large company that had a very popular desktop app, and we were discussing web apps, and so he was really going off of hearsay. Most of the time, he was talking to enough of the right people to get stuff right, but every once in a while, he was talking about things that he really didn't understand. ERIC: Yeah. I mean that's basically getting advice on how to fix your car from a rocket scientist. CHUCK: Yeah. And they both combust fuel, but...[laughs] ERIC: There are parts that are the same, but they're not the same; they don't work the same in both systems, and you're going to get odd results. CHUCK: Yup. ERIC: Unless, you have a rocket car. CHUCK: Yup. Anyway...Alright well, let's get into the picks! Eric, what are your picks? ERIC: My first pick is an article called "Software Engineers Spend Lots of Time Not Building Software". You can agree or disagree with the actual data because, I think, they only surveyed 443 engineers, so it's a very small sample size. But, they surveyed 443 engineers, found out how much time they spent on different activities, and it's interesting just to see the breakdowns. Myself, I know I have a different breakdown, but I can see like 3 1/2 hours waiting for a build to complete, 3.7 hours waiting for tests to complete; some of those actually are pretty realistic. But it's interesting because it's fairly recent data, and I gave this to one of my project managers and he laughed and said "Yeah, this is exactly why I pad the numbers that you guys give us so much when I report to upper management". So, it's just interesting. The second pick (bit of quick history): I've spent I don't know how many dozens, maybe 100 hours, building an internal company dashboard. Part of it is just because that's where I can experiment with new technology and play with stuff, but still actually deliver results to my business. It's gone back and forth; I've found it a waste of time, scrapped code, all that stuff. And (I don't know) this week, maybe last week, Panic, the software company, came out with an iOS app for the iPad called "Status Board", which is basically an app you get and you have a dashboard on your iPad, and there are a couple of widgets you can drag on, like Twitter feeds; they can connect to email, to graphs, to charts, all that stuff. So I got it, I've been playing with it, and it's pretty amazing; I have it on right now and it's actually going to an external monitor. I spent an hour yesterday and basically wrote 2 custom widgets for myself. One, to track my revenue; and then the other, to track my newsletter.
It's nice, it's obviously not open sourced, but we have a pretty simple API that you just write either CSV, JSON, or just HTML files to actually add a new widget on. So, it's really neat. If you have an iPad and you work at home, I found that iPads are really great size for having kind of the information radiator on your desktop. I'll have the link on the show notes you can check out, I think it's like $10 right now, and if you want the external support, it's $10. But $20 compared to the dozens of hours I spent building this myself, it's totally worth it. CHUCK: Sounds terrific. Alright, so I've got a couple of picks. My first pick is "Backbone.js", I've been able to get back into Backbone, and oh, man! It is so nice for just getting your jQuery soup inline. It's just -- I can't say enough good things about it. It's a Model View Controller arrangement for your code and I just, I love it; I can't say enough good things about it. The other pick that I have is a game that I have on my iPhone. It's a Disney game and it's the Buzz Lightyear game. I'm trying to remember what it's called..."Smash It!" It's kind of like Angry Birds except it's Toy Story; you have Buzz Lightyear and he throws different types of balls and things to knock down the towers and chase away the aliens, and it was pretty fun. I initially got it for my kids, but they weren't super interested in it, and then I kind of got hooked on it. I'll put links to those in the show notes. And I guess, we'll wrap up the show and we'll catch you all next week! ERIC: Take care!
