TIM: The CliffsNotes version: logging is an art, and no one can agree on how to do it. It's easy, so everyone invents it for themselves. All right, that's good. Let's…
JAMES: That's a wrap.
JAMES: [Chuckles] That’s awesome.
CHUCK: Hey everybody and welcome back to the 24th episode of Ruby Rogues. Josh hates it when I say “24”.
JAMES: Josh isn’t here.
CHUCK: I know.
JAMES: The real question is, why is Josh not here? Because he said something about being busy, but I believe the truth is that he was drinking with GitHub last night.
CHUCK: [Chuckles] Anyway, Episode 24. Not here today are Josh Susser and David Brady. On our panel today, we have our guest rogue, Tim Pease.
CHUCK: Tim, you wanna tell us a little bit about yourself?
TIM: Certainly. I'm a Ruby developer who fell in love with the language around 2005. And I have – a little known fact – zero formal software development training; all of my education was in Physics. So, there you have it.
CHUCK: Terrific. All right, we also have Avdi Grimm.
AVDI: Hello, my name is Avdi Grimm. You killed my process; prepare to die.
CHUCK: We also have James Edward Gray.
JAMES: I can't top that. Hi, everybody.
CHUCK: And I'm Charles Max Wood. I'm not sexy today, because I'm in a hotel without my equipment.
JAMES: Chuck, you'll always be sexy to us.
CHUCK: Oh, thank you. I'm not quite sure how to take that. All my Utah friends are going to shun me now. All right, so this week, we are going to be talking about logging. And Tim is a foremost expert in logging.
TIM: Oh dear.
CHUCK: [Laughs] So Tim, why don’t you go ahead and introduce us to logging, and maybe why we do it wrong or not do it right.
TIM: Okay, should we start off with a definition? I mean, Josh isn't here, but we can honor his non-presence.
JAMES: Yeah, we definitely need a definition. I mean, you take an axe into the woods and start cutting things down, right?
TIM: Indeed. Let's all sing the Ren and Stimpy song: "It's log, it's log. It's big, it's heavy, it's wood." So logging is… I was thinking about how I would define it, and it's any message of note generated by your application or cluster of machines or whatever. And a message of note is something that conveys meaning or has some importance to you as the developer. These aren't things that are meant for the general consumer; they're something to help you along the way.
For example, in a production system, especially one that's running on a bunch of pre-forked Unicorn workers, it's very difficult to log into that and hook up a debugger and figure out what's going wrong. You rely on these messages generated by the system to figure out, "Oh my goodness, this is why this is failing." They are notes to your future self to make your life easier. So in a nutshell, that's my definition.
JAMES: So I was reading all the homework Tim gave us. He is kind of studious, and he sent us a large email, “These are the 500 things you'll be expected to know, before you can talk to me about logging.” And so I was reading about Log4j, which is a logging framework in a language that shall not be named. But the cool thing in the documentation, there was this great quote from Brian W. Kernighan. I would like to go ahead and read it if that’s okay with you, guys. Where he described logging in kind of a neat way that I have not heard it before.
Here's what he says, “As a personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugger sessions are transient.”
I thought that was an interesting description. I've always seen that there are two kinds of programmers: those who love to drop a puts statement somewhere to debug something, and those who fire up the debugger every time. And I thought it was kind of interesting to hear somebody quantify why they make one particular choice. I, too, fall into that camp.
TIM: So you fall into the puts statement camp or the debugger camp?
JAMES: The puts statement camp. Yeah, I pretty much never use a debugger. The closest I get is using something like Pry to start an interactive session and play a little bit. But I generally do that before I code, not when I find the problem.
CHUCK: Yeah, well I thought it was interesting too, because it's not a puts statement that you just pull right back out. He almost implied that your puts statements should stay with your code, or at least the ones that are important, so that it's not just debug output that you are going to take right back out. Because ultimately, then you are instrumenting your code instead of stepping through it; you're getting information out of it that you are then going to need and keep in whatever logging format you are using.
JAMES: Right. And that’s the cool thing about logging frameworks, is they let you control what levels we are writing at. But maybe we are getting a little ahead of ourselves there. So Tim tells me there are lots of different layers to logging. Tim, do you wanna tell us a little about that?
TIM: So I guess what you are referring to there is… I mean, starting out with the simplest logging framework you could come up with would be just putting puts everywhere in your program. And I agree with James, that it works fantastic to get an idea of how the control flow is going. And to see, “Oh, I'm not making it into this branch; so obviously, this must be a nil value here.”
And the first problem with that really simplistic approach is what people have called 'scroll blindness.' After a while, if you just watch lines of output scrolling past you, thousands and thousands of lines, it becomes incredibly easy to miss that one error that you are hoping to see. So I guess that's not necessarily the first level, but that's the naïve starting point. And the log4j framework came out of this desire. It's like, how are we going to solve this problem?
CHUCK: With grep! With grep! With grep!
TIM: [Chuckles] With grep. That’s so sad.
JAMES: Grep can track a [inaudible] on a cloudy day.
TIM: [Chuckles] This is true. I do not think that word means what you think it means. I mean, logging is expensive. How expensive is it to flush everything to standard out? After a while, just doing that over and over and over again is going to incur a lot of processing overhead. So if you are in a Rails application, logging every action a possible request could go through is going to become very costly after a while.
And so these log4j guys set up a plan: "Oh, we need a way to control what comes out." So we have the wonderful log levels of debug, info, warn -- all the standard ones. And that allows us in a very customized way to say, "Okay, I'm just going to ignore everything except for my warning messages. But when I go into debug, I can turn that knob down and say, show me everything."
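For reference, the level knob Tim describes works the same way in Ruby's standard library Logger. A minimal sketch (the messages are illustrative):

```ruby
require 'logger'
require 'stringio'

out = StringIO.new
log = Logger.new(out)
log.level = Logger::WARN      # ignore everything below WARN

log.debug("entering branch")  # suppressed
log.info("request served")    # suppressed
log.warn("disk almost full")  # emitted

# Only the WARN line makes it to the output.
```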
JAMES: So you have a horse in this race now. You’ve built a library called, ‘logging’, which maybe isn’t the best name in the world, because it’s kind of tough to Google.
JAMES: Tell us about the logging library.
TIM: So the Logging framework came about way back in 2005 or 2006. I wanted to use log4r, which is a pretty much one-for-one implementation of the log4j framework in Ruby. Unfortunately, it's a project that had been abandoned by its developer. And after some time of trying to contact him on the Ruby mailing list, I finally decided that if I was going to move forward, I just needed to fork this project. And that's the whole reason that Logging came about.
And the biggest goal of this Logging framework is to separate the creation of these messages of note inside your program from how they actually get displayed to the user, or where they go. Puts again is great, but now you've locked yourself in to everything going to standard out. And there's no quick way to just twist the knob and say, "I want all this stuff to be persisted to a file now."
So if you read through the log4j manual, they talk about how you generate these log events, and then you send them over to some sort of appender. And the appender decides, "Okay, I'm going to format this log message in this fashion, and I'm going to write it to a file." And by doing this, you can actually have multiple appenders: you can log to a file, you can log to standard out, you can send your log messages out to syslog. And one of my favorites is to send all of my error messages to Growl.
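The fan-out idea can be sketched with nothing but the standard library. The class below is purely illustrative (the Logging gem implements appenders properly; this just shows the shape of the concept):

```ruby
require 'logger'
require 'stringio'

# A hand-rolled sketch of the appender idea: one front-end logger that
# fans each message out to several destinations.
class FanoutLogger
  def initialize(*loggers)
    @loggers = loggers
  end

  # Forward each severity method to every underlying logger.
  %i[debug info warn error fatal].each do |level|
    define_method(level) do |message|
      @loggers.each { |l| l.public_send(level, message) }
    end
  end
end

console = StringIO.new
file    = StringIO.new
log = FanoutLogger.new(Logger.new(console), Logger.new(file))

log.error("upstream connection refused")  # lands in both destinations
```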
So if I'm working on a Rails application and an error pops up, I'll get a little Growl notification right on my screen. So that's the major goal with this Logging framework: to provide a developer with the flexibility to see the log messages in the way that they need, during development or during debugging. But also, using that exact same code and without changing any of it, just by tweaking a few configuration files, to put this into production and still have it run quickly, and not have those log messages affect performance very much.
CHUCK: Interesting. They told me that I've been a multiple appender.
TIM: Okay. [Chuckles]
CHUCK: Anyway, it's nice to know that you can put it in several different places. I do wonder though, if you are sending it to Growl, to standard out, or even to a log file, sometimes you still wind up in tail -f, where it's scrolling so fast and so furious. Is there a way to separate your logs or to split things up, so only certain things go to certain places? How do you manage that?
TIM: Each object in your system can have its very own logger. With the Logger class that comes with the Ruby standard library, you create one of these objects and it's used globally across your entire application. But with the Logging framework, each object in your application can have its very own logger that's identified by your object's class name. And to prevent the tail -f scroll blindness, what you can do then is set just the specific class that you want to debug -- or a collection of classes -- to the debug level. So every other class in your application is only going to show warning messages or higher, but the one or two things you really care about at the moment, you can set to the debug level. And that definitely cuts down on the sheer volume of messages that are coming out.
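Per-class loggers with their own levels can be sketched in plain Ruby as well. The class names and registry below are illustrative, not the Logging gem's actual API:

```ruby
require 'logger'
require 'stringio'

out = StringIO.new

# One logger per class, keyed by the class, all sharing one destination.
LOGGERS = Hash.new do |cache, klass|
  logger = Logger.new(out, progname: klass.name)
  logger.level = Logger::WARN     # default for everyone
  cache[klass] = logger
end

class SubscriptionsController
  def self.logger
    LOGGERS[self]
  end
end

class BillingController
  def self.logger
    LOGGERS[self]
  end
end

# Turn the knob down for just the class under investigation:
SubscriptionsController.logger.level = Logger::DEBUG

SubscriptionsController.logger.debug("entering unsubscribe")  # emitted
BillingController.logger.debug("charging card")               # suppressed
```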
JAMES: Yeah, I thought that was really interesting when I was looking through the Logging library this morning and seeing what you've done there. So basically what you are saying is, if I have my Rails application and I'm trying to figure out… I'm getting some error reports about some complex interactions users have when they are subscribing or unsubscribing to things. Then what I can do is actually go in there and change the logging level of my subscriptions controller. And that means that whenever the Rails application is doing something inside that subscriptions controller, I'll get my debug-level output, with all my copious messages in there that help me find stuff, but the rest of the application can still function under its normal warning level or whatever. I thought that was very awesome.
TIM: Yes, and even in a production Rails system, you can see how that would be invaluable. So you deploy this thing to production, and you just tweak your configuration settings to say, "I just want to debug these one or two things." You'll have a minor performance impact there, but nowhere near as much as everything spitting out debug messages.
CHUCK: So how do you feel about the global loggers? Are they wrong? Are they a decent fallback? Or do you hate them, or what?
TIM: It depends on what you need. You do the simplest thing you need, until it no longer works for you. So if the core Ruby logger works great, and that's really all you've ever needed, then stick with that. I'm getting ahead of myself here, but I'll just throw this out there: one of the problems with the core Ruby logger is how people set it up in Rails applications. I've gone into production apps so many times and seen 20-gig log files sitting there. And setting up a rolling file appender is probably one of the best things you can do. There are ways to do that with the core Ruby logger, but most people just don't in the context of a Rails application. So yeah, I have nothing against it. Use it until it doesn't do what you need it to do anymore.
CHUCK: Well, I'll tell you, if you have a 20-gig log file and you try and grep on that sucker, yeah, you are going to find out real fast that the rotating log is a good idea.
TIM: Yes, or when your machine comes to a grinding halt because you've run out of disk space. That also will get your attention.
CHUCK: Yeah, or when you try and open it in your text editor and it says, “I can't put this whole thing in memory, I don’t know what to do with it.”
TIM: [Chuckles] Oh, those things.
JAMES: So you were saying that the core Ruby logger can be made to rotate, and I actually do know this. I have an intimate familiarity with that code, because I had to change it once upon a time. And that's because I was working with multiple processes. And in that case, the core Ruby logger is not the greatest thing in the world when it does log rotation. Do you wanna talk about maybe why that is?
TIM: Yes. The problem is having your file descriptor snatched out from underneath you. If you have two processes that are both writing to the same log file -- which can obviously happen -- and one of them says, "Okay, it's time to rotate this log file, either based on age or size," the other process won't know that this has happened, and will happily continue writing to its old file descriptor. But unfortunately, that file descriptor will no longer be connected to anything. So, you are going to lose all your log messages.
CHUCK: Log to /dev/null. Sweet!
TIM: [Chuckles] You know, that scales. You can write to /dev/null all day long. It will not impact your system.
CHUCK: That is so true. If you want fast logging, /dev/null is your friend.
JAMES: [Chuckles] Unfortunately, yeah.
AVDI: /dev/null is web scale.
TIM: It is web scale.
TIM: /dev/null logging appender: there is nothing like it.
CHUCK: I know something you don’t know; /dev/null is not left handed.
JAMES: Nice. So Tim, how do you solve the multi process logging problem in the right way?
TIM: Okay, so this is going to get fairly technical…
CHUCK: Wait, this is not a technical podcast.
TIM: [Chuckles] I'm so sorry. So one way to solve it is just to never roll your log files; let them continually grow and grow and grow, and stop your application, manually roll the file, and then start it up again. Obviously, that has its drawbacks, in that you are going to forget to do it. So the proper way to do it, internal to your program, is to use shared file locks. Each time any process goes to log, it obtains a shared lock on the file, writes its log messages out, and then asks, "Okay, have my rolling criteria been met? Is my file old enough or is it big enough?"
And if either of those is met, then the process is all [inaudible] until someone gets an exclusive lock on the file -- and these are all Unix-level shared and exclusive locks. And the one who does get the exclusive lock goes ahead, rolls the file, and then continues to log again. So the next time each process wants to log, it's going to say, "Oh, my file descriptor has changed out from underneath me, so I need to reopen my log file." And that's all handled internally by the rolling file appender.
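The shared/exclusive lock dance can be sketched with POSIX advisory locks via Ruby's File#flock. Everything here is illustrative (the function name, the size-only rolling criterion), not the Logging gem's internals:

```ruby
require 'tmpdir'

# Append one line under a shared lock; roll the file under an exclusive
# lock once it grows past max_bytes.
def write_log_line(path, message, max_bytes: 1_000_000)
  File.open(path, 'a') do |f|
    f.flock(File::LOCK_SH)          # shared lock: many writers at once
    f.write(message + "\n")
    f.flush

    if f.size > max_bytes           # rolling criterion met?
      f.flock(File::LOCK_EX)        # blocks until we're the only writer
      # Re-check under the exclusive lock: another process may have
      # already rolled the file while we were waiting.
      if File.identical?(path, f) && f.size > max_bytes
        File.rename(path, path + '.1')
      end
    end
  end
end

LOG_PATH = File.join(Dir.mktmpdir, 'app.log')
write_log_line(LOG_PATH, 'request served')
write_log_line(LOG_PATH, 'request served again')
```

Reopening the file on every write is what lets a process notice a rotation: after a rename, the path points at a fresh file, which is exactly the "my descriptor changed out from underneath me" check Tim describes.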
CHUCK: So that leads me to another question then; if I have a really, really long log message, I mean something that is going to actually take maybe even a second to log, because I have some insane output something, is that going to block everybody else while they wait to log, or is this usually handled in the background somewhere?
TIM: This is the magic of the kernel: it will take care of this for you. It's not going to block the other processes, because as you are writing to the log file, you have a shared lock, so the other processes can also obtain a shared lock and write to the log. After that insanely huge log message gets written, someone is going to have to obtain an exclusive lock, and in the kernel, all processes will have to wait until that exclusive lock is released.
JAMES: So you should only have to wait while rotating log files, in other words. And one of the traditional ways of rotating log files is to append a .1, .2, .3 to kind of signify that they are in sequence. And that takes a little while, because you've got to shift them down the line. But I saw in Tim's stuff today, he had an example in there where he showed how to do them by date instead. And then if you are rotating your log file every day and you just rotate it to the date name of the day, you won't have to do all that log file shifting, so it makes the rotation much faster.
TIM: That is correct. And with that shifting there, it's a file rename to shift one file to the next. And the rotation code is pretty smart, in that it does a copy and then a truncate. So your current log file is copied to a temporary file name, and then it's truncated underneath. Basically, what this does is allow all the existing file descriptors to remain, so you don't have to worry about reopening a new file; you just continue writing to it.
JAMES: That’s awesome. Some programs support a signal that you can send to ask them to rotate their log files, so basically refresh them. And that helps when you are using something like logrotate, but it’s interesting that Rails hasn’t really picked up that idiom.
TIM: No. And the Logging framework that I have does not support that idiom either. I like using signals for a different trick. When I set up an application, I will actually register a signal handler for the USR1 signal. And what that will do is change all of my loggers to the debug level. So on a production system, I can just send the USR1 signal, and now I'm going to see all my debug output. And when I'm done with that, I just send the USR1 signal again, and it restores everything to its previous state.
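A sketch of that toggle with the standard library Logger (single logger, Unix-only, names illustrative):

```ruby
require 'logger'

LOG = Logger.new($stdout)
LOG.level = Logger::WARN

# Flip between WARN and DEBUG each time the process receives USR1.
Signal.trap('USR1') do
  LOG.level = (LOG.level == Logger::DEBUG ? Logger::WARN : Logger::DEBUG)
end

# From a shell you would poke a running process with:
#   kill -USR1 <pid>
```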
JAMES: That is an awesome trick. Thank you.
TIM: Oh, you are very welcome.
CHUCK: So I have another question related to logging; and it's not so much about the structure or technology behind the logs, but more about the implementation of the logging. So for example, let's say that I have a class that I need to log some information about. How do I balance putting enough information in, versus putting too much information in? I don't wanna put in so little that it doesn't tell me what I need to know, when I need to know it. But at the same time, I wanna make sure that I put enough in, but don't kind of go overboard on that. So, where's the balance there?
TIM: That is a hard question to answer. Generally, I look at the five standard log levels that the Ruby logger defines: DEBUG, INFO, WARN, ERROR, FATAL. Starting from the top and working down, I only send out fatal messages if I'm about to kill a Ruby process. For example, something critical has happened, and my program can no longer carry out its core functionality. I'll log that in the fatal message, and then exit the program.
Stepping down from there, an error is something bad has happened, but I can work around it. An example of this would be, I can't make an HTTP connection to an upstream server, or I can't connect to the database, but I can retry again in 30 seconds.
Warnings, those are a little more ambiguous. I don't have anything good to say about that; you can see where it kind of breaks down. A warning is something like, "this problem has occurred," or "we are exceeding our bandwidth limits," or "we had to throttle this user down because they are exceeding their API limits."
Informational messages are very handy, just to let me know that my program is still running. I'm doing this, I'm doing this, I'm doing this.
And then debug messages are probably the most important ones, even though they're at the lowest level, because those are the messages to my future self -- to help me figure out what is going wrong. That's when I'm looking at something, trying to figure out: why am I getting into this bad code branch? Why am I getting a nil object reference here?
And those are the ones that you need to think about the most. You are overdoing it if every other line is a log message; and you are underdoing it if you don't have any log messages inside your class. So those are my two guidelines there.
CHUCK: So that leads me into another question then, because it seems like Active Record logs to your development log, like, all the time. Every time you access the database, it logs the query and the results and its mother. And so I'm wondering,
JAMES: [Chuckles] Its mother.
CHUCK: I mean, for me, it almost feels like they sort of overdo it. I'm just wondering what you guys think. Does it overdo it? Is it too much? Is it too little? Is it about right?
JAMES: I really like Active Record's logging, because nine times out of ten, when I'm investigating some problem, it comes down to what was sent to the database. What question did I ask, or how did it fetch these records, or something like that. I think it's because what query was sent is such a pivotal backbone of a Rails project that we almost have to have that information to be able to reason about what our application is doing.
TIM: I would concur with James. I really enjoy seeing the SQL queries that are generated. That helps me understand if I have fat-fingered an association or relationship, especially with things like has_many :through and the deeply nested associations that can come up in Rails applications.
CHUCK: I guess I'm just generally… I don't know, I hate watching it, like, fly by. I have a couple of applications where it does some work to set some other things up, which requires some callbacks to the server. And so, a lot of times, I wind up scrolling back through my log, because it does what I'm interested in, and then it does all the other setup work through AJAX. And it kind of puts me off, because it makes it harder for me to find what I wanna find.
TIM: Well, a trick that you can do is have all of your Active Record logs go to a separate file. You can configure the ActiveRecord::Base logger, send it to the debug level, and attach an appender to just that one logger that opens up a database.log file. So all of your Active Record output will end up there. And meanwhile, you can send informational Active Record log messages higher up the appender chain, and see all of those in your standard log output.
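In plain Rails (without the Logging gem), the equivalent knob is Active Record's own logger. A hedged sketch; the exact wiring varies by Rails version, but ActiveRecord::Base.logger is the standard hook:

```ruby
# config/initializers/database_log.rb (illustrative file name)
# Route all Active Record output to its own file at debug level.
ActiveRecord::Base.logger = Logger.new(Rails.root.join('log', 'database.log'))
ActiveRecord::Base.logger.level = Logger::DEBUG
```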
CHUCK: That makes a lot of sense. I'm going to name my custom log, Paul Bunyan.
JAMES: So, Tim, you talked a little about how there are different appenders, and you can actually use them in tandem, so you can set up appenders to write to standard out and a log file.
TIM: Regarding how much or how little, there is a sneaky little class inside of the Logging framework called Proxy. You can wrap any object inside of this logging proxy, and it will then log all methods that are called on that object inside of your application.
JAMES: Interesting. So you can basically trace interactions with a certain object, right?
TIM: Exactly. And we find that to be useful -- where I work, at least -- when we are interacting with our Cassandra cluster and we want to track all the different requests that are being made through the interface.
JAMES: Very cool. I was actually going somewhere different with this though.
TIM: Go ahead.
JAMES: You talked about how you can use the different appenders in tandem and stuff, so you can get the messages to standard out, and error messages in email, or things like that. You send out your four fastest ships and see who gets there first. The other thing I saw when I was looking through the Logging framework is you have the layouts that work with each appender. And that turns out to be kind of cool. I mean, you can do the typical logger things of colorize this message or format it this way. But what I was very interested to see was things like the JSON and YAML formats, so you could actually have parsable logs.
TIM: Those are wonderful. And they do exactly what their name says. Each log message will be a JSON object or a dictionary. But yeah, it’s great for when you just need to quickly parse back through those log messages, and find the information you are looking for.
AVDI: I have a kind of related question about that formatting; I think you kind of partially answered it talking about YAML and stuff like that. I'm just curious if you've just evolved any conventions. I mean, logs are freeform text. Have you evolved any conventions for how you format your log statements?
TIM: That is a fantastic question. And the only two rules of thumb I will say is never, ever, ever, ever, ever -- for any reason whatsoever -- ever, create your own date stamp format – ever.
JAMES: [Chuckles] Ever?
TIM: Ever. Don't do it. When you log date stamps or time stamps into your log files, Unix seconds are horrible: who knows if your Unix clock is set right, you have no idea what time zone you are in, and if you are logging from Java, are they seconds? Are they milliseconds? It's all very confusing. There's a great standard, ISO 8601, which describes timestamp and date stamp formats, and that's what the Logging framework uses for its default date format output. It's basically year, month, day, hour, minutes, seconds, microseconds, and then a time zone after that.
But yeah, you look at the Apache logs, and they have these very English-centric date stamps, Oct 6-01, and you are like, okay, so October if you are in an English-speaking country, and then is 6 the year or is that the day, or is 10 the year? Just all these very confusing things. So that is my one word of advice about date stamp formatting.
The second part of that is I would format my log messages to help out whatever system is going to consume them. If it's a human, I'm not going to spit out a bunch of JSON; that's hard for a human being to parse. I'll make a nice, pretty-printed format. If it's a log analysis package, that's different; for example, Splunk is a fantastic piece of software that can rip through all your log files and give you very, very usable data out of them. And it really enjoys having key-value pairs: key= and then whatever the value is. Splunk then allows you to search for all of your keys, plot graphs if it's numeric data, things like that.
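Ruby's standard library produces exactly that format out of the box:

```ruby
require 'time'

# ISO 8601 timestamps sort lexically, carry their zone, and are
# unambiguous across locales. Six fractional digits gives microseconds.
stamp = Time.now.utc.iso8601(6)
# shaped like "2011-10-12T18:30:45.123456Z"
```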
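A key=value layout like the one Tim describes can be sketched with a custom stdlib Logger formatter. The field names here are illustrative:

```ruby
require 'logger'
require 'stringio'
require 'time'

out = StringIO.new
log = Logger.new(out)

# Emit machine-parsable key=value pairs, one event per line -- the
# shape Splunk-style tools index well.
log.formatter = proc do |severity, time, _progname, message|
  pairs = message.map { |k, v| "#{k}=#{v}" }.join(' ')
  "time=#{time.utc.iso8601(6)} level=#{severity} #{pairs}\n"
end

log.info(user_id: 42, action: 'login', elapsed_ms: 18)
```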
AVDI: How about new lines? Do you ever put new lines in a log message or is that a no no?
TIM: Hmm… I do not have a strong preference. It just depends on what's going to be consuming those messages. When I emit backtraces for a human-readable log, I will do new lines with tabs.
AVDI: Okay, one last formatting question: often, I've found it useful to log sort of a bracketed context. So when I enter a particular context -- maybe I'm doing a particular procedure or something -- I log that, and then log when I exit. And then I can go in and look at exactly what happened in between those two markers. First of all, do you do this? And secondly, do you have any kind of format for bracketing those, so that you can easily go in and pull out these contextual chunks?
TIM: The logging proxy object will do just that; it will mark, "I'm entering this method. Now, I'm exiting this method." And optionally, you can spit out whatever the return value of the method is. So any log statements inside of that method, or subsequent methods that it calls on down the stack, will be bracketed by these nice enter and exit terms. I'm not precisely sure which format I'm dumping them out in, but yeah, that's a fantastic idea.
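A minimal sketch of such a proxy in plain Ruby: wrap any object and bracket every method call with enter/exit messages. This is illustrative, not the Logging gem's actual Proxy class:

```ruby
require 'logger'
require 'stringio'

out = StringIO.new

# Delegates every call to the wrapped object, logging on the way in
# and on the way out (with the return value).
class TracingProxy < BasicObject
  def initialize(target, logger)
    @target = target
    @logger = logger
  end

  def method_missing(name, *args, &block)
    @logger.debug("enter #{name}(#{args.inspect})")
    result = @target.public_send(name, *args, &block)
    @logger.debug("exit  #{name} => #{result.inspect}")
    result
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private)
  end
end

list = TracingProxy.new([1, 2, 3], Logger.new(out))
list.sum  # logs "enter sum([])" then "exit  sum => 6"
```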
JAMES: I have one more tip that I've evolved over time that I really enjoy in my logs, and that's changing my default format. You know, it usually spits out the date and time, the level of the message, and so on. I like to add one more detail to that list of things that always gets spit out for every message, and that's the process ID. I find that absolutely invaluable -- especially when working with a multi-process application like Rails, where I'm generally running three to five copies -- so that I can tell, "Oh, this is that one guy doing his stuff, and this is that other guy doing his stuff."
TIM: That is a fantastic idea. One thing that I've not really gotten a good grip on -- especially in the context of a Rails application -- is how do you identify all the log messages associated with a single request to your web server? And that's hard, because if you have multiple processes all logging to the same file, you are going to end up with interleaved log messages between two separate requests coming in on two different servers. So, any thoughts on that one?
JAMES: Yeah, it's really tough in just basic Rails. I think you would almost have to split out the log files by process ID to realistically achieve something like that. And then if you are doing anything evented, with say Thin or something, you are still going to have issues with that.
TIM: So one of my constant frustrations with trying to use a unified logging framework in an application is whenever I have to pull in third-party code. Take, for example, Sass -- it's wonderful, I love it to death, it makes my style sheets so much easier to use -- and it can be used from the command line. Because of this, it has its own set of logging instrumentation inside of it. And if I'm debugging a Rails application, I really wanna be able to see my output from Sass. But Sass doesn't necessarily agree with the way that I want to do things. And it becomes very, very painful to bring together a bunch of different libraries that have a bunch of different ideas about how logging should be done.
JAMES: Yeah, that's a good point. A lot of things -- Sphinx comes to mind -- start up their own log files and just basically completely ignore what you are doing.
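With the stdlib Logger, adding the PID is one formatter away (a sketch; the layout is illustrative):

```ruby
require 'logger'
require 'stringio'
require 'time'

out = StringIO.new
log = Logger.new(out)

# Prepend the process ID so interleaved multi-process output can be
# untangled later with a simple grep.
log.formatter = proc do |severity, time, _progname, message|
  "#{time.utc.iso8601(6)} pid=#{Process.pid} #{severity} -- #{message}\n"
end

log.warn("worker started")
```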
TIM: Yeah, and going back towards the beginning of the conversation -- about the standard library logger class -- I think it's great, but if everybody is spinning up their own loggers, it can lead to a bit of mayhem inside of there.
In Java land, they have this awkwardly named thing called SLF4J -- of course it's an acronym -- which stands for Simple Logging Facade for Java. And the concept here is that all your various Java jar files say, "I am going to obtain my loggers by using this factory," and then the application writer can go in and configure the factory to provide everyone with the logger of their choice. Unfortunately, there's nothing like that out in Ruby land. It would be simple enough to write, but the biggest barrier to entry there would be getting community-wide adoption of such a creature.
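The idea can be sketched in a few lines of Ruby: library code asks a factory for a logger, and the application decides what the factory hands back. Hypothetical names throughout; no such standard gem exists:

```ruby
require 'logger'
require 'stringio'

# An SLF4J-style indirection: gems call LogFactory.logger_for, apps
# override LogFactory.builder once, globally.
module LogFactory
  class << self
    attr_writer :builder                  # the application overrides this

    def logger_for(name)
      builder.call(name)
    end

    def builder
      @builder ||= ->(name) { Logger.new($stdout, progname: name) }
    end
  end
end

# The application takes control once; here we capture output in memory.
captured = StringIO.new
LogFactory.builder = ->(name) { Logger.new(captured, progname: name) }

# A gem would then simply write:
LogFactory.logger_for("SomeGem").warn("upstream timeout")
```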
JAMES: Right. It doesn’t really work unless everybody is using it.
TIM: Exactly. So, I don’t know. I mean, how do you convince everybody, “Use this gem for your factory stuff; it will make life easier.”
JAMES: I don't know. I think things like that are harder to do in Ruby, where we are a lot less corporatey and enterprisey. I think those kinds of initiatives are tougher to push on us. I'm not sure if that's a feature or not.
TIM: [Chuckles] That's the hard part there. I would certainly love to see something like that come into play, because as gem developers, you definitely need some sort of logging facility. But as soon as you choose one, now you've imposed that decision on everyone who uses your gem, and that might be very different than what a particular application needs. So it's something that needs to get figured out.
AVDI: We've seen that it's possible for the Ruby community to kind of settle on certain types of utilities. I mean, Rack is kind of the de facto standard for building web apps these days.
JAMES: Bundler seems to be gaining ground, as far as how to specify your dependencies.
AVDI: And everybody writes their project scripts -- or most everybody writes their project scripts -- using Rake. And so, I think with a sufficiently general and simple tool -- and some awareness -- maybe we can see more of these libraries just using something that already exists, instead of rolling their own.
TIM: That would definitely be a wonderful thing to see, yes.
JAMES: So we can take our first live Twitter question on the air. Somebody asks, “Where should I log to? flat file, database, Twitter, etc.” I'm partial to Twitter myself.
TIM: This is going out live?
JAMES: Well, Avdi asked for questions on Twitter, and we got a response.
TIM: Oh, okay.
JAMES: So somebody wanted to know where to log to. And to kind of tie that in with something I saw in the logging library this morning, there is the syslog appender. Could you maybe talk about why you might use the syslog appender sometimes?
TIM: At work, we use the syslog appender because that’s the only way to get information into our Splunk platform. The way it works, syslog aggregates everything down to one machine, and then Splunk parses the log files on disk and provides you with a very nice view into it. There are a few shortcomings with syslog: number one, I think you are limited to 1,024 bytes for your message size. So if you want to log vast quantities of data in a single message, syslog is not your answer.
The other problem is there is a bit of an impedance mismatch between syslog and the core Ruby logger. The core Ruby logger has five levels, and syslog defines… someone can correct me later, but I think it's eight. I mean, there’s a debug, an info, a warning… and “oh my god, what just happened,” blah, blah and things like that. So the way you get around that is you provide a mapping, where a debug level in Ruby land would map to syslog debug, and a fatal in Ruby land would map to the highest syslog level there. And yeah, we use it to great benefit; it works very, very well.
JAMES: Let me fire off one more question, since we have Tim on. Tim, you've done other things than just logging. Do you wanna talk about any of your other projects? Maybe Mr Bones, Servolux, Directory Watcher.
TIM: Oh gosh, you know what, I think servolux is a good example of a project that needs to incorporate some sort of logging mechanism into it. So servolux is a more googleable name that I came up with after my debacle with the logging library. And the whole goal of this project is to provide a cohesive set of tools for creating server-type processes in Ruby. It provides [inaudible] children and communicates with those children via a pipe. It provides a threaded module that you can include into any object, and it will provide you with thread control inside that object. All you have to define is a run method, and it provides you a way to gracefully start and stop that thread.
And all of these things build together until, at the very top level, you have a servolux server class, which you can use to spin up a server process that can handle signals. And alongside that server class, you also have a servolux daemon class, which can then daemonize those servers, or any other command you send to it.
And the way logging ties into this is, during the daemonization process, you need to make sure that your server started up correctly and is running. So one trick that I will do there is, after all the server initialization has spun itself up, connected to databases, and started processing, it writes a single-line message to a log file that says, “I'm running.” And the parent process that spawned off this daemon does the double-fork, drop-the-process-group magic that is the recipe for daemonizing. And this daemonizer sits out there and looks to see if the child PID is still running. And then it starts tailing the log file, looking for that one message of “I'm running.” And when it sees that, it says, “Okay, everything is good. I can go away now. Let the daemonized process continue on.”
So the hard part is I didn’t want to force anyone who wants to use the servolux gem into necessarily using the logging framework. So the trick there is using dependency injection; when you spin up your server, you have to pass to it a logger instance. And we talked earlier about SLF4J in Java land. Another way of doing this is just for everybody, when they create a gem, to say, “I'm not going to create my logger; I'm going to let someone pass their logger to me.” Yeah, that was a bit of a ramble. [Chuckles]
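The dependency-injection version of this is tiny. Here's a sketch -- the `Worker` class is invented for illustration and is not servolux's actual API -- showing a library that never constructs its own logger and accepts any logger-shaped object the caller hands in:

```ruby
require "logger"

# Sketch of logger injection: the library never calls Logger.new
# itself; the caller passes in any object that quacks like a logger.
class Worker
  def initialize(logger:)
    @logger = logger  # anything responding to #debug/#info/#warn/#error
  end

  def start
    # ... connect to databases, bind sockets, etc. ...
    @logger.info "I'm running"  # the sentinel line the daemonizer tails for
  end
end

# The application decides what logging looks like:
Worker.new(logger: Logger.new($stdout)).start
```

An application using a different framework would just pass its own logger object here instead; the worker neither knows nor cares where its log lines end up.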
AVDI: I just wanna say that servolux is awesome. If you are doing anything involving managing processes or creating daemons, or having pools of processes, or anything like that in Ruby, and you have outgrown the built-in process facilities -- which is pretty easy to do -- you should be using servolux. I've been using it for years, and it’s just like the Bentley or the Escalade of process management.
JAMES: That and the name is amazing, right? I mean, it's second maybe only to R.O.U.S.es.
CHUCK: I don’t believe it exists. Well that’s really cool, I'm going to have to check that out. All right, well let’s get into the picks. Avdi, go ahead.
And secondly, something I've been using a bunch lately is Screenr, which is a service for making screencasts. It just makes creating quick screencasts dead simple, and takes care of all the fiddly bits like encoding, uploading, hosting it somewhere, and giving people a playback widget and all that. So I've been using it with all my client work, just doing daily screencasts of progress, so they can see exactly how things are going, and if they wanna say, “Oh no, that’s not what I had in mind; do it this way instead,” they can. It works really, really well. And the really amazing thing about it is that it’s completely cross-platform; it launches from a webpage, and it even works on Linux. So yeah, I'm really happy with that service.
CHUCK: I like that. It even works on Linux. That’s amazing. All right, James go ahead.
JAMES: Okay, so let’s see, my picks for this week. I'm a big podcast guy like Chuck; I love listening to podcasts, and I've mentioned some I enjoy in the past. I'm going to give you two that have really been eating up all of my listening time lately. The first is Freakonomics Radio. They have a blog and everything. I'm sure most people have probably heard of the books Freakonomics and SuperFreakonomics. Absolutely excellent podcast. I think it actually gets better as it goes. They had a recent episode on quitting and why people do it, how they do it, is it good for you kind of stuff. Hitchhiking, I mean, it's just everything. They look at everything. It’s totally interesting. The parenting episode is one of my favorites of all time. It's really a good show. So I can't recommend Freakonomics enough. And definitely read the books; they are great as well. So Freakonomics, it’s interesting stuff.
The other one I've been listening to is RadioLab. And RadioLab took a little longer for me to get into, to be honest, because I really enjoy science podcasts, but I tend to listen to science podcasts that are more on the answers side of the equation than the question side of the equation -- which I think is where RadioLab sits. But the interesting thing about RadioLab is they are very good at telling a story and weaving a picture together, so it’s almost like This American Life with kind of a soft-science content. And it’s pretty interesting stuff.
And if you do decide to listen to RadioLab, do yourself a favor and put some headphones on while you do it. They do some very interesting things with sound in that podcast that are super neat to listen to, and a lot of the time will kind of surprise you. They talk about those as they go, but it’s really neat to just experience them. So, I recommend RadioLab with headphones. Those are the two things I've been listening to lately, so hopefully other people will find them interesting.
CHUCK: Plus one for Freakonomics Radio. It’s a great podcast. I haven’t read the books, but the podcast is always fascinating. Tim?
TIM: My first pick this week would be an Apache Incubator project called Kafka, and this is a fantastic piece of software, akin to Facebook’s Scribe platform. It’s a distributed publish/subscribe messaging system, and it can handle about 50 megabytes per second on a single node. We are starting to look into this where I work, for aggregating log data to a single location, and also for just general pub/sub needs. So it looks very promising.
My second pick this week would be for community, and just thanking people for the contributions they make. A few months ago, someone wrote me a kind word through GitHub’s messaging system, just saying thank you so much for the logging framework. I think it’s a really good thing, and I go back about once a week and just read that. It's kind of an inspiration to keep going, to keep doing the open source thing, just knowing that someone is using your software and is thankful that you wrote it. And I've been trying to apply this more in my life. So once a week I go through my list of installed gems, I pick a gem, and I just write a little note to the author saying, “Hey, thanks for writing this. I use it and it makes my life better as a developer.”
JAMES: Well, thank you Tim for writing the Logging framework.
TIM: Oh, stop. [Chuckles]
CHUCK: Yes, Thank you, Tim. Thank you, Avdi. Thank you, James. Thank you, listeners. Thank you, world.
JAMES: Thank you, Chuck for hosting the Ruby Rogues.
TIM: Yes, we need to hear Chuck’s picks.
CHUCK: Oh my picks. It’s my turn?
TIM: Yes, it’s your turn now.
CHUCK: Now, where did I put those picks? I think I left them with the albino. Anyway, my first pick -- and I have to back this up with a little bit of story -- I've been working with a client to try and get ---- installed, and he insisted initially on putting it on FreeBSD. And I am not a big FreeBSD person. In fact, that was probably the first time that I've used it for more than 10 minutes. And so I was stuck in Korn shell land, and I had to figure out how to install these packages and stuff. And so my pick this week is Bash and Linux.
And really what it comes down to, it’s not so much a knock on FreeBSD or on the Korn shell or anything else; it really just comes down to knowing your tools and knowing how to use them. Because honestly, it could have been just about any distribution of Linux; I'm pretty familiar with quite a few of them. As you get to know your operating system, you get to know Bash, what it’s capable of, and what you can do with it. You get to know some of the command line tools that are in there. You really find out that you have some powerful, powerful things at your fingertips that you can use to get the job done, even if you are in a foreign environment, one that you are not as familiar with. I used to support Windows, and now that’s even becoming foreign in a lot of ways, because I'm not as familiar with the newer versions. But knowing your tools, knowing what you can do with them, is really a great thing. So I guess my pick is knowing your tools, but really Linux and Bash for me are kind of the way to go. And that’s pretty much it.
That's all I've got. So we'll go ahead and wrap this up. I wanna thank Tim again for coming on to the podcast. I'm not joking around with that. We all really appreciate it, and we really appreciate his expertise. On our panel, we also have Avdi Grimm,
AVDI: Have fun storming the castle!
CHUCK: James Edward Gray.
JAMES: As you wish.
CHUCK: Tim, do you have anything else to add?
TIM: You think it will work? It would take a miracle. Bye!
CHUCK: [Chuckles] And I'm Charles Max Wood. I'm on the brute squad.
TIM: You are the brute squad.
CHUCK: That's it. We'll catch you next week!