iPhreaks

The iPhreaks Show is a weekly group discussion about iOS development and related technology by development veterans. We discuss Apple, tools, practices, and code.


042 iPhreaks Show – Concurrency with Jeff Kelley


The panelists talk to Jeff Kelley about Concurrency in OS X and iOS.


TRANSCRIPT

PETE: Yo dawg, I heard you liked analog technology? CHUCK: Hey everybody and welcome to episode 42 of the iPhreaks Show. This week on our panel we have Pete Hodgson. PETE: Good morning from Silicon Valley. CHUCK: Andrew Madsen. ANDREW: Hi, from Salt Lake City. CHUCK: Ben Scheirman. BEN: You’ve been fined one credit for violation of the verbal morality statute. CHUCK: Jaim Zuber. JAIM: Hello from Minneapolis, and I’ve got three [inaudible]. CHUCK: I’m Charles Max Wood from DevChat.tv, and this week we have a special guest and that is Jeff Kelley. JEFF: Hello from Detroit. CHUCK: I was so waiting for you to say that I said your name wrong. JEFF: That has actually happened to me one time. They said, “Kell-ey.” CHUCK: Okay. Do you wanna introduce yourself for us? JEFF: Sure, I do iOS apps for Detroit Labs here in Detroit. I’ve been doing iOS since about 2008. I wrote a book called Learn Cocoa Touch for iOS; it’s about two years old now – it was iOS 5, so you should probably find a newer book if you’re getting a book right now. That was published through Apress, and now I’m doing iOS apps full time. CHUCK: Awesome. So we brought you on to talk about concurrency. When I hear concurrency, I think ‘threading.’ Is that generally the way most people go, or is that just me? JEFF: Yeah, I tend to view concurrency as solving a problem. The typical problem is, I push a button and then it takes a while before the thing is done, so threading is one approach to that. You can also use GCD, NSOperationQueue – I’m sure we’ll get into those. But it all starts out with ‘I have this thing that I want to do and I don’t want to block the main thread to do it.’ PETE: Maybe you could go back to the start, which is [inaudible] to kind of enumerate what our options are for doing stuff concurrently or in parallel or whatever, in iOS. JEFF: Sure. Pretty much everything I’m gonna say applies to OS X, since they share so much.
So we’ll start at the bottom: we’ve got anything you can do in the standard UNIX threading model, so if you really wanna spin up pthreads, go for it – that’ll work just fine. So you can have multiple threads running code at the same time, and there’s an object-oriented model on top of that, NSThread, that gets you a nice Objective-C interface to it. Most people don’t use that these days; you’d use a thread if you needed something going in the background, just spinning through a loop or waiting on an event. Most of the time now people use either Grand Central Dispatch, which is a C API for dispatching blocks. It’s pretty good – there’s some overhead in learning it because it’s C-based, not Objective-C, so if you’re not used to a C API, there can be some gotchas. And then on top of that is NSOperationQueue, which is a further abstraction of GCD, and the idea there is you have operations that are little tasks that run on queues, and you can enqueue them and wait for them to finish. Finally, there’s just API-level stuff, so if you’re sorting an array, or enumerating a dictionary, there are API calls you can do that do those concurrently without having to set anything up. BEN: What is your decision point on when to use what? JEFF: I always try to use the highest-level thing that’s going to work for my needs, so if I can get away with just using the NSDictionary API, I’ll use that. If I need to drop down to an operation queue, I’ll do that. A typical example of an operation queue scenario would be, I need to download an image, and that download itself – I’ll probably use AFNetworking – will be its own NSOperation, and that’ll run in a network queue, so I don’t have a thousand network requests going out at once. And then let’s say I need to resize that image, so that would be a separate operation that would run on maybe an image-resizing queue, and then finally I use that image once those are both done. BEN: So just using that as an example, like how do you structure that?
I know there are dependencies between NSOperations, or you could just have one operation call back and then queue up the second one with the output of the first. How do you typically set those things up? JEFF: Right. So the big feature of NSOperations is those dependencies, especially that they can traverse multiple queues. I can set up a network operation, and then immediately after I set that up, I can set up the resizing operation, and just hook up the output from one into the input of the other. And then I’ll put the second one on the queue first, add the first one as a dependency, then add the first one to its queue, so that I ensure – especially with the 64-bit simulator, I noticed that sometimes it can run so fast that your operation finishes before you have a chance to add it as a dependency. BEN: Yeah, that’s a good tip. JEFF: So you kinda work backwards. You get the final state set up; add the dependencies, get those set up; and then finally, you get the first one that you actually start. BEN: How do you correlate the output of the first? How do you pass that data from the downloaded image, for instance? Because one option for creating an NSOperation is to just take data in your constructor, but if you’ve already created the operation, then how do you pass it along? JEFF: I’m big on properties, so an image download operation might have some properties like a UIImage or an NSString for a path, and then I’ll just set those in the operation so that when the next one goes, it has a property for the first operation and it can grab what it needs, and then if those are nil then I know that there’s an error that’s happened. You can also do block operations, so you just initialize the operation with a block, and then in the scope where you make all of those, you can define some variables that you set that are passed from one to the other.
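The wiring Jeff describes – set the dependent operation up first, hand results along via properties, then start the first one – can be sketched outside Cocoa. Here is a rough Python analogy (NSOperation is Apple-only; the class names, URL, and paths below are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for NSOperation subclasses: each has a main()
# and exposes its result as a property the dependent operation reads.
class DownloadOperation:
    def __init__(self, url):
        self.url = url
        self.image_path = None          # set before the operation "finishes"

    def main(self):
        # Pretend we downloaded the file into a cache directory.
        self.image_path = "/cache/" + self.url.split("/")[-1]

class ResizeOperation:
    def __init__(self, dependency):
        self.dependency = dependency    # the download operation
        self.resized_path = None

    def main(self):
        src = self.dependency.image_path
        if src is None:
            # nil properties signal that the upstream operation failed
            raise RuntimeError("dependency did not produce an image")
        self.resized_path = src + ".thumb"

download = DownloadOperation("https://example.com/cat.png")
resize = ResizeOperation(download)

# A queue that runs one operation at a time; submitting in dependency
# order mimics adding the dependency before starting the first operation.
with ThreadPoolExecutor(max_workers=1) as queue:
    queue.submit(download.main).result()
    queue.submit(resize.main).result()

print(resize.resized_path)
```

The key point survives the translation: each operation sets its output properties before it "finishes," so the dependent operation only ever reads completed state.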
BEN: So is the setting of that dependent, like the dependent operation property – is that something you would set up yourself, or is there any kind of facility built in for getting a reference to your dependent operations? JEFF: That’s something I would set up myself – there’s probably a library somewhere on GitHub to do it. PETE: So GCD is taking responsibility for putting things on the right thread and then kind of firing off your NSOperations in the right order, but your code is responsible, or the code that we write is responsible, for actually moving data through that pipeline. Is that right? JEFF: Correct. PETE: Okay. Huh. I wonder why they don’t just – I kinda was hoping that you were going to say, “And then it just magically appears!” Like [inaudible] or something. JEFF: Yeah, there is a GCD data type – it’s like dispatch data – and so you can use GCD to pass data around, or if you have a huge data set you can operate on individual parts of it, concurrently. It’s a pretty low-level API, so you usually don’t need to do that unless you have extreme performance needs. Typically, it’s just easier to get those things in the operations, or a lot of times what I’ll do is I’ll have – for downloads – I’ll have an algorithm to say this is the file name of the thing when it’s downloaded in the file system, and just use that file. So if I’m downloading anything like a 2 GB image, I’m not going to keep that all in memory, so I’ll have a [inaudible] that just writes that image to the cache, and I’ll know the path of that image when I go to load that in Image I/O to resize it. PETE: Okay, so the file system kind of is your scratch pad for [inaudible] stuff between these different asynchronous operations? JEFF: Exactly, yup. PETE: Cool. If you’re doing stuff with just setting properties, do you have to do anything special because you’re passing things potentially across thread boundaries?
Do you need to do some magical invocation [inaudible] or can you just set the property just as if it was all running in the same thread? JEFF: Most of the time you can just set the property. The nice thing is that when you set up an operation as a dependency, it won’t start the second one until the first one says, “I’m finished.” And so as long as you set all your properties before you say you’re finished, that is a read-only situation. PETE: Okay. And does GCD have any built-in capabilities for doing stuff in parallel? Like [inaudible] you wanted to say – let’s say you’ve got some huge – this is a silly example, because it’s always the example people use and I’m not really sure why you’d ever do it on an iPhone, but. You’ve got some huge array and you wanna kind of do some expensive operation and you wanna light up all the cores of the iPhone, say, run four calculations in parallel across that array, and then kind of join the results together at the end. Is that something where you’re going to be doing most of the work yourself, or is there some magical API that Apple has provided to help with that? [inaudible] JEFF: There is, in fact, a magical API. The first thing you do is you would say dispatch_get_global_queue – you’ll get a background queue with a priority from high to low – and then there’s a function you can call called dispatch_apply. You basically tell it, “I need to run this n number of times on this queue, and here’s a block.” And the block is what gets called every single time. So you’ll have your index there, and you can operate on your array of data, and that will do the thing where it automatically scales to the number of cores, make sure that you’re not –. PETE: Oh, neat! JEFF: It just makes sure that everything isn’t waiting, so. Let’s say you have eight cores on your Mac, and you run the – and every single core is waiting on the disk; it’s smart enough to spin up more while those are waiting to use the CPU. PETE: Oh, wow.
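dispatch_apply has no direct Python counterpart, but the shape of it – run a block n times over an index range, letting the system fan the work out across a pool of workers – can be approximated with the standard library. A conceptual sketch, with the work function and data invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(16))

def work(i):
    # The "block" that dispatch_apply would invoke once per index.
    return data[i] * data[i]

# dispatch_apply(n, queue, block) runs the block n times, scaling to the
# available cores; executor.map over an index range is a rough analogy.
with ThreadPoolExecutor() as pool:
    squares = list(pool.map(work, range(len(data))))

print(squares[:4])
```

Unlike dispatch_apply, this does not do the clever "spin up extra workers while others block on I/O" trick Jeff mentions; that scheduling is what GCD adds on top.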
That’s pretty awesome; that’s pretty kind of advanced stuff. JEFF: Yeah, and the really nice thing on OS X – I mean, you never see this on iOS because there’s only one app really running – is that every single app has the same kind of global background queues to use, so the system is really smart about looking at the picture holistically and making good decisions about which processes to prioritize. If you were writing NSThread code, every single app would have to do the horrible step of finding out how many CPU cores there are, and then try to figure out how busy those cores are, and try to figure out how many threads to spin up to dispatch work onto, and make sure I don’t get the CPU too busy. It’s a lot of work, and importantly, it’s a huge surface area for bugs, and those are the worst, the worst bugs. So you don’t wanna write that code at all. PETE: And presumably, if you’re running on an iPhone in particular – in iOS, specifically – you don’t have that kind of high-level perspective, because you can’t see what other processes are doing, kind of for security reasons, whereas the operating system obviously does know what all of the processes are doing, so it has enough knowledge to manage things a little bit better than you would be able to in your own application, I suppose. JEFF: Yeah. ANDREW: [inaudible] question. JEFF: Sure. ANDREW: I think we’re talking mostly about NSOperationQueue and NSOperation so far, and I’m a little curious to know why you would use the C GCD API instead of NSOperation. Is there a good reason why you would want to use the C API, or cases where it does things better? JEFF: Yeah, so one canonical example is all of the dispatch_async and dispatch_sync calls – those all have function-pointer equivalents, so if you already have some UNIX C library using that, you can just [inaudible] a function you wanna call instead of a block, and that can be pretty powerful if you have legacy code that you want to integrate.
Most of us don’t, though. Most of us are using newer APIs than that, but there are some things that you can do in GCD that you can’t do with operation queues, or at least you can’t do easily. One really powerful example is making any class thread safe easily. You can make a separate queue for, let’s say, accessing an array, and anytime you wanna read from that array you just call dispatch_sync with your queue and read from the array in the block. When you wanna write to it, there’s a separate function called dispatch_barrier_async or dispatch_barrier_sync, whichever you need. And the barrier, what it does is that it clears out everything in the queue before that block runs. It runs the block and then it runs anything else. So it’s a really cheap way to make sure that whenever you wanna write to an array or a dictionary or just a value, you only ever write when nobody is reading, and then you can read concurrently because you’re not changing anything. CHUCK: So it’s sort of a mutex or semaphore – it’s just a way of locking the data structure so that nobody else is touching it while you’re writing to it. In other words, it makes the operation atomic. JEFF: Yes, and GCD is really good at stuff like that, because it’s so low-level and it’s so built in to the system. A good example is dispatch_once; it’s used in singleton initialization code a lot now. However you feel about singletons, if you’re going to make one, that’s how you should do it. When you call dispatch_once, you give it a block and a token, and for that token, it will only ever call the block once. And if you hit that from multiple threads while that block is executing, it’ll just wait until it’s done. And finally, once it’s finished, if you call it again, it’s just [inaudible] doesn’t do anything.
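The barrier pattern Jeff describes – concurrent reads, and writes that wait for readers to drain and then run exclusively – isn’t specific to GCD. A hedged, conceptual sketch in Python, with a lock and condition variable standing in for the concurrent queue (all names invented for the example):

```python
import threading

# A minimal reader/writer gate approximating dispatch_barrier semantics:
# many readers may hold it at once; a writer waits for readers to drain
# and excludes everyone else while it runs.
class BarrierGate:
    def __init__(self):
        self._lock = threading.Lock()
        self._readers_done = threading.Condition(self._lock)
        self._readers = 0

    def read(self, fn):
        with self._lock:
            self._readers += 1
        try:
            return fn()                   # runs concurrently with other readers
        finally:
            with self._lock:
                self._readers -= 1
                if self._readers == 0:
                    self._readers_done.notify_all()

    def write(self, fn):
        with self._lock:
            while self._readers:          # the "barrier": drain readers first
                self._readers_done.wait()
            return fn()                   # exclusive while the lock is held

gate = BarrierGate()
shared = []
gate.write(lambda: shared.append(42))     # exclusive write
value = gate.read(lambda: shared[0])      # concurrent-safe read
print(value)
```

GCD gets the same effect without a hand-rolled lock: reads are dispatch_sync blocks on a concurrent queue, and writes are barrier blocks on that same queue.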
So at the kernel level, it’s giving you a way to make sure that that block only ever happens once, because a lot of the time, with singleton code, you might see two threads call like the shared instance method at the same time, and so it’ll go through and make a new singleton twice, and then the second one that runs is the one that wins. PETE: And now you have two problems. JEFF: Yes. PETE: So [inaudible] like the dispatch barrier stuff, is it locking on the object that you’re operating on, or is there some magic where – how does it know to correlate those two, the [inaudible] and the [inaudible]? Based on just the array that you read from? JEFF: It’s based on the queue that you dispatch to. PETE: Oh, okay. JEFF: The queues are pretty lightweight; they’ve actually kind of, behind the scenes, become objects – but not really, but yes, because if you have [inaudible] you can’t release them. But they’re pretty lightweight; they’re lightweight enough that if you have a small number of objects – you know, not a million – but 10, there’s almost no cost to making a separate queue for each of the important properties you wanna hit. Or even just a queue for all the properties of the object, and then they can kind of just share. PETE: And so when I’m creating those queues, do I actually – I’m actually creating – let me kind of talk through what I think you’re saying. Let’s say I have an object that I want to be accessed from multiple threads for whatever reason; so I want this object thread safe. What I could do is create three [inaudible] of that object, create a queue, and assign [inaudible] that queue essentially as a property of that object, and whenever I try to access stuff internally, my getters and setters would have to do stuff through those dispatch methods. Is that right, or am I doing it wrong there? JEFF: Yup, that is pretty much totally right.
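Pete’s summary – a queue owned privately by the object, with getters and setters funneling through it – translates to other languages with a lock standing in for the serial queue. A minimal sketch, with all names invented for the example:

```python
import threading

# Sketch of the pattern: the "queue" (here a lock, kept private to the
# class) guards all access, and the getter hands out an immutable
# snapshot so callers can't mutate the shared internals.
class Roster:
    def __init__(self):
        self._lock = threading.Lock()   # never exposed, like a queue
        self._names = []                # declared in a class extension

    @property
    def names(self):
        with self._lock:
            return tuple(self._names)   # immutable copy, like an NSArray

    def add(self, name):
        with self._lock:
            self._names.append(name)

roster = Roster()
roster.add("Jeff")
roster.add("Pete")
print(roster.names)
```

From the outside this is just a property; the synchronization is an implementation detail, which is exactly the encapsulation Jeff recommends next.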
I definitely would try to make sure that queue is never exposed outside the object, so if it’s a property, it’s going to be in the class extension. PETE: Gotcha. JEFF: And then from the outside, all you see is a property and it’s marked non-atomic, but on the inside of the class, those getters and setters, they’d call into that dispatch code. PETE: That’s a good question. So the atomic, non-atomic doodads that you set on the property – what would be my motivation for doing this stuff manually with the queue [inaudible] or the dispatch stuff and the queue, rather than just using that atomic [inaudible], whatever that thing is called? JEFF: For that atomic, let’s say you have an NSArray. The pointer to the array itself is atomic, but if it’s mutable, then the stuff inside of it is not. PETE: So I could atomically access it and mess with it while someone else has atomically accessed it and is messing with the internals. JEFF: Exactly. Yes, if you have a float, you don’t need to worry about it as much. It’s a space with bits in it, but if it’s a pointer to a mutable array, then you really need to worry. PETE: Gotcha. So that’s why you should probably use NSArray rather than NSMutableArray, maybe. JEFF: Yeah. Even if the public-facing property is an NSArray and the internal [inaudible] is mutable, that works fine. PETE: Except someone at some point is going to cast that NSArray into a mutable array, because they think they know what they’re doing. JEFF: Yeah, and you should just stop working with those people. PETE: [Chuckles] Yeah, right. After you’ve spent two weeks trying to nail down this weird concurrency bug that only happens when your boss is watching. JEFF: Right. It only ever happens on the CFO’s phone [inaudible] 3GS, and you don’t understand why because he’s a CFO. But that’s how it always happens. And that’s another good point, is that these bugs with concurrency, they’re very hard to reproduce.
You’ll see something, say, “Well, I had this thing and I pushed the button and one of those times, it crashed,” and that’s like as much as you get. So one of the things you wanna do for debugging purposes is, anytime you make a queue – both for GCD and NSOperationQueue – you can give them a label. So for GCD, you’d use like a com.example.downloadqueue, so that when you’re in the debugger in Xcode you can actually see that label when a block is running, so you know what queue it’s in. PETE: You got any other tips for – like you said, reproducing these things can be –. If you do have some kind of concurrency bug, reproducing them can be a horrendous exercise. Do you have any tips [inaudible] tricks to reproduce these kinds of intermittent things, or tips for debugging them once you have reproduced them? JEFF: Yeah, the number one tip is, you have a data set of test data that you wanna use, and you want that to be the largest data set that’s ever processed. So somebody comes to you and you’re expecting someone to have, say, a thousand items in a shoebox app, and they’ve put a million in, and you never planned for that. As soon as that happens, you wanna make sure that your data set has two million, and that everything works fine. And also for testing: I do a lot of TDD and we use Kiwi. It’s tempting to write these like a unit test. You can say like, “This thing should eventually equal this thing,” and then you’ve got code that schedules a block, and what we’ve found is – we’ve got a Mac Mini for our CI server and it’s really slow because it’s a Mac Mini. The timeout on that ‘should eventually’ in Kiwi – where it says ‘I’ll wait five seconds until I finish this’ – is not long enough on the Mac Mini because it’s so slow. So the thing you do instead is you wanna test, first off, that the block got scheduled, that everything’s started, and then you wanna test the code that’s going to run when it finishes itself in isolation.
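Jeff’s two-part testing recipe – assert that the work got scheduled, then test the completion code synchronously and in isolation – is the part worth pinning down. A sketch of the second half in Python (the operation class and its fields are invented for the example):

```python
# The humble-object idea from the discussion: keep the scheduling thin
# and put the real work in a method you can call synchronously in tests.
class ThumbnailOperation:
    """Stand-in for an NSOperation subclass with an overridden main."""
    def __init__(self, path):
        self.path = path
        self.thumbnail_path = None

    def main(self):
        # All the real work lives here, with no threading involved.
        self.thumbnail_path = self.path.replace(".png", "_thumb.png")

# In production this would be handed to a queue; in a test we just
# call main() directly and assert on the result -- no timeouts needed.
op = ThumbnailOperation("photo.png")
op.main()
print(op.thumbnail_path)
```

Because main() never touches a queue, the test is deterministic even on a slow CI box; the queueing itself is the framework’s job and can be assumed to work.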
So you could test both of those things synchronously, and not – you don’t wanna test GCD or NSOperationQueue. You can assume those pretty much work. What you wanna test is your actual code. PETE: You can’t see it, but my head is nodding furiously along with all of those statements. There’s a really good pattern around that called humble object, I think it’s called, which is around trying to constrain your asynchronous stuff – you know, the stuff that makes something asynchronous or run on a different thread, or whatever. Like, try and keep that to just the planning code and have the actual work that’s being done kind of separate, so you can test that synchronously and not have to deal with all these wacky issues with threading and blocking and waiting and all that junk. And then you can have a few, maybe a few high-level tests that verify that you’re using GCD or whatever correctly. JEFF: Yeah, that’s the nice thing about NSOperation, because instead of making one with a block, you can make a subclass of NSOperation. And then you just implement the main method, and that’s really easy to test on its own. In your test, you just call the main and you never have to schedule it. PETE: Yeah, I could almost say that NSOperation is an implementation of the humble object pattern, actually. JEFF: Yeah, yeah. [inaudible] The one tricky thing with NSOperations in testing is that the state – so you have isFinished, isExecuting, isCancelled – all of that uses KVO. It can be tricky to test with KVO synchronously, because you’ll set something and then the notification is scheduled, but it’s scheduled asynchronously – that’s a gotcha you gotta watch out for, but there are ways around that. BEN: So we talked about downloading images and resizing them; we talked about coordinating access to a shared resource with barrier ops – what other types of coordination come to mind? Are there any other types of operations where concurrency is a natural fit?
JEFF: Yeah, so right now I’m actually writing an app using OpenGL, and it’s got several different objects. For each of these objects I need to do several things: I need to download the vertex data, I need to download several textures, then I need to load all of that into graphics memory. OpenGL – it’s kind of like Core Data in that for every thread you use, you have to make a separate context. So when I download all this data, I don’t want to parse a texture file or the vertex data on the main thread, because I want my UI to stay responsive, so you can spin up a separate queue or a separate thread and make a new GL context, but they share the same data across them – what’s called a share group. So I can do the parsing and the loading into memory on a separate thread, and then when it’s done I can actually start drawing on the main thread. BEN: That’s pretty cool. JEFF: Yeah. Basically, you want – every single method should try to get off of the main queue as quickly as possible. It’s still going to do what it says [inaudible]; your loadData method still needs to load the data, but a secondary goal should be that that loadData method should return as fast as possible, and if it needs to schedule something in the background, that’s fine. CHUCK: You said that you need to set up a separate context for each thread – what exactly do you mean by that? JEFF: OpenGL uses a GL context – it’s a state machine. And so you call the C functions that then set the state, and they all operate on what’s the current context. And that has to exist for you to call anything. What you do is in your initialization code, before you start your task, you just make a new context. You have what’s called a share group, and you make it with that share group from the main context. It’s pretty analogous to Core Data, where for each separate queue or separate thread you would make an NSManagedObjectContext, and then pass around object IDs instead of objects.
BEN: Yeah, that’s a pretty common pattern in Core Data, one that will probably bite you – in development you would notice it, but not always. JEFF: Yeah. BEN: And I think those are the worst types of things, where you’re like, “Oh, it works. Ship it.” JEFF: Yup, and often, as developers, in crunch time, we get backed up, we just wanna ship, and so we’ll test it with a very small data set under ideal conditions. You wanna test with a large data set on the worst possible device you can run on, so if you support the second-generation iPod touch, run on that with a million objects. Especially with concurrency, because the simulator is a simulator and not an emulator, it’s going to use all your cores, so if you have an eight-core MacBook Pro, it’s going to run on eight cores, and that can hide problems that you’d see on an iPhone with two cores. And if you have the new Mac Pro with 12 cores, there’s 24 virtual cores – you’re really different from an iPhone at that point. CHUCK: Yeah, I guess. BEN: But if anyone wants to send me a new one so I can test that out, I’ll happily blog about it. ANDREW: That’s a pretty fair trade. CHUCK: Yeah. So is the only option you have for concurrency on an iOS device threads, or can you actually fork the process? JEFF: You know, I’ve never actually tried to call fork; I would imagine that’s disallowed on iOS. I don’t think that there’s the usual UNIX-y style scenario where you’d actually want to fork. But that is a good question. I don’t – I would say, probably not. ANDREW: We had this discussion at CocoaHeads recently, and the consensus seemed to be that the fork function is available, but if you call it, bad things happen. It doesn’t really work the way you would expect, and fork is hard to use even on a system that does support it, like OS X. It’s not a simple beginner API kind of thing.
JEFF: So if you follow the viewpoint of ‘I’m going to use the highest-level API possible to succeed’ and you get all the way down to fork, then I think that’s when you just start to reconsider the problem you’re solving. CHUCK: What about on OS X? JEFF: On OS X, forks are just fine; they work as you’d expect. I used to, before I did iPhone stuff, I worked as a [inaudible] and we used a portable C library – I guess you’d call it an ‘app’ now – called Radmind. It made use of forks, and they work just fine, but a lot of even the command-line stuff on OS X is moving towards GCD. So while you’d usually have a run loop in your event-driven OS X app, you can also start an app with just straight-up GCD. A good example of this – if you go to the Apple open-source page, I think it’s in one of the system libraries. There’s an open-source utility called caffeinate that’s on your Mac, and caffeinate is responsible for keeping your Mac awake if you need it to stay awake. The code for that is available, and that’s an example of an app that, instead of calling NSRunLoop’s run, uses GCD entirely. ANDREW: It’s called caffeine if [inaudible] JEFF: Caffeine is the menu bar app. ANDREW: Oh, okay. This is a different one then. JEFF: Yes. ANDREW: Okay. JEFF: Caffeine is a third-party app and that’s the same thing, but it’s just in the menu bar. PETE: Hey, look at that! I just found a command-line thing called caffeinate. I never knew that was there. JEFF: The code that it finally ends up calling is dispatch_main, and that takes the place of the NSApplicationMain, or UIApplicationMain, that you’d be used to. JAIM: Yeah, that’s interesting.
So talking about run loops, there used to be like a – it was more common before GCD, but you would do stuff like performSelectorInBackground or performSelectorAfterDelay, and so it would be a way to sort of say, ‘I need to do something, but I need to finish what I’m doing first.’ There would be some subtle bugs that you could fix sometimes by just saying performSelectorAfterDelay with zero, which would just schedule it for the next run loop, and I’m wondering if you could shed any light on how run loops are used. Because there’s a way to do concurrency where it’s still done on the main thread, but it’s not done in the user interface portion of the main thread, if that makes sense. JEFF: Yeah. So the run loop, it sits there and loops through, as the name implies, waiting for events, and those events could be timers firing, they could be the user interacting with the app, and when you do something like performSelectorAfterDelay or performSelectorOnMainThread, it’s just scheduling that to run in the run loop. Another good run loop trick is NSRunLoop currentRunLoop to get the current one, and then on that you call runUntilDate, and you pass in the current date. At first, it seems like that doesn’t make any sense – why would you run it until now? But what it does is it flushes the queue of everything that’s been queued up until this point. And so some of those subtle problems you mentioned, where there’s a race condition, can be solved that way. Apple’s gotten pretty good at adding completion handlers to some things that cause that. One common one used to be presenting a modal view controller on iOS, and now when you do that they give you a completion block that you can pass in to run something when it’s done, and that’s definitely preferable, right? Anytime you see performSelectorAfterDelay or the GCD equivalent called dispatch_after, that’s a code [inaudible].
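The ‘delay of zero’ trick relies on an event-loop property that other platforms expose too: deferred work runs on the next turn of the loop, not inline. A small Python asyncio analogy of the same scheduling idea (not GCD, just the concept):

```python
import asyncio

# Analogy for performSelector...afterDelay:0 -- call_soon defers work to
# the next turn of the event loop rather than running it inline.
events = []

async def main():
    loop = asyncio.get_running_loop()
    loop.call_soon(events.append, "deferred")   # queued for the next turn
    events.append("inline")                     # runs immediately
    await asyncio.sleep(0)                      # yield one loop turn,
                                                # draining the queued callback

asyncio.run(main())
print(events)
```

The inline append always wins the race here, which is exactly why delay-zero scheduling can paper over ordering bugs: it moves work past everything already queued on the current turn.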
That means that there’s some kind of race condition that you weren’t able to fix with an actual API, and so you should look at those, and if you actually can’t solve it, file a bug with Apple or try to solve that problem in a different way. PETE: When you talk about processes and forking processes – I seem to remember, I can remember it was last year or a couple of years ago at WWDC, they were talking about XPC, this kind of idea of separating up your –. They were mainly talking about it on the OS X side of things, and they had this pitch that you should make small processes that were intercommunicating with this small, fancy XPC thing, and then if they crashed, they would magically not really crash, and if they got –. You could do [inaudible] separation, which I guess is what grown-up UNIX servers use to reduce the attack surface of viruses and that kind of stuff. I guess that sounds kind of like what we were just talking about with forking processes, but I’m not even sure if it offers a different way to manage these interconnecting processes rather than forking, or if it’s [inaudible] left to the low-level UNIX-y stuff. JEFF: All the processes with XPC kind of run on their own; the nice thing is, on OS X you’ve got launchd, which is kind of the launcher daemon that lives in the system. That will be able to restart them and launch them [inaudible] system starts, and so it’s just kind of separating all that code, because the app that draws UI on your screen does not need to be the same app that installs system components, right? So it’s not so much about concurrency as it is about good code separation, but there are some nice concurrency benefits from it, because when your model is that your process sends a message to another process, there’s asynchronicity built in right there. So if the other process spins up threads or forks, finishes the task, and then calls you back – that’s fine. You don’t need to know that.
PETE: Presumably, a lot of those calls between processes are asynchronous. Well, I guess I’m assuming that, but maybe I’m wrong. If I have like a little worker process that does image processing, for example – or maybe that’s a bad example. Let’s go with a security thing. Let’s say I have a small process that just, say, saves files to disk or something, and I wanted to, from my main process, kind of ask it to save the file. Would I kind of fire that off and wait for it at some point in the future to come back and say, “Hey, I’m done,” or can I still keep my synchronous programming model, just calling a method and having it return a value? JEFF: Yeah, and this is where the fact that I don’t use XPC a lot is going to bite me. But one thing you can do is use distributed objects, so you can have one process with an Objective-C object that sends a message to another one, and that other object happens to live in this other process, and just browsing the documentation, that works fine. But you can also send like a SIGKILL or something like that. So not having used it a whole lot, I’m going to say yes, you can do both. ANDREW: There are actually kind of two APIs for XPC, and the newer of the two, which came out in 10.8, is all Objective-C; the older one is C only. With the Objective-C API, you can send messages to objects in the other process as if they were just objects in your process, but there is a restriction that those methods all must have a return [inaudible]. So if you wanna pass data back from a method, it is done asynchronously. You have to do it using a completion block or that kind of thing. I don’t know if I articulated that well, but. PETE: That makes sense. So I can’t ask a process to do something and get the result back immediately, but I can ask the process to do something and know that the other process received that message and is doing it. ANDREW: Yeah. PETE: Okay. Is any of this stuff available for iOS?
I know that for system stuff – I think it was with iOS 6 – they started using XPC for things like mail, the stuff where you’re invoking kind of system-y things like sending mail or browsing photos or whatever. As far as I remember, there was not actually any way for a lowly, non-Apple developer to create their own XPC things. JEFF: Yeah, and that’s still the case. The only way that’s officially sanctioned to send data from one app to another is either a URL scheme – so I launch this URL and the system switches apps to this other app and does something with it – or you can use inter-app audio to send audio data, and there’s an app called Audiobus that’s really nice for automating that, but there’s no straight-up XPC that you can use. If you actually look at the classes and what’s going on, that view controller that comes up for making a new mail message – that is XPC, and it’s a UIRemoteViewController, and that would be a fantastic API for us to be able to use, but at least right now we can’t. PETE: It’s also really frustrating to maintain your [inaudible] automation framework, because suddenly you can’t actually automate any of that UI, which makes people who use your test automation framework very mad at you. JEFF: Yes, and that’s another case where the best you could hope for is that this thing was called correctly, so when your test – that’s an opportunity for a [inaudible]. PETE: Unfortunately in my case, it’s a UI testing framework, so literally the test is like, tap this thing over here, fill out an email address, etc., etc., which for email is okay, because you wouldn’t want to be sending real emails anyway, but for some of the other stuff it’s like, suddenly I can’t test photo browsing anymore because Apple introduced this new thing.
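The URL-scheme hand-off Jeff mentions is just a couple of calls; the `otherapp://` scheme below is hypothetical, standing in for whatever scheme the receiving app registers in its Info.plist:

```objc
#import <UIKit/UIKit.h>

// Send data to another app by launching a URL it has registered for.
// The system switches to that app, which parses the URL itself.
NSURL *url = [NSURL URLWithString:@"otherapp://open?itemID=42"];
if ([[UIApplication sharedApplication] canOpenURL:url]) {
    [[UIApplication sharedApplication] openURL:url];
}
```

Note that this is one-way and coarse-grained: unlike XPC, there is no reply channel unless the other app launches a URL back at you.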
JEFF: Right, and you could go as far as to have a compiler flag that’s different when you’re going to run it in that test harness, but at that point you’re not actually testing your production code – you’re testing your test code. PETE: Yeah, right, which is – that was always the tradeoff with that. I guess you hit that tradeoff with a lot of concurrency things and testing. At some point you’ve got to kinda decide whether you really want to test it or whether you’re happy with – like you said earlier, trusting GCD to do its job or trusting the frameworks to do their job. JEFF: Yeah, and one way you can kind of get around that is you can actually mock an NSOperationQueue, so you could stub out the mainQueue method and return your mock queue, and then make sure that things get added to it correctly. And it seems scary, because when you’re doing it, you’re actually replacing the method that mainQueue calls with a different one, and you’re like, “Is this going to work? Is this going to completely hose my system?” – but it works just fine. PETE: Queues are pretty magical, I guess. I wonder how they do that? I have no idea. I guess they know – because they’re the test runner as well, they can remove any monkeying around that you did at the end of the test, so that you’re not permanently hosing your –. You’re not doing old-fashioned method [inaudible]; you’ve now kind of removed – you kind of shut the door on yourself or whatever. JEFF: Yeah, so the way that it works is they do a lot of implementation swapping and manual message forwarding, and they keep a stack of all the changes you made, and then when that block exits, it pops the stack and undoes those changes. PETE: Neat. CHUCK: So one question that I run into a lot is: generally, for most apps that I write, they’re pretty simple, and so having something block the main thread for half a second or something – or not half a second, but a few milliseconds – doesn’t hurt anything.
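The main-queue stubbing Jeff describes could be sketched with OCMock (an assumption here; Jeff doesn’t name a framework, and Kiwi or hand-rolled swizzling would work the same way). A class mock replaces the `+mainQueue` class method and restores it afterwards, just as Pete and Jeff discuss:

```objc
#import <Foundation/Foundation.h>
#import <OCMock/OCMock.h>
#import <XCTest/XCTest.h>

// A suspended queue so enqueued operations sit there to be inspected
// instead of running immediately.
NSOperationQueue *fakeMainQueue = [[NSOperationQueue alloc] init];
fakeMainQueue.suspended = YES;

// Replace the class method: [NSOperationQueue mainQueue] now returns
// our fake queue everywhere, including inside the code under test.
id queueClassMock = OCMClassMock([NSOperationQueue class]);
OCMStub([queueClassMock mainQueue]).andReturn(fakeMainQueue);

// Code under test schedules UI work on "the main queue"...
[[NSOperationQueue mainQueue] addOperationWithBlock:^{ /* update UI */ }];

// ...and the test can assert it was enqueued correctly.
XCTAssertEqual(fakeMainQueue.operationCount, (NSUInteger)1);

[queueClassMock stopMocking]; // undo the class-method replacement
```

`stopMocking` is the explicit version of the stack-popping cleanup Jeff mentions the framework doing automatically at the end of a test.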
At what point do you really need to start considering whether or not you need this kind of concurrency? JEFF: I’d say that two milliseconds absolutely does mean something, because if you’re trying to animate something like a table view at 60fps, then – if you do the math – you have, what is it, 16.7 milliseconds per frame to do your work? So you really don’t have a lot of time to actually do all the animation, and if your drawing takes 10 milliseconds or your data processing takes another four or whatever, then you’re already up against that limit and you’re going to start dropping frames if you add any more work in there. BEN: Yeah, as somebody who’s been fussing with OpenGL shaders, I literally had 1/60 open in Spotlight to get that same number out, because it definitely is important if you don’t ever want to drop a frame. And it’s a tradeoff, right? If your application is going to be used by 15 people in an internal department where they just verify stuff with their phones, sure – two milliseconds is fine. But if you look at Facebook and the Paper app that just came out, they’ve got potentially – what do they have, like a billion users? They need to make sure that it’s going to look really nice on everybody’s device, on devices that have weird background processes running, and on the oldest ones they support. And as far as when you should decide to start investigating this, it all starts with using Instruments, especially the Time Profiler one, where you can actually see how much of your app’s time is spent in each method. It’s just like, find the expensive parts of your app and either make them less expensive with better algorithms, or put them on a background queue so that it’s okay if they take a while. I’m definitely not a fan of just making things concurrent because they can be; you should always write the code for a reason, to solve an actual problem.
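The frame budget Jeff and Ben are computing is 1000 ms ÷ 60 frames ≈ 16.7 ms per frame. A crude way to see whether a chunk of main-thread work fits inside it is a sketch like this (a quick-and-dirty check; the Time Profiler in Instruments is the real tool):

```objc
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

static const double kFrameBudgetMs = 1000.0 / 60.0; // ≈ 16.7 ms at 60fps

CFTimeInterval start = CACurrentMediaTime();
// ... drawing or data processing done on the main thread ...
double elapsedMs = (CACurrentMediaTime() - start) * 1000.0;

if (elapsedMs > kFrameBudgetMs) {
    // Anything over budget means this frame will likely be dropped.
    NSLog(@"Over frame budget: %.1f ms (limit %.1f ms)",
          elapsedMs, kFrameBudgetMs);
}
```

If the drawing already takes 10 ms and data processing another 4, as in Jeff’s example, there are under 3 ms left before frames start dropping.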
ANDREW: I think there are two rules of concurrency using threads: the first is never do it; the second is [inaudible], I think. [Chuckling] ANDREW: The third rule is, if you really, really need to, then probably still don’t anyway. But if you have to –. PETE: [inaudible] Don’t do it. JEFF: The only code I can think of that I use right now that actually uses NSThread – [inaudible] AFNetworking maintains its own thread to listen for network callbacks, and that’s pretty much it. Everything else that I do can be expressed as individual, discrete tasks using operations or just GCD. JAIM: Do you recommend using operations for even simple things? Like, bring up a view controller, do some relatively long-running process for half a second, and then pop to the main thread and do something. Do you recommend an NSOperationQueue or NSOperation for something that simple? JEFF: Obviously it depends on what you’re doing, but if the thing I’m doing is, let’s say, load some cached data and display a view controller – the nice thing about even using NSBlockOperation to do some lightweight operations with blocks right there is that it’s really easy to refactor. So if I make this [inaudible] operation, I then set that as a dependency of the show-the-view-controller code. At any point in the future, I could refactor that out to be its own subclass of NSOperation without going through and changing all the plumbing of how the system works. So instead of defining the block right there, I would just make a new operation subclass of that kind. So even for simple things, doing it that way when you need to do anything asynchronous sets you up for success in the future. ANDREW: Okay. So down the road you can make modifications as needed. JEFF: It also avoids problems with every Objective-C developer’s first completion block, where they retain self inside the block and the compiler warns them that they shouldn’t do that.
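The lightweight NSBlockOperation pattern Jeff describes might look like this sketch (the operation bodies are placeholders):

```objc
#import <Foundation/Foundation.h>

NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];

// Lightweight block operations, easy to refactor later into
// full NSOperation subclasses without changing the plumbing.
NSOperation *loadCachedData = [NSBlockOperation blockOperationWithBlock:^{
    // load cached data (placeholder work)
}];

NSOperation *showViewController = [NSBlockOperation blockOperationWithBlock:^{
    // present the view controller with the loaded data
}];
// showViewController won't start until loadCachedData finishes.
[showViewController addDependency:loadCachedData];

[backgroundQueue addOperation:loadCachedData];
// Dependencies work across queues, so the UI step can live on the main queue.
[[NSOperationQueue mainQueue] addOperation:showViewController];
```

Swapping `loadCachedData` for an NSOperation subclass later means changing only the line that creates it; the dependency wiring stays identical.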
It avoids those kinds of problems because you’re separating out all the things you need to do. ANDREW: Okay, very cool. JEFF: Another good thing about operations is the cross-queue dependencies. Let’s say you’re developing the app locally, and then you wanna use network data. You still have your display-the-view-controller code that’s scheduled on the main queue, and it’s got a dependency on this loadCacheData operation, and you can go in and add another operation – the load-the-data-off-the-network operation – and set that as a dependency of the cache-data-loading operation. And so without changing the initial code that shows the view controller, you’ve added a separate network layer. ANDREW: Okay, so at that point you can start chaining operations together, and that allows you to do that pretty easily. JEFF: Yup. Another common thing I see with GCD, where it falls down for people, is waiting for a bunch of operations to finish. So if I have 10 objects and I need to make a separate network call for each of those, then when all of those are done I need to do something else. It can be a little bit tough with GCD to get the mechanics right on doing that. There are GCD semaphores you can use to control access, but it’s a lot easier to just make an array of operations and add them all as dependencies. JAIM: That’s a good approach. My first approach would be to yell at the architect. CHUCK: [Chuckles] JEFF: Yes, there’s that. Unfortunately we don’t always get to [inaudible] the web services too. JAIM: Nope, but in a perfect world, everyone does what I say. JEFF: Yes, yes. We have some clients where they’ve got people who write web services, but they ask us, “What should this JSON output look like?” – and we really like those clients. PETE: Got one last random question. Are there any kinds of considerations to do with battery life around this?
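Jeff’s make-them-all-dependencies pattern for fanning out N calls and then running a finishing step could be sketched like this (the model objects and the per-object work are placeholders):

```objc
#import <Foundation/Foundation.h>

NSOperationQueue *networkQueue = [[NSOperationQueue alloc] init];
NSArray *objects = @[/* the 10 model objects */];

// The finishing step: runs only after every fetch has completed.
NSOperation *allDone = [NSBlockOperation blockOperationWithBlock:^{
    // all network calls finished; update the UI, etc.
}];

for (id object in objects) {
    NSOperation *fetch = [NSBlockOperation blockOperationWithBlock:^{
        // make the network call for this object
    }];
    [allDone addDependency:fetch]; // allDone waits on every fetch
    [networkQueue addOperation:fetch];
}

[[NSOperationQueue mainQueue] addOperation:allDone];
```

The GCD equivalent would use `dispatch_group_async` plus `dispatch_group_notify`, which works but, as Jeff says, is easier to get the mechanics wrong with.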
Am I going to potentially drain my users’ battery life more if I’m using all of this fancy-pants, multi-threaded stuff? JEFF: Yeah, you definitely want to be mindful of that. You wanna run your app through the battery-life instruments. At the end of the day, there’s stuff that you need done, but there are things you can do with concurrency, like batching all your network requests together. Some of the new things Apple’s doing in Mavericks, like with timers – you can say “fire at this date, but it’s okay to give it a little bit of wiggle room.” For iOS, worry about battery life, but first off, worry about doing what you actually need to do, and then see if you can optimize it by doing things together rather than separately. PETE: Gotcha. That makes sense. So it’s like all the coalescing stuff that they’re doing now – turning on the radio once rather than five times. JEFF: Exactly. CHUCK: Alright, well let’s go ahead and get to the picks. Andrew, do you wanna start us off? ANDREW: Sure. I have two picks today. My first one is actually issue 2 – which is a little bit of an old issue now – of objc.io. If you haven’t read objc.io, it’s worth going through and reading all their issues. I don’t know how often they do this, but they do an in-depth issue about a single topic in Objective-C and Cocoa programming, and issue 2 was all about concurrent programming, so I think it’s relevant and very well done, and it talks about all kinds of things related to concurrency in some depth. My second pick is CocoaHeads. CocoaHeads is a really loose international organization of local chapters that meet once a month for presentations, to hang out, and to talk about Cocoa and Objective-C. I go to CocoaHeads every month; it’s how I know Chuck, and it’s worth checking out if you’ve never been. It’s a great way to meet new people that are doing the same things you’re doing and to learn new things. Those are my picks. CHUCK: Awesome.
Pete, what are your picks? PETE: Okay, my picks. My first pick is something I mentioned earlier: the Humble Object pattern. This is captured in a book called xUnit Test Patterns, which is this huge, encyclopedic book; it’s quite intimidating, but luckily it has a nice website and you can go and kind of dip in and out of it. And because it’s patterns, they’re kind of small, self-contained little ideas, so you can just read one – and Humble Object is a good one. NSOperation kind of embodies this, so I’ll add a link to the page on the xUnit Test Patterns website. People can read that if they feel excited about humble objects. My other pick is a bit of a mind-expanding one. Talking about all this stuff about concurrency makes me think about shared mutable state and how that’s a bad thing – it causes us no end of pain, and if we could only not mutate state, then our lives would be so much better in a concurrent world. Rich Hickey, the guy who wrote Clojure, understands this very well, and he has a very good presentation where he talks about how Clojure manages these things called persistent data structures. They have nothing to do with persistence in the database sense of the word; they’re more about making things that are immutable and very cheap to copy around and change. It’s really, really cool stuff if you’re into that kind of thing. Then my last pick is a food item. I’ve recently been really enjoying Trader Joe’s Chile Spiced Mango. It’s really nice – it’s spicy, it’s sweet, it fills you up. So that’s my last pick. CHUCK: Alright. Ben, what are your picks? BEN: So I was really depressed over the news this weekend when Philip Seymour Hoffman died; he was one of my favorite actors. I’d like to pick two of his movies, which are just good movies in general, but his roles in those movies were awesome.
The first one is The Big Lebowski, and the second one is Almost Famous. So if you are not a Philip Seymour Hoffman fan – or if you don’t know a lot of his movies – you’ve probably seen 10 or 20 already. He’s just got an impressive career of movies, so go watch a movie this week. My third pick will be a beer pick. I’ve been really enjoying Union Jack IPA by Firestone Walker. It’s delicious. PETE: Woohoo! That is a very good beer pick, Ben. BEN: I aim to please. PETE: I drank one of those last night and I was thinking to myself, “I should pick this beer.” [Chuckles] I pick you. CHUCK: Alright. Jaim, what are your picks? JAIM: Okay, I wanna +1 Ben’s pick on his role in Almost Famous. It’s a bit odd because he played Lester Bangs and they passed in similar circumstances, so that’s kind of an odd coincidence. [Inaudible] a beer pick. So I was at the store – one of my favorite breweries is Weyerbacher, and they’ve got a Belgian tripel they do called Merry Monks. It’s pretty fantastic, and I had forgotten that I had liked it so much when I bought it a year ago. I’m like, “Oh, they made a tripel!” and I looked at it, I bought it, I tasted it – “Wait a minute! This is that fantastic thing I drank a year ago.” It’s very good, so try it out: Weyerbacher Merry Monks. Those are my picks. CHUCK: Alright. I’ll jump in here with a few picks. One of them was something that was referred or recommended to me the other day. It’s called focusatwill.com, and basically it’s this – I don’t know exactly how to describe it. It’s kind of a music service, except they actually have a science page – I’ll put the link to that in here as well – that explains the science behind how they pick the music. What they basically explain is that it helps you quiet down your limbic system and stuff. Anyway, it sort of shuts off the fight-or-flight center of your brain and allows you to focus more.
I’ve been trying it for the last day or two, and I really do feel like I can focus more, but I’m not completely convinced yet whether the science is good – because I don’t know anything about neuroscience at all – or whether there’s some kind of placebo effect or something else. But anyway, it’s kind of scary how focused I got yesterday at one point while I was listening to the music and just working through some of the stuff that I was doing. I think it’s pretty cool, and I’m probably going to keep on using it, so I’m really, really enjoying that. Another pick I have is Discourse, which is open-source forum software. It’s written in Ruby on Rails with an Ember.js front end; it’s easily the best forum software to use. I’m actually using it for the Ruby Rogues Parlay group, which is a discussion forum on Ruby centered around the Ruby Rogues podcast. I also have one for the JavaScript Jabber podcast. I’m considering opening one for this show so we can discuss iOS stuff in a forum on a regular basis. If you’re interested in that, just send a tweet to @iPhreaks and let us know that you’d like to have that kind of a forum. The way that we’ve done it on the other shows is that you can sign up at various levels; the lowest level is $10 a year. We found that that’s basically enough to keep the trolls out, and then you can choose a higher level of donation if you wanna give back to the show. So if you’re interested in that, then I’ll go ahead and set that up, but I wanna make sure there’s enough interest to get a bunch of people into the forum and get some discussions going, so it doesn’t kinda turn into a dead forum. ANDREW: What about the trolls that are actually on the show? [Laughter] ANDREW: How do you keep them out? CHUCK: With a big stick. So yeah, those are my picks. Jeff, what are your picks? JEFF: The first one’s going to be hardware. I got a Fitbit Force recently, and it’s really, really nice.
The nicest thing about it is the sleep tracking: I just hold down the button before I go to sleep, and then when I wake up I can see a graph of how I slept. I totally found out that I wake up every morning for a minute – no idea what’s going on, but that’s what I do. My next pick is a beer: New Holland Dragon’s Milk. It’s a bourbon barrel stout, and it’s fantastic. They come in the big bottles at the store, and one of those gets me pretty sleepy – it’s 10%. It’s a bourbon barrel stout where they make a stout and age it in bourbon barrels. New Holland also makes a Beer Barrel Bourbon, which is basically the reverse, and that’s also fantastic. PETE: That’s awesome! I’ve never heard of anyone doing that. BEN: So they just take the two barrels and swap them. JEFF: Exactly! JAIM: Makes a lot of sense when you say it that way! JEFF: Right? It’s like Amazon selling their excess CPU cycles, you know? [Chuckling] You know, we have this bourbon barrel already; we might as well do something with it – the bourbon that was inside of it or whatever. JAIM: That bourbon had to go somewhere. CHUCK: Yeah, well, I heard an interesting fact that barrel-aged ale actually came about from the barrels that used to ship beer way back in the day. Anyway, well, thanks for coming, Jeff. JEFF: Yeah, thanks for having me. CHUCK: It was an interesting conversation. Hopefully we’ve helped a few people figure out how to approach these problems, because they’re not always simple. I’ve found that with concurrency, sometimes you just dig the hole faster. JEFF: Yes. CHUCK: You do more [inaudible]. ANDREW: [inaudible] get two shovels. CHUCK: Yeah, and if people have any questions, what’s the best way for them to find you [inaudible]? JEFF: The fastest is probably on Twitter, @SlaunchaMan, which is like the letter ‘S’-launch-a-Man. I’m also [inaudible] on App.net. CHUCK: Alright. Thanks again. We’ll wrap the show, and we’ll catch you all next week!
