I’m just finishing up my first App Store-bound project written in Swift. It’s nothing hugely exciting, just a giant calculator sort of application. I chose Swift because its static typing really made me think about the data layer, and how data flows through the application. What I missed terribly was KVO/KVC, and I’m not alone. Brent Simmons has also mentioned this. As someone who’s used a lot of KVO and KVC over the years, I find they’ve helped me ship code a lot more quickly, and they’ve been among the most valuable features of the Mac frameworks. A lot of developers who are new to the platform aren’t aware of these constructs.

The idea is something like this: we’ve done a really good job of optimizing the model layer of Model/View/Controller applications, and Swift has done an amazing job there. Static typing provides huge advantages in reliability and coherency. But the Obj-C philosophy is really about reusable components. In that philosophy, components written by one vendor need a way to seamlessly talk to components from another, and this is really where Swift and static typing fall flat. A view from one vendor needs a way to render data from a model built by another component. We find this even in the system frameworks, where objects from a component like Core Data need to be passed into a controller to be searched, or…

Hold on. I can hear the Swift developers already. “We have protocols and extensions for that! I can make a component from one source talk to a component from another source. All I need to do is define a protocol in an extension and I can have my static typing and everything!”

Ok. Let’s go down the rabbit hole.

The Swift Protocol Route

Let’s take a classic case, a scenario Apple actually shipped on the Mac in Mac OS X 10.4. I want a controller that, given an array of objects as input, will filter the array based on a search term and output the result. The key here is that my search controller doesn’t know the input type beforehand (maybe it came from a different vendor), and my input types don’t know about the search controller. I want a reusable search controller that I can use across all my projects, with minimal integration effort, to save implementation time.

Using protocols, you might define a new protocol called “Searchable”. You extend or modify your existing model objects to conform to the protocol. Under the “Searchable” protocol, each object would implement a function that takes a search term string and returns true or false based on whether the object thinks it matches the term. Easy.
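A minimal sketch of what that might look like. The protocol name, function signature, and the cut-down Employee model are all my own invention here:

```swift
import Foundation

// Hypothetical protocol; the name and signature are my own invention.
protocol Searchable {
    // Return true if this object should appear in the search results.
    func matchesSearchTerm(term: String) -> Bool
}

// A minimal stand-in for the Employee model used later in this post.
class Employee: NSObject {
    var firstName: String?
    var lastName: String?
}

extension Employee: Searchable {
    func matchesSearchTerm(term: String) -> Bool {
        // Every conforming type ends up rewriting its own matching logic.
        return (firstName ?? "").rangeOfString(term) != nil
            || (lastName ?? "").rangeOfString(term) != nil
    }
}
```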

But there are a few problems with this approach. The controller has become blind to how the search is actually performed, which isn’t what I wanted at all. The idea was that the controller would perform the search logic for me so I didn’t have to continuously rewrite it, and now I’m rewriting it for every searchable object in my project. If I need search to be customizable, with the user selecting which fields to search, or options like case sensitivity or starts-with/contains matching, those options now need to be passed down into each Searchable object, with logic written in each object to handle them. Reusable components were supposed to make my code easier to write, and this sounds worse, not better.

Maybe I could flip this around. Instead of writing extensions for my objects, I could subclass a search controller object and fill in the details about my objects. But I’d still have the same problem: I’m writing a lot of search logic all over again, when the point is to reuse my search logic between applications.

(If you’ve used NSPredicate, you probably know where this is going.)

Function Pointers

Alright, so clearly we were implementing this naively. We can support multiple searchable fields. When the search controller calls into our Searchable object, we’ll hand back a map of function pointers to searchable values, keyed by a name for each field. This way all the logic stays in the controller: it just calls the function pointers to get values, decides whether the object meets the search criteria, and then either keeps or discards it. Easy. We’re getting closer to a workable solution, but now we have a few new problems.
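In Swift, those “function pointers” would most naturally be closures. A rough sketch of the idea, with invented names throughout:

```swift
import Foundation

// Hypothetical protocol: each field name maps to a closure that
// returns that field's current value.
protocol FieldSearchable {
    var searchableValues: [String: () -> String?] { get }
}

// A minimal stand-in for the Employee model used later in this post.
class Employee: NSObject {
    var firstName: String?
    var lastName: String?
    var title: String?
}

extension Employee: FieldSearchable {
    var searchableValues: [String: () -> String?] {
        // The controller calls these closures to fetch values by name,
        // so the matching logic can live entirely in the controller.
        return [
            "firstName": { self.firstName },
            "lastName":  { self.lastName },
            "title":     { self.title },
        ]
    }
}
```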

Live search throws a wrench into this whole scheme. Not only do we need a way to know whether an object meets the search criteria, we also need a way of knowing when an object’s properties have changed in a way that could change its inclusion in our search. This is especially important if I have multiple views. Maybe I have a form and a graph open for my model objects in different windows. If I change an entry in the form, I’d want the graph to live update, and the form view and the graph view might have no knowledge of each other. So we need a way to call back to an interested observer when a value changes. We could use a timer to check every second or so for changed values, but in some scenarios that would be an expensive and needless operation; performance and battery life would significantly suffer. And it’s more code we don’t want to write.

There’s also the issue of nested values. Maybe I’m searching objects that represent employees, but I also want to search on the name of the department each employee belongs to. In my object graph, departments will very likely be another model object type with a relationship to employee objects. So now I’m looking at returning function pointers not just for my employee objects, but for the department objects they belong to. And I need to communicate changes not only in my object’s own values, but in its relationships to other objects.

There’s also the small issue of this approach not working with properties. As far as I know, you can’t create a function pointer to a property. So now I need to wrap all my properties in functions.

This is getting complicated again. Once again I’m writing a lot of code, and not saving any time at all. There has got to be a better way.

Key Value Coding

Well, fortunately, after years of going through this same mess in other languages, Apple came up with Key Value Coding as a solution.

Key Value Coding is extremely simple: it’s a protocol that allows any Obj-C object to be accessed like a dictionary. Its properties (or getter and setter functions) can be referred to by using their names as keys. All NSObject subclasses have the following functions:

func valueForKey(_ key: String) -> AnyObject?
func setValue(_ value: AnyObject?, forKey key: String)


(Reference to the entire protocol, which contains some other interesting functions, is here.)

Now my search controller is easy. I can simply tell the search controller all the possible searchable properties like so:

class Employee: NSObject {
    dynamic var lastName: String?
    dynamic var firstName: String?
    dynamic var title: String?
    dynamic var department: Department?
}

searchController.objects = SomeArrayOfEmployees
searchController.searchKeys = ["firstName", "lastName", "title"]


Now I can have a generalized search controller that I can share between projects or provide as a framework to other developers, and that doesn’t have to know anything about the Employee object ahead of time. I can describe the shape of an object using its string key names. Under the hood, my search controller can call valueForKey, passing the keys as arguments, and the object will dynamically return the values of its properties.
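As a sketch of what those internals might look like (a simple contains-style match; everything here besides valueForKey is an assumption of mine, not the real controller):

```swift
import Foundation

// A sketch of the search controller's core loop.
class SearchController {
    var objects: [NSObject] = []
    var searchKeys: [String] = []

    func filter(term: String) -> [NSObject] {
        return objects.filter { object in
            for key in self.searchKeys {
                // valueForKey looks the property up by name, with no
                // compile-time knowledge of the object's actual type.
                if let value = object.valueForKey(key) as? String
                    where value.rangeOfString(term) != nil {
                    return true
                }
            }
            return false
        }
    }
}
```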

Another great example of the advantages of keys is NSPredicate. NSPredicate lets you write a SQL-like query against your objects, which is hard to do without being able to refer to your objects’ fields by name.
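For example, with the Employee keys above (the searchTerm and employees values are assumed; CONTAINS[cd] is a case- and diacritic-insensitive match):

```swift
import Foundation

let searchTerm = "smith"
let employees: [NSObject] = []  // assume an array of Employee objects

// The keys appear by name right inside the query string.
let predicate = NSPredicate(format: "lastName CONTAINS[cd] %@ OR title CONTAINS[cd] %@",
                            searchTerm, searchTerm)
let matches = (employees as NSArray).filteredArrayUsingPredicate(predicate)
```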

There is a catch. If you’re a strong static typing proponent, you’ll notice that none of this is statically typed. I’m able to lie about what keys an object has, as there is no way to enforce beforehand that a key name I give as a string actually exists on the object. I don’t even know what the return type will be; valueForKey returns AnyObject.

Quite simply, I don’t think static typing helps this use case. I think it hurts it. I don’t see a way to make this concept workable without dropping static typing, and I think that’s ok. Dynamic typing came about because of scenarios like this, and it’s ok to use it where it works better. And all isn’t lost: when our search controller ingests this data, if the controller is written in Swift, it has to bring these values back into a type-checked environment. So even though static typing can’t cover the whole use case, it improves the reliability of using Key Value Coding by validating that the values for keys are actually the types we assumed they would be.

Key Value Paths

There are a few problems KVC hasn’t solved yet. One is the object graph problem discussed above: what if we want to search the name of an employee’s department? Fortunately KVC solves this for us too! Keys don’t have to be just one level deep; they can be entire paths!

The KVC protocol defines the following function:

func valueForKeyPath(_ keyPath: String) -> AnyObject?

A key path takes the form relationship.property (with one or more relationships); for example, “department.name” or “department.manager.lastName”.


Hey look, that’s uhhhh, exactly our demo scenario.

So now I can traverse several levels deep in my object. I can tell my search controller, after some modification, to use a key path of “department.name” on my employee object.

searchController.objects = SomeArrayOfEmployees
searchController.searchKeyPaths = ["firstName", "lastName", "title", "department.name"]


Now internally, instead of calling valueForKey, my search controller just needs to call valueForKeyPath. Single-level paths work fine with valueForKeyPath, so my existing keys will keep working.
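For example, on a single employee (someEmployee is assumed to be an Employee instance from earlier):

```swift
import Foundation

// Follows the department relationship, then reads its name property.
let departmentName = someEmployee.valueForKeyPath("department.name") as? String

// A plain key is just a one-element path, so this still works too.
let lastName = someEmployee.valueForKeyPath("lastName") as? String
```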

Notice that valueForKey and valueForKeyPath are functions called on your object. I’m not going to do a deep dive right now, but you could use these to implement fully dynamic values for your keys and key paths. Apple’s implementation inspects your object and looks for a property or function whose name matches the key, but there’s no reason you can’t override the same function and do your own lookup on the fly. That’s useful if your object is abstracting JSON or perhaps a SQL row.
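A sketch of what that might look like for a JSON-backed object. Here I override valueForUndefinedKey:, which is the hook KVC calls when no matching property or method is found; the class name and dictionary are my own invention:

```swift
import Foundation

// A dynamic object backed by a JSON-style dictionary instead of
// real properties.
class JSONBackedObject: NSObject {
    var json: [String: AnyObject] = [:]

    // Called by KVC when no property or method matches the key,
    // letting us resolve keys against the dictionary on the fly.
    override func valueForUndefinedKey(key: String) -> AnyObject? {
        return json[key]
    }
}
```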

It’s also important that this works on any NSObject. I can insert placeholder NSDictionary objects for temporary data right alongside my actual Employee objects, and the same search logic will work across them. As long as an object has lastName, firstName, title, and department values, its type no longer matters.
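For instance (searchController and someEmployee are assumed from the earlier examples):

```swift
import Foundation

// A placeholder record with the same keys as an Employee.
// valueForKey on an NSDictionary looks up the dictionary's own keys,
// so it can sit in the same searchable array as real model objects.
let placeholder: NSDictionary = [
    "firstName": "Jane",
    "lastName": "Placeholder",
    "title": "TBD",
]

searchController.objects = [someEmployee, placeholder]
```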

Key Value Observing

Well, all that’s great, but we still have one more issue: we need to know when values change. Enter Key Value Observing. Key Value Observing is simple: any time a property is set (or a setter function is called), a notification is automatically dispatched to all interested objects. An object can signal interest in changes to a key’s value with the following function:

func addObserver(_ anObserver: NSObject,
      forKeyPath keyPath: String,
         options: NSKeyValueObservingOptions,
         context: UnsafeMutablePointer&lt;Void&gt;)


(It’s worth checking out the other functions. They can give you finer control over sending change notifications. Also lookup the documentation for specifics on the change callback.)

Notice that the function takes a key path. An employee’s department name will change not only when the department’s name changes, but also when the employee’s department relationship changes. Observing the “department.name” path covers both cases, firing on a change to any object along the path.

It’s also worth checking out the options. We can have the change callback provide both the new and old value, or even the inserted rows and removed rows of an array. Not only is this a great tool for observing changes in objects that our class doesn’t have deep knowledge of, but it’s just great in general. This sort of observing is really handy for controlling add/remove animations in collection views or table views.

In our search controller, we just need to observe all the keys we’re searching on all the objects we’re given, and then we can recalculate the search on an object-by-object basis. There are no timers running in the background; the change can fire directly from an object’s value being set.
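A sketch of the observing half of that controller; the class and method names besides the KVO APIs are assumptions of mine:

```swift
import Foundation

// Context pointer used to recognize our own observations.
private var searchContext = 0

class ObservingSearchController: NSObject {
    var objects: [NSObject] = []
    var searchKeyPaths: [String] = []

    func startObserving() {
        for object in objects {
            for keyPath in searchKeyPaths {
                object.addObserver(self, forKeyPath: keyPath,
                                   options: [.New, .Old],
                                   context: &searchContext)
            }
        }
    }

    override func observeValueForKeyPath(keyPath: String?,
                                         ofObject object: AnyObject?,
                                         change: [String: AnyObject]?,
                                         context: UnsafeMutablePointer<Void>) {
        if context == &searchContext {
            // Re-evaluate just this one object against the search term.
        } else {
            super.observeValueForKeyPath(keyPath, ofObject: object,
                                         change: change, context: context)
        }
    }
}
```

One caveat worth remembering: every addObserver call needs a matching removeObserver(_:forKeyPath:) before the observed objects go away.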

So what’s the problem in Swift?

I’ve mentioned one problem already: only classes that subclass NSObject can provide KVO/KVC support. Before Swift, that wasn’t a major problem. Now with Swift, we have classes that don’t subclass NSObject, and non-class types. Structs can’t support KVO/KVC in any fashion.

The properties and functions being observed also have to be dynamic. Again, not a problem in Obj-C, where all functions and properties are dynamic. But not only are Swift functions not dynamic by default, some Swift types can’t be used with dynamic members at all. Want to observe a Swift enum property? Can’t do that.
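To illustrate the restriction (a cut-down example of my own):

```swift
import Foundation

class Employee: NSObject {
    dynamic var lastName: String?   // fine: dynamic, representable in Obj-C

    enum Status { case Active, OnLeave }
    var status = Status.Active      // works as a plain property...
    // dynamic var status = Status.Active
    //   ^ error: a Swift enum isn't representable in Objective-C,
    //     so it can't be marked dynamic or observed with KVO
}
```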

Even more worrisome, the open source Swift distribution could possibly not include any dynamic support, and KVO/KVC are defined as part of the Cocoa frameworks, which aren’t likely to be included with open source Swift. Any code that wants to target cross platform Swift might be forced to avoid KVO/KVC support. Ironically, just as we could be entering a golden age of framework availability with Swift, we might be discarding the technology which makes all those frameworks play cleanly with each other.

So what would I like to see from Swift?

  • Include KVO/KVC functionality as part of core Swift: The current KVO/KVC are defined as part of Foundation. They don’t need to be moved, but Swift needs an equivalent that can bridge, and is cross platform.
  • Have more dynamic functionality on by default: Another issue is that dynamic functionality is currently opt in. This is for a good reason: things like method swizzling won’t work with Swift’s static functions. But Apple could split the difference: Allow statically linked functions (and properties) to at least be looked up dynamically. This would allow functionality like KVO and KVC to work without giving up direct calling of functions or opening back up to method swizzling.
  • Have the KVC/KVO replacement work with structs and Swift types: Simple. Enums in Swift are great. Now I just want to access them with KVC and observe them.

At work, we support a lot of platforms. We support iOS and Android, Windows, Linux, supermarket checkout scanners, Raspberry Pis, old Windows CE devices, and more. And all the devices run our same (large) core code, and all that code is written in C++. I’m not the biggest fan of C++. But there’s no doubt when we need to write something that works across a range of platforms, it’s a rich, commonly understood tool. It’s also been a massive blocker for Swift adoption for us.

For our mobile customers, we do provide both Java and Obj-C APIs. They’re both just wrappings around our C++ core, and they do the conversion from all the Obj-C or iOS native formats into the raw buffers we need to handle in our C++ core. Whenever I look at doing a Swift native SDK in the future, I’m still stuck on not having native C++ support from Swift code. In order to provide a pure native Swift API in the future, I’d have to wrap our ever growing C++ source base once in Obj-C, and then wrap it again in Swift. It just doesn’t make sense to wrap the same code twice over.

As WWDC fast approaches, and rumors of the next OS X update focusing on polish persist, I thought I’d go over my wish list for what I’d like to see Apple address.

Vulkan/Enhanced Graphics Support

The graphics situation on OS X is bad. 3D graphics on OS X have been rocky from the beginning (the first ever public OpenGL demo on OS X, which sadly I cannot find video of), but Apple in the past was willing to participate in a benchmark war with DirectX. At WWDC 2006, I remember Apple was still rolling out new features to try and compete with DirectX’s performance. And with 10.6 Apple introduced OpenCL which went on to become an industry standard for GPU based computation.

But recently, while Linux and Windows have had continuous cycles of improvement for 3D graphics, OS X has been seeing improvements in fits and starts. Mavericks, which moved to OpenGL 4, looked like it might be a recommitment to improving 3D graphics on OS X. But Yosemite didn’t seem to bring any real improvement to OpenGL at a time when OpenGL on the platform is already seriously lagging.

There is some speculation that Apple will bring Metal to Mac OS X, which from what I’ve heard seems like a strong possibility. Metal on Mac OS X might provide a direct route to high performance 3D and 2D graphics, but Apple might have a long road ahead in convincing developers to adopt Metal, and GPU manufacturers to write drivers for it. OpenGL driver quality on OS X is already so hit and miss that it’s hard to believe Apple could get AMD and Nvidia to write good drivers for a proprietary Metal platform. I have no doubt that Apple could get a few software vendors on stage to demo Metal support on the Mac (Epic seems like a good possibility), but there are several third party vendors (like Adobe or Valve) that I don’t see being eager to support another standard. I could see Adobe dragging their feet for a long time on supporting Metal, or possibly never supporting it.

Recently the next version of OpenGL, called Vulkan, was announced, and I think this would be a great chance for Apple to really improve cross platform graphics on OS X. Vulkan already has strong support from Valve (which is huge for a developer that used to be firmly in the DirectX camp), and is more likely to draw support from Adobe. The word is that Apple is still on the fence about supporting Vulkan on the Mac. Vulkan covers the same capabilities as Metal, which puts it in an awkward position on OS X. But a lot of smaller developers, especially in the professional software community, won’t be able to put forth the effort to port to Metal, so it would be foolish not to support Vulkan. Vulkan also promises a simplified driver architecture, which could be a boon for the traditionally complex development of OpenGL drivers on Mac OS X. I’d even suggest that Apple should be spinning down development of Metal and really focusing on Vulkan instead, but I think the politics of Apple wouldn’t really allow for that.

At the very least, seeing support for OpenGL 4.5 in Mac OS 10.11 would be ideal, but it would be great to see Vulkan support. Metal would be an improvement as well, but I worry it would further alienate some developers of both professional software and games if only Metal were present as a modern API.

A Modern Window Server

(I don’t have visibility into the OS X window server, so this section is based on speculation on how Apple has implemented some parts of Yosemite. Please let me know if there are any corrections to be made.)

OS X’s window server is another component that had a very impressive start, looked to have a very impressive future, and then suddenly seemed to have public facing development stop while competing platforms passed it by.

The very first version of the Mac OS X window server did every bit of its drawing on the CPU. That made sense at the time: capable dedicated GPUs were still not common on the hardware OS X supported, and the required OpenGL support may not have been present either. But as a result, basic things like dragging windows across the screen were choppy on the first versions of Mac OS X.

In 10.2, Apple added something called Quartz Extreme. While everything inside of a window was still drawn on the CPU under Quartz Extreme, the contents of the windows themselves were stored on the GPU, meaning operations like moving a window around on the screen became very quick. There was a further project called Quartz 2D Extreme (later QuartzGL), which would have drawn the contents of the windows themselves with the GPU. This project was comparable to Windows Vista’s Desktop Window Manager, which promised similar features to unlock fancy effects such as windows with glass-like transparency. While the new compositor was the source of a lot of Vista’s early performance issues, steep hardware requirements, and consumer confusion (Microsoft had two tiers of Vista certification for hardware, and one was not compatible with the Aero compositor), Microsoft’s investment in a fully GPU accelerated window server eventually paid off with a fast and responsive user interface that is capable of rendering advanced effects with ease.

Apple’s QuartzGL project eventually fizzled out due to performance issues (some interesting benchmarks are still posted here). With 10.11, it might be time for Apple to take a second look at extending QuartzGL further. Microsoft has pushed through their performance issues, and many of the visual effects Apple is trying to do with Yosemite could be sped up by GPU acceleration. The “glass” vibrancy effect OS X is using doesn’t seem to be GPU accelerated, which might be leading to some performance issues (I sometimes see vibrancy-backed views “lag” behind their window being drawn, which leads me to believe the vibrancy effect isn’t drawn as part of window compositing on the GPU). GPUs are really good at these sorts of effects, and it would be great to have a window server that could do things like flag a portion of a window to the GPU as transparent, and then attach a shader to that portion of the window. As with Windows, not every element on the display would have to be drawn on the GPU. Certain views could opt in to GPU-backed drawing (such as the vibrancy view), which would help the performance of views that could benefit, without hurting the performance of other views.

My understanding is that QuartzGL is still present in some form in Yosemite, and that applications can opt in to it, but it would be great to see QuartzGL taken a few steps further to allow the advanced OS X vibrancy effects to be GPU backed.

Networking and WebKit Enhancements

Yosemite’s networking woes have been well documented at this point, but I’ve also noticed a decline in Safari’s quality. HTML5 web views in particular don’t behave correctly: the controls always seem to be in the wrong place, video is scaled incorrectly, and sometimes it doesn’t play at all. I’m having to open Chrome more and more often these days, and I really wish I didn’t have to.

Pro Hardware

As a follow-up to my post about the Mac Pro, I had one more thing that I wouldn’t mind seeing at WWDC, but it’s really a long shot. If Apple doesn’t really care to continue maintaining their pro hardware, they should think about licensing OS X. It doesn’t have to be a licensing free-for-all; maybe they could license only to certain HP product lines. And Apple hasn’t been against licensing in the past. Apple’s failures in markets like the server market weren’t necessarily because OS X made a poor choice for servers, but because the company didn’t seem willing to provide the support services that market requires. Apple’s pro hardware may be in a similar place.

If Apple really comes out supporting pros over the next year in a big way, I’ll gladly eat my words. But I’m becoming more and more convinced that licensing OS X would be the best course of action: it would let Apple keep users who are happily buying Apple software (and would be paying Apple licensing fees), rather than having them leave the platform entirely. If Apple isn’t willing to dirty themselves with towers that can be opened and customized, or 1U rack mount servers, they could let someone else serve those markets for them.

End of Sandboxing For the Mac App Store

The Mac App Store is slowly dying, and sandboxing is a big reason. Apple either needs to fix sandboxing, or remove it as a requirement. This is again a place where Windows 10 is pulling ahead of Apple.

Real Exchange Support on OS X

Microsoft Outlook, especially the new beta version, provides a good Exchange client on Mac OS X. But I’d love to see the built-in support enhanced. Mail.app still doesn’t support push email, and the Exchange client in iCal still sucks (a better meeting availability assistant, please).


A lot of the suggestions I’m making are not user facing, but foundational. OS X hasn’t had many foundational enhancements in almost 10 years (not counting security oriented things like Gatekeeper). Microsoft has put a lot of work into their foundations, at a cost in time and money that Apple has so far been unwilling to commit. My worry is that if Apple doesn’t concentrate on the less glamorous aspects of OS X, like graphics performance, the user experience will continue to decline. Apple is simply stacking too many new features on an aging foundation. Beyond stabilizing the features they already have, Apple really needs to look at building a foundation for the next 10 years of the Mac.

I’ve been looking at replacing my 2008 Mac Pro, and as much as I’d like to replace it with a new Mac Pro, the Mac Pro just looks so odd when you compare it to both Windows products and other Macs.

The most visible missing feature that has been widely commented on is the lack of an Apple 5k display. I don’t really want to buy a Mac Pro at the present time because I’m pretty sure it’s going to get outdated by Thunderbolt 3, and I’m not sure what the 5k support picture will look like for the current Mac Pro in the future.

(There are Star Trek spoilers here.)

My girlfriend and I have been watching Star Trek: The Next Generation in order, from the beginning. This morning we just got to “All Good Things”, the final episode of The Next Generation. I’ve spent the rest of the day thinking about why Star Trek used to be so popular as a TV series, and why it feels like it’s lost its way. And then I had a weird thought. Star Trek had certainly had its ups and downs after The Next Generation went off the air in 1994, but after the 90s ended, things went really badly. Star Trek: Enterprise, the last TV series, was prematurely canceled. Plans for a fifth Next Generation movie fell through after the fourth, Star Trek: Nemesis, was a commercial failure. And it wasn’t just that people were over Star Trek. Star Trek, with the same writers and the same producers, had seriously declined in quality. So what happened? And then I had a totally crazy thought: could it have been 9/11? I know, crazy. But after 2001, something about Star Trek got weird. Star Trek got unnaturally… dark.


The release of the iWatch SDK has given developers a head start in adding WatchKit to their applications, but with the release of the iWatch Simulator, Apple has also given developers valuable insight into how the iWatch API stack is shaped. The WatchKit framework runs on the iOS device, but to power the simulator Apple has also shipped many of the device-side frameworks. With a little elbow grease, we can learn a lot about the iWatch’s software stack.


One of the basic Objective-C design paradigms is Model-View-Controller.

Views typically have two components. There is the logic that drives a view: formatting, management of subviews, and lifecycle management. And there is the appearance of the view: the positioning of subviews, the color of the view and its subviews, how the view scales, and so on.
