Wednesday, February 27, 2008

Functional Programming, the opiate of some people (or... Tripping on CAL)

If you don't like opinion pieces on the joys of particular programming styles/languages - don't read on ;-)

It has been months since I wrote any functional code in earnest. Most recently I have been busy coding in Objective-C, and I've a long history with Java, C++ and C, with the latter language being used for many years in conjunction with a very elegant home-brew object-like framework.

As something of a student of language technology (principles and practice), I usually follow all the news and buzz to be found spread around the infowebs, in such places as Lambda the Ultimate, various blogs and language mailing lists (such as Ruby-core).

Just today, I had reason to fire up the CAL tools again in order to knock up some prototypes. The overall experience was surprising to me, in terms of the effect that producing a program in a (lazy) functional language actually had on my psyche. Having given functional programming a rest for several months, I was almost "accosted" by the flavours of functional programming afresh. Being one with a reasonable grasp of the concepts, I naturally did not have the learning curve that one would experience from scratch (i.e. this is not an ab initio experience), yet the hiatus was sufficient to bring the differences between functional and procedural/OO programming into sharper relief, and evidently to tickle my brain anew in areas that I had forgotten were differentiated for the kind of cognition that you do when crafting a functional solution.

I was using the combination of CAL's textual and graphical development tools (the Eclipse plug-in and the Gem Cutter to be precise). I have previously found that the combination of these two can be very potent, especially in the exploratory stages of development - when you are surveying the problem domain and starting to form hypotheses about the structure of a solution.

Once I had completed the tasks at hand, I sat back, and was aware of the 'buzz' that I was feeling. This is the buzz you get from general problem solving success, presumably a literal reaction to endorphins released (I assure you there were no other overt substances involved, unless you count my normal coffee intake that morning!). Thinking about why I was feeling so chipper, I surmised that it was for two reasons:
1. A strong sense of puzzle solving.
2. A sense of the elegance of the emergent solution.

On the first point, it occurred to me that the whole exercise of assembling a functional solution is, for the most part (that is, ignoring some of the operational issues), one of dealing directly with fitting shapes together. This is of course an analogy, but what is going on in my head feels like the assemblage of shapes - fitted together perfectly to create the required abstracts. Of course, this is nothing more than a whiff of mathematics in general - the perception of structures and order, and formal assemblies of such things. I think the whole perception of 'shape' is made even more tangible by the ingredient of having the visual development environment alongside the syntactic one - two kinds of symbols to aid the perception of abstracts. In CAL, the visual environment is also much more interactive with a lot of information-rich feedback (rollover type inferencing on any argument/output, popup lists of proposed functional compositions etc.). I suppose that this sense of "assemblage" is at least partly responsible for the strong sense of puzzle solving. One experiences similar sentiments having designed a nice OO pattern/architecture, but not in the same way.

To the second point, concerning elegance: this is something that is strongly related to the way symbols have such a direct relationship to the semantics of functional programs. Any part of a CAL program that actually compiles makes a formal and precise (though not necessarily narrow) contract with the rest of the world as to the quantum of processing it contributes. Part of the elegance comes from the richness of the type system and the sets of types of values that can be described as forming this contract. Another part, however, comes from the fact that the function (as a package of processing) is a highly flexible unit. The contract that functions make in a lazy functional language concerns the relationships between the input and output values, but these relationships can be composed into larger structures in any way that satisfies the strictures of the type system. Elegance, though, is merely a perceived quality; what is more important is how the manifestation of elegance is related to practical effects on the economics of software development.

At a low level, this manifests as a beautiful ability to build abstracts through binding arguments in any order as larger logical structures are created. In other words, the way in which you abstract is fluid, and the little semantic package of the single function can be utilised in so many different ways, compared to the strict notion of a 'call' in procedural languages.

At a high level, this behaviour results in the functional packages being able to be combined (and importantly, reused) in very powerful ways, with a direct bearing on the way the intended semantics can be conjured, but always under the watchful eye of the type system - which is able to act in both a confirming and informational way. The latter can feel like epiphany. Many times have I been focussed on a specific problem, and composing functions for that purpose, only to have the compiler's inferred type tell me that I've produced something much more general and powerful than I had thought (sometimes it even tells me that I've been stupid in overlooking the function already in the library that does exactly what I'm trying to do again!).

Today's fun with CAL had all these facets. The qualia of functional programming are quite different to those of OO programming, and in many ways you are much more constrained than in the latter. Good OO design is certainly critical to creating correct, efficient and maintainable software, and while there is therefore a real spectrum of 'better' and 'worse' designs/implementations for a given problem, much of the structure of an OO program itself is informal to the implementing language and lives in the minds of the developers who created it (and perhaps in their technical documentation). The reasons why certain encapsulations were chosen over others, and why certain code paths/sequences are established, are undoubtedly built on reason, but they become major artifacts of the particular solution. In the functional world, things are both more sophisticated and more simple at the same time (naturally, 'complexity' has to go somewhere). Functions are not the atomically executed entities of the procedural world, and their algebraic composition is a very powerful method of abstraction, as described earlier. The type system is much more pervasive and complete, which is a double-edged sword: it forces you to think about the realities of a problem/solution much earlier (which feels like constraint), but it also enables the aforementioned much more meaningful 'conversation' with the compiler. The up-front requirement to conform to formal boundaries in expressing a solution costs a little more at the beginning, but pays back handsomely downstream - both in terms of the earlier/deeper accrued understanding of the problem domain, and the much higher likelihood that better abstractions have been selected. As any first year Comp Sci undergraduate knows, the costs of correcting bad assumptions/design later in the software lifecycle are much higher than earlier. There are still choices about encapsulation in functional languages (which modules to create, how to deliver functionality in appropriately sized quanta to allow requisite reuse and generality of components), but the packets of semantics themselves, and the manner of their abstraction, are far more fluid. The denotational quality of the syntax has value for reasoning too, but that's another kettle o' fish.

At the end of the day, any developer will get a buzz out of using a tool that allows rapid manifestation of a good solution to the problem at hand (by some definition of "good"). The qualities of the functional approach, however, imbue the construction with a certain concision and confidence, and with the type system, really appear to have a pseudo-physical quality of "inserting the round peg into the round hole". So it is (I think) that when you stand back from the 'model' you have just assembled, there is a much more tangible quality to the functional construction - that it has been assembled from shapes, and that those shapes in turn had a robust definition. The whole model has been 'validated' by the type system (as an instance of a theorem prover), and you are now the proud owner of some new 'shape' with its tractable, testable semantics, and its ability to be further glued into an even larger (and more impressive?) construct, with some degrees of freedom about which vertices and edges 'stick out' to be perceived on the surface of the larger creation.

Whatever, dear reader, you may adjudge as the real reasons for my trippy experiences, I'm guessing that most developers who take the time to really understand what functional languages offer are likely to come away from the experience (and hopefully some real practice) appreciating some aspects of the foregoing. I'm not personally one of those who would use a functional language for every problem (at least the current batch, with the current libraries available, to say nothing of available developer skill sets), but I'm beyond persuaded as to the very real advantages they offer on certain problems, as a component of applications. Perhaps it is a growing appreciation of this sentiment that is driving the apparent uptake of what I'll loosely call 'functional concepts' within mainstream languages, or extensions thereof. Lambdas/blocks, closures etc. have appeared in Python, Ruby, Scala, C#/LINQ and (maybe) Java. It will be fascinating to see how these fare and evolve as embedded features of languages that are centred around long-standing procedural concepts. Certainly these features allow, and even encourage, a more functional style for use where appropriate. However, the basic tenets of a language are a critical factor, and so far these languages are a far cry from the algebraically typed world, combinatorial construction and semantics of a modern lazy functional language.

Right. Back to the OO world now then...

[glow fades away]

Tuesday, February 26, 2008

Xcode 3.1 - come quickly!

As much as I've grown to really like much about Xcode and friends, unsurprisingly for a 'point zero' release, there are many annoying foibles, OK bugs, to be found lurking.

I'm quite sure that Apple's reasoning about the release of Leopard would not have extended to the quality of its developer tools - beyond fixing the known critical bugs of course, Leopard was not going to be held up by a lack of complete polish in software that only developers care about. This really shows in the number of medium- and lower-severity issues that remain in the 3.0 tool suite. A lot of these problems are undeniably jarring, with potential costs in working around the problem (or disappointment in having to avoid a feature that would be convenient or improve productivity) - but they have clearly done a good job of removing the crashers.

Nevertheless, as has been mooted in other places, it is now four months since the release of Leopard and we have yet to have an incremental release of the Developer Tools. In all likelihood an update is imminent as a part of the upcoming iPhone development kit, so perhaps we don't have long to wait. I can only hope that the Xcode tool development group at Apple weren't so sequestered onto iPhone tooling that they haven't had the time to plough through some of the medium and low priority bugs. We'll see I guess.

While there are probably a half-dozen 'quirks' in the 3.0 tools that I have learnt to avoid, none of the tool limitations are as annoying as the issues that remain in Interface Builder (as I've mentioned before). As I have been spending a good deal of time recently knocking up UI for my application, these issues have really been getting under my skin.

The most heinous issue IMHO (!) is the lack of any real ability to re-parent view hierarchies in IB3. Several times, I have made the mistake of building initial/prototype UI by constructing views in a top-down manner (i.e. split view, a tree control on the left, another split view on the right, then further views beneath that), only to get badly stung by the limitations in IB3 to rearrange things. Split views, for instance, are infuriating. You can easily enough create one, by first creating the left/right views, selecting them and then doing "Embed objects in... -> Split view" (I'll refrain from commenting much on how this creation methodology is odd when you are otherwise creating hierarchy top-down). The children you 'embed' might be some table controls, for example. So initially all is well, but then you realise that you didn't want the right hand table control at the top level of the right-hand content of the split view - perhaps you need another splitter, or simply to add some peer controls with the table. You would think you could do one of the following things:
- Morph the table view (I guess the top view of that cluster - its scroll view) into a new container (box, custom view, ...)
- Insert a new view 'between' the table view and the scroll view (essentially replace the right hand content of the scroller and let the existing view there become parented to this new view)
- Temporarily pop (or cut) the table view 'out' of the right pane of the scroll view, in order to drop another view in its place, before dragging the table view back onto this as the first child

Well, in the case of split views, none of the above is possible. Once a view is glued into a Split view (on either the left or right panel), it seems that it is impossible to remove. The only solution I have found is to delete the entire split view (with its descendants) and start over. After even a moderate amount of flying around Interface Builder's inspector to set up attributes/bindings etc., this is very frustrating - often the process of setting up the views 'just right', with positioning or correct behaviours, can represent a lot of iterations, and remembering all the settings to recreate them in another instance of the same views is quite tiresome. It seems that in IB3 the creation of split views is essentially atomic, and in order to ensure flexibility I have developed the habit of always putting a custom view in the left/right slots, irrespective of what I actually think I might need to build beneath these views.

Aside from this egregious case of IB3 lacking some rather critical functionality, there are other limitations with re-parenting. There are situations where you need to restructure a subtree of views, and want to preserve some of the existing view hierarchy, but don't have the new parent to move it to yet. You can cut and paste views, perhaps parking the view hierarchy you want to preserve at the top level of the NIB while you rearrange the new environment for that part of the UI. However, more often than not, when you come to drop the UI back into the new parent, you will find that many of the settings have been reset (attributes and bindings).

Despite the foregoing, the whole NeXTstep approach to UI (NIBs, frameworks) has a great deal to be admired. IB3 is also (in general) much improved over its forebear, but clearly as a "complete rewrite", it does suffer somewhat from version-1-ism. Fingers crossed, that will be a complaint that will be short-lived and we'll see marked improvements in the next point release.

Friday, February 15, 2008

Using Core Data in NSArray controllers with a parameterised fetch predicate

Today I was wondering if it was possible to 'restart' the master->detail chain of a set of controls, by using a Core Data query (fetch predicate) in a controller, but parameterising the query with a value obtained from the selection in another, 'upstream' controller.

Normally, master->detail UIs are created by simply binding a selection from a master controller to the contents of a detail controller, and selecting some sub-component of the selection object(s) as the detail Model Key Path.

You can type any valid fetch predicate into the controller's properties panel, but in order to do what I want, you would need to be able to resolve a key from the 'upstream' object. That's the bit I don't think you can straightforwardly achieve (at the moment).

There are certainly a good many ways to actually achieve what I need - including binding the detail content to a custom controller key and having the get accessor for this key derive its results from the upstream selection content (which naturally it would need to observe in order to signal a change on its own key). The thing about how bindings work in general though is that they are so convenient and powerful, often requiring NO code in the application at all, so it's tempting to look if there's some way to contrive to get the required key to test in that filter predicate somehow, without resorting to 'external code'...
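
For illustration, here's a minimal sketch of that 'custom controller key' route (all names here - DetailProvider, the Detail entity, the owner key - are hypothetical, and the managed object context is assumed to be set at load time): a small object living in the NIB observes the upstream selection and derives its own content key by running the parameterised fetch.

#import <Cocoa/Cocoa.h>

// Hypothetical helper: derives detail content from the upstream (master)
// selection by running a parameterised fetch request.
@interface DetailProvider : NSObject
{
    IBOutlet NSArrayController *upstreamController; // master controller in the NIB
    NSManagedObjectContext *moc;                    // assumed to be set at load time
}
- (NSArray *)detailContent; // bind the detail controller's Content Array to this key
@end

@implementation DetailProvider

- (void)awakeFromNib
{
    // Observe the master selection so we can announce changes on our derived key.
    [upstreamController addObserver:self forKeyPath:@"selectedObjects"
                            options:0 context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)ctx
{
    // The upstream selection moved, so our derived content is stale.
    [self willChangeValueForKey:@"detailContent"];
    [self didChangeValueForKey:@"detailContent"];
}

- (NSArray *)detailContent
{
    id master = [[upstreamController selectedObjects] lastObject];
    if (master == nil)
        return [NSArray array];

    // The parameterised fetch, resolving a value from the 'upstream' object.
    NSFetchRequest *fetch = [[[NSFetchRequest alloc] init] autorelease];
    [fetch setEntity:[NSEntityDescription entityForName:@"Detail" /* hypothetical */
                                 inManagedObjectContext:moc]];
    [fetch setPredicate:[NSPredicate predicateWithFormat:@"owner == %@", master]];
    NSError *error = nil;
    return [moc executeFetchRequest:fetch error:&error];
}

@end

With something like that in place, the detail controller's Content Array binding points at detailContent on this object - though as I say, it's tempting to keep looking for a zero-code route.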

Thursday, February 14, 2008

NSValueTransformers update

Well, my experiment with NSValueTransformers for NSIndexPath morphing between two NSOutlineViews was interesting.
The plan was to have a subclass of NSValueTransformer that required two outlet connections to the two trees (to be more precise, to the NSTreeControllers), and then have instances live in the NIB file, all self-contained, with the two NSTreeControllers binding to each other's selections (NSArray of NSIndexPaths) via these transformers. The transformers' job is to re-express the selection in terms of the local tree's outline, using the set of model (represented) objects as the underlying meaning of selection.

So, this idea is designed to work in circumstances (that I have) where certain constraints apply:
1. The trees refer to the same model objects (at least the selectable ones), though arranged differently
2. The trees are either single selection, or it's OK to select ALL the occurrences of nodes representing a given model object (even if this potentially widens the selection when returned to the source tree)

I created a small collection of NSValueTransformer subclasses: to handle converting to/from a represented object, to handle targeting the first matching object in the destination tree, and to handle selecting ALL matching nodes in the target tree.

When I came to try this out as intended, I met a hitch (naturally!). I had intended for the relevant instances of these transformers to live in the NIB, where they could be handily connected/bound to the local objects without requiring construction in the main body of code. As such they would be nicely encapsulated as UI behaviour. However, it turns out that there is no way to easily have two instances registering two different transformer names. NSValueTransformers work by registering a named instance of themselves with a central repository, and the registered name is then used to refer to them in binding parameters. I needed to register two instances: one with tree1 and tree2 as source and destination, and one with tree2 and tree1 as source and destination (i.e. reversed). Without creating a new Interface Builder palette object that contains a design-time property for the name an instance should register, you would have to have two *classes* to express the source and destination differences required. Even then, from a log message, it seems that the transformers need to be registered before the objects that use them are even awoken from the NIB, and that either means performing the registration in the -init method, or giving up on the idea of storing instances in the NIB altogether, and registering these objects in the main code (along with code to set the two trees into the transformer instances that are created).

So, all in all the circumstances were contriving to make this barely workable, with the need to register a name for the transformer instances really getting in the way of achieving the goals. This is somewhat annoying, as when you actually contrived to get the transformers initialised in time (separate classes), things worked rather well! It's clear that the real design intention for NSValueTransformers is more as 'global functions' that an application can initialise and register early, and then easily reference in binding parameters.

Anyway, I decided transformers were just a little too awkward for what I wanted, and set about thinking about an alternative.
I still wanted an object I could just drop into the NIB and connect up the relevant trees for synchronised behaviour. I decided to implement what I called a "TreeIndexPathsExchange". This is an object to which NSTreeControllers could directly bind their selection state, and from which they would receive new selection state (from changes in other controllers that were also bound). This is an outline of what I created:

- First, a set of 7 (arbitrary number) IBOutlets of type (NSTreeController *) to which tree controllers could be bound in a NIB. The idea is that up to 7 controllers could be linked to synchronise selection between them. Each outlet is labelled 'treeX' where X is 0 through 6.
- An ivar holding the set of currently selected model (represented) objects. These are common across all trees and this set represents what is truly selected in common.
- Next, an implementation of synthetic properties treeX (where X is an ordinal representing the 'port' that a particular tree is bound on). This was done by overriding -valueForKey: and -setValue:forKey: in the implementation and handling all keys that conform to the pattern "treeX", where the X is allowed to go up to the top port number (currently 6). The implementation of the getter for a treeX is to take the set of currently selected model objects and search for these in the tree connected on 'port' X. The nodes that are found to represent these objects are then converted to NSIndexPaths and the array of these is returned. The setter for a treeX property obtains all the represented objects in the tree that is providing the selection, and if these are different from the set we have cached, then we update the cache, and then notify ALL the other connected 'ports' that their value has changed. This is done by looping through from 0 to 6, checking if there's an NSTreeController attached to the outlet for the port, and if so, we do a -willChangeValueForKey: and -didChangeValueForKey: on the key "treeX". (A sketch of this key handling follows.)
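
For the curious, the core of that synthetic property trick looks roughly like this inside the exchange class (a sketch only - the helpers indexPathsInTreeAtPort:, representedObjectsForIndexPaths:atPort: and controllerAtPort: are hypothetical stand-ins for the searching/conversion machinery described above):

#define kMaxPorts 7

- (id)valueForKey:(NSString *)key
{
    if ([key hasPrefix:@"tree"] && [key length] == 5) {
        int port = [[key substringFromIndex:4] intValue];
        if (port >= 0 && port < kMaxPorts)
            return [self indexPathsInTreeAtPort:port]; // selected model objects -> NSIndexPaths
    }
    return [super valueForKey:key];
}

- (void)setValue:(id)value forKey:(NSString *)key
{
    if ([key hasPrefix:@"tree"] && [key length] == 5) {
        int port = [[key substringFromIndex:4] intValue];
        NSSet *incoming = [self representedObjectsForIndexPaths:value atPort:port];
        if (![incoming isEqualToSet:selectedModelObjects]) {
            [selectedModelObjects setSet:incoming]; // update the cached common selection
            for (int i = 0; i < kMaxPorts; i++) {   // tell every *other* port its value changed
                if (i == port || [self controllerAtPort:i] == nil)
                    continue;
                NSString *peerKey = [NSString stringWithFormat:@"tree%d", i];
                [self willChangeValueForKey:peerKey];
                [self didChangeValueForKey:peerKey];
            }
        }
        return;
    }
    [super setValue:value forKey:key];
}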

This solution is great for something that can just live in a NIB, and is quite elegant. On testing, it worked first time with one exception - when going from a tree that only had one node representing an object, to a tree that had several nodes representing an object, I noticed that no selection was being made. The debugger seemed to demonstrate that the right things were happening, but the selection in the 'target' tree was resolutely being left with nothing selected. It occurred to me that both trees were configured for single selection, and that maybe the attempt to set multiple NSIndexPaths was causing no nodes _at all_ to be selected. I artificially restricted the number of index paths to 1 (the first matching node) and lo! everything was working again. I'm not sure why NSOutlineView rejects all selection when presented with multiple paths in its single selection mode (I would perhaps have expected it to select the first node in the list, or the one 'highest' in the tree). Maybe there's a good reason, but the current behaviour was cramping the style of this facility!

What was needed was a way to determine whether a tree would accept multiple selections and, if not, pick a single NSIndexPath to send it. Unfortunately, the objects bound to the exchange are NSTreeControllers, and these have absolutely no knowledge of the selection mode set on any NSOutlineViews that might be bound to them. In order to allow the exchange to be able to make judgements such as this (whether to send a single, or multiple selections through), I added more IBOutlets, one for each treeX outlet, called viewX (X being 0-6). These are of type (NSOutlineView *) and allow an appropriate outline control to be optionally connected for a port X, in complement to the required NSTreeController. In the case of finding a view connected, the exchange can query the outline view as it compiles a new selection to send to its NSTreeController, and can elect to send only one NSIndexPath if the control only supports single selection.
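
In code, that guard is only a few lines (again a sketch, reusing the hypothetical port helpers from above):

// When compiling a selection for port X, respect the outline view's selection
// mode if one is connected (the viewX outlets are optional).
NSArray *paths = [self indexPathsInTreeAtPort:port];
NSOutlineView *view = [self viewAtPort:port]; // may legitimately be nil
if (view != nil && ![view allowsMultipleSelection] && [paths count] > 1)
    paths = [NSArray arrayWithObject:[paths objectAtIndex:0]]; // send just the first match
[[self controllerAtPort:port] setSelectionIndexPaths:paths];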

So, today ended with a nice behaviour object to put in my NIB(s) to synchronise outline views (in the scenarios I'm currently interested in). The result is, IMHO, very 'Cocoa-y' in that application code is decluttered of this UI behaviour detail, which is all nicely encapsulated in the relevant NIB. In any case, I got a buzz out of it :-)

Tuesday, February 12, 2008

NSOutlineView mysteries

I gather from various mailing list histories that NSOutlineViews in Leopard are considerably improved over times of yore (Tiger)... and as a side note, it seems Leopard is a really great time to be learning Cocoa - not just because of Objective-C 2.0, but the considerable number of improvements in Cocoa. Many little holes have been filled and conveniences added, to say nothing of the big new features.

For me today it was time to get to grips with tree controls, i.e. NSOutlineView. Having relatively easily hooked up my Core Data model to one NSOutlineView, I proceeded to add yet another (in a tab view) to show the same model in a different way. The way that NSTreeController bindings work is really nice, in that you can have several 'protocols' (sort of 'key schemas') stamped on the model objects that allow descent through the model hierarchically in different ways.

Custom cells went well too, and I like the new SourceView style that you can turn on to get the similar appearance to iTunes, Finder etc. for those 'left pane' hierarchy navigators.

I then came to trying to hook up the autosave feature of NSOutlineView. In this case, autosave remembers a user's outline state (expanded/collapsed nodes) and can restore the user's state next time they launch the application. This is one of those great framework features, providing wonderful user experience almost for free. Well, it would be if I could figure out the two methods I have to write to support it!

Once you provide a key to save the state under in the user's application preferences, you have two symmetrical methods to write. One of these is sent a tree 'item' and you are supposed to return an object 'suitable for archiving', and the other does the reverse - i.e. supposedly maps an archive object to a tree item.

Now, the documentation is a little vague, so I consulted Google for some mailing list hits. The one decent write-up I found suggested that NSKeyedArchiver and its reciprocal unarchiver could simply be applied to the item to convert in each direction. That seemed like a reasonable suggestion to me, so I tried it. Unfortunately, it doesn't work - at least not in Leopard.
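
For the record, the naive version looks like this (the two data source methods are the real autosave hooks; the bodies are the mailing list suggestion, which fails on Leopard as described below):

- (id)outlineView:(NSOutlineView *)outlineView persistentObjectForItem:(id)item
{
    // Suggested approach: archive the tree item directly... no dice on Leopard.
    return [NSKeyedArchiver archivedDataWithRootObject:item];
}

- (id)outlineView:(NSOutlineView *)outlineView itemForPersistentObject:(id)object
{
    return [NSKeyedUnarchiver unarchiveObjectWithData:object];
}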

Leopard has introduced a 'public' proxy object for tree nodes that NSTreeController creates around the model objects it finds and introduces into the NSOutlineView. Now, I understand that there was always some kind of 'opaque' proxy object before Leopard, but if the post is to be believed, then this object was able to archive and unarchive itself (i.e. it must have conformed to the NSCoding protocol). Nowadays, at least, this isn't the case, and the requisite error messages appear on the console.

So, if you cannot directly transform a tree item into something that can be inserted into the User Preferences, what should you be doing? In fact, the biggest question isn't so much what you could create as an archive object, but rather what exactly an unarchived object is supposed to be in relation to the 'new' outline that one will find when depersisting. Am I supposed to search for the exact tree item matching some key I wrote out? Is it sufficient to build a new NSTreeNode and have it match a different instance already in the tree - using associative semantics? The docs don't elucidate, and I have more experimenting to do tomorrow.

While thinking about this problem, it occurred to me that if I had to actually go find the matching NSTreeNode in the current outline content that matched some saved properties, then I'd need to be able to search or filter the existing nodes. However, I could find no API that would do this. Not on NSOutlineView, NSTreeController, NSTableView or NSObjectController (the latter two being the pertinent superclasses). This came as a bit of a shock, suggesting as it does that you are not supposed to ever need to do this, and perhaps reinforcing the notion that one shouldn't have to search the current tree to find matching instances of some persisted items in order to reset their expanded state.

At that time I decided to park the autosave problem and move on to another issue - that of synchronising the selection between my two outlines in the two tab views. The problem here is that, for any selection, the index paths describing the selected items will be different - because the outline schemes are different - even though the selectable items in the two trees are the same. Whatever eventing scheme is used to detect a change in selection (more about this in a moment), the selection from one tree needs to be transformed into the other outline's scheme. Because the relationship between them happens not to be a simple function, this effectively means being able to look up the selected object from one outline in the other tree. Oops, we're back here again.

This time I dove right in and implemented a fairly vanilla tree search routine and packaged it as an NSTreeController category. All the time suspicious that I was doing something unnecessary and for which there *must* be a more elegant solution, it is nonetheless true that Leopard's addition of NSTreeNode makes it easy enough to search through items that are already in the tree. I'm not exactly sure how NSTreeNode works in conjunction with the parsimonious fetching of model objects by NSTreeController, though it possibly faults the requesting of additional items from NSTreeController if you ask for child nodes it hasn't got yet. If this is how it works it could be a bad thing in some kinds of search, but in any case I achieved a working search (based on the represented model object of the tree item).
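
The search itself is nothing special - something along these lines (a sketch; the category and method names are mine, and note that on Leopard the proxy returned by -arrangedObjects responds to -childNodes, yielding the root NSTreeNodes to recurse over):

#import <Cocoa/Cocoa.h>

@implementation NSTreeController (ModelObjectSearch)

static NSTreeNode *searchNodes(NSArray *nodes, id modelObject)
{
    for (NSTreeNode *node in nodes) {
        if ([[node representedObject] isEqual:modelObject])
            return node;
        // Recursing may fault further children in from the controller.
        NSTreeNode *found = searchNodes([node childNodes], modelObject);
        if (found != nil)
            return found;
    }
    return nil;
}

- (NSTreeNode *)nodeRepresentingObject:(id)modelObject
{
    return searchNodes([[self arrangedObjects] childNodes], modelObject);
}

@end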

With this ability to find equivalent nodes in another tree, I was able to register Key Value Observers on the selected objects of each tree and carefully update the other tree (if a different node was selected - so as to avoid infinite loops). Job done.

Looking at the code though, I'm a little dissatisfied (!). Isn't binding supposed to allow you to connect objects together directly in an (almost) code-free way? As I understand it, it is only the asymmetrical selection that is preventing this in this case. What if this could be overcome in the binding itself? Well, bindings support a concept called value transformers, and this sounds like just the medicine to convert between two selection index paths in the two trees. Moreover, my understanding is that KVB handles cyclicity automatically, so it is even more appealing to be synchronising two views in this way.

So, tomorrow I will begin with an experiment to create a subclass of NSValueTransformer. This will have an instance that lives in the NIB file itself, with outlets to connect to the two trees. It will register itself as an active transformer on -awakeFromNib, and be referenced in selection bindings between the two outline views. If that works, it will be one of those sweet Cocoa moments - and I'll just have the autosave (de)persistence left to grok :-)

Ah... Bindings and Core Data

As kind of a balance to that last post, I have to say that the last few days' exploits with bindings, with some of it intersecting with Core Data, have been a complete joy.

This is, of course, where the dynamic underpinnings of Cocoa/Obj-C really come into their own, and some of the elegant results are almost enough to induce weeping.

As a Cocoa n00b, I'm just beginning to appreciate the zen of Cocoa, but the recurring pattern is quite interesting:
1. Set out to achieve X, decide to investigate the 'right' Cocoa way of doing this
2. Quickly find promising area, excitement rises as I glimpse established patterns and 'depth' of the frameworks to solve the problem
3. Read guides, references - formulate specific vision of how the implementation will be. More excitement/anticipation...
4. Write-test-ARGGH!
5. MOMENTS OF FRUSTRATION (e.g. bindings wrong, nil being messaged somewhere, other unexpected details emerge)
6. *Enlightenment*
7. Joy. Usually bags more respect for NeXT and Apple (in terms of the general patterns of the solution). Occasionally moderated by remaining questions as to why a small feature or convenience is missing.

Like learning any significant library/framework, there's an uphill struggle, but the corpus of successes (and hence working samples that are really meaningful through personalised exercises) accretes rapidly as the general philosophies of the library start to take hold and judgement/first-guesses improve. From the number of guides I've worked through so far, I feel like I'm half way to a good broad base of coverage of Cocoa - though there's detail everywhere that will remain to be discovered on an as-needed basis.

The last time I got stuck into anything this large was probably the Java foundation libraries. Even then, I've been living with Java since 1.0, so have had a chance to accrete knowledge as the libraries have matured. The single biggest difference that stands out (and makes Cocoa much more challenging for the learner in many ways) is the dynamic nature of the language, with all the ensuing mistakes that are bound to happen as you are learning. Couple that with the somewhat simpler debugger (compared with something like the Eclipse tooling these days), and you are left using a fair bit of tracing code to test code paths, print values and learn about sequences of events in the complex library. Nevertheless, the aforementioned moments of 'respect' for the designers of the language and libraries provide ample payback for some of the frustrating moments.

Interface Scrambler 3.0

... OK that's a tad unfair, but Interface Builder 3.0 is driving me crazy recently.
In general, the workflow and UI of V3.0 is significantly better than the prior incarnation (IMHO), but unfortunately 3.0 seems unfinished and rather buggy.

One of the biggest functional problems with IB3 IMO is the lack of a decent capability to reparent views - except in simple cases. In the simple cases, you click and hold on a view in the spatial/layout designer. This "pops out" the selected view hierarchy, and you can then drag it to a new parent view. So far so good.

Any real interface however is likely to have containers like NSSplitViews and NSTabViews, and these seem to have special (arguably broken) editing rules.

For instance, I had created some prototype UI in a NIB a few days back, and had added an NSSplitView using the "Embed Objects In..." functionality, whereby you select two views, and IB3 adds the new container (split view), putting each selected view as the child view on each side of the split bar. So, creating these initially is no problem (at least in this bottom-up way).

I go on to make a considerable investment in time adding outlets, making connections, setting properties etc.
Then, when the prototype UI is working well enough, I realise I want to beef it up, to fill out the UI with other controls each side of the splitter. So, what I want to do is insert a new layer of views each side of the splitter, which will parent my existing UI and allow me to add adjacent views to that which already exists.

So, my first thought is "Embed Objects In..." again, selecting the current view snapped into one side of the splitter - and perhaps inserting a "Box" between the splitter and this view. But... no, when I select the view on one side of the splitter and go to the menu, it's all greyed out - no dice.

So, it occurs to me that my only option is to move the old split pane 'subview' out of the splitter, then add in the new 'box', then put the old view back onto this, as its new superview. OK, well, the way to reparent views, according to the IB3 manual, is to 'pop out' the child views as described earlier. However, it turns out that this doesn't work with split views. Apparently, they don't want to have an empty pane, so IB3 doesn't let you do this.

Suffice it to say that I had to rebuild the split view from scratch, with a new custom view or box 'layer' in each side. Furthermore, copying and pasting view hierarchies appears to result in many of the set properties in the hierarchy being reset. The view hierarchy itself is preserved, but bindings and attributes are all lost - a considerable cost in time and frustration to recreate exactly.

I would dearly love to be able to reparent views in the NIB outline window rather than the layout window, but there's no support for this at all.

Other sundry issues I've experienced:
- Setting a new binding on a view deselects the view you're working with and effectively defocuses the inspector you're working in.
- If you set names on some views (Scroll views and Outline views I tried today) they are not saved (or if they are, then they're reset when the NIB is reloaded).
- Occasionally I have had other properties reset (often bindings)
- IB3 can get into a state where it refuses to 'see' a view in the NIB file overview (hierarchy view) and/or the inspector - whereas the view is clearly 'there' in the layout window.

I've wondered if some of this is down to the flavour of NIBs I've been using (Leopard only, but 2.0 format). Unfortunately, there hasn't been enough time in the day to try various configurations - and I posit that IB shouldn't be that broken with any format it supports anyway.

All in all, IB 3.0 seems to have escaped a little too early. I guess Apple can be forgiven for this in the sense that it was surely better for it not to hold up the release of Leopard. It's also effectively a "version 1", being a complete rewrite of Interface Builder - the first rewrite of this venerable tool in many years, by all accounts. Still, as I see it, it's currently the weakest part of the development tool suite, and I really hope that we'll see a rev. of the development tools really soon, with the attendant bug fixes (at least), if not some improved functionality for reparenting views and inserting views between existing parent-child view relationships.

Possibly the next release of the dev tools will be when Apple release the iPhone tooling - and naturally I hope the dev tools group has had at least some cycles to improve the core tools, even if they've had to be involved in supporting a lot of iPhone development features too. Otherwise I'll have to be even more patient to see some of the bugs I've raised on these issues get fixed - and unfortunately I can see a lot of IB3 usage coming up in the next little while on my project :-(

Friday, February 8, 2008

Core Data and uniqueness

Being such a newcomer to the many and varied wonders of Cocoa, I'm perennially accosted with that feeling that I *must* be missing something when I'm challenged by some apparent missing feature. So far, about half the time, my continued quest to learn how to achieve something will be rewarded with some new epiphany - a new pattern, the discovery of some previously arcane knowledge. The other half of the time I cave in and achieve what I'm trying to accomplish with some belt and braces - still wondering whether I'll kick myself at some future point for having missed the provision of some elegant solution.

So it is that I've recently been wondering about uniqueness in Core Data.

Now Apple are quite clear about the nature of Core Data (at least as it stands today). Despite the entity relationship models, the prepared parameterised fetch requests (queries) and the ability to use a SQL database for storage, Core Data is not a general purpose database. It is designed, of course, to provide an elegant way to persist your application's internal state in a way that is natural and requires the minimal amount of overhead (notwithstanding the need to conform to its design patterns).

I've been very impressed with Core Data (so far). I'm lucky enough to be beginning my Cocoa career with Leopard, and like a lot of the Mac OS X frameworks, Core Data in Leopard has clearly matured very nicely into a highly capable and general facility. At this time, my main data model spans a half dozen pages and uses a good many of the features available (inheritance, to-one/to-many relationships, delete rules and a little validation with more to come). For most of what my app does, modelling its data this way is clearly superior. However, I have been surprised by a couple of things that seem like omissions, but per the foregoing, leave me wondering whether I'm missing the 'right' way to approach the problem.

One such item is a need to store singleton instances of some entity. My application has global state that should be persisted, which is not necessarily per-user (though naturally I have some of those too). It would be nice to be able to create an entity to represent a unique object that will record this state - and declare to Core Data that there can be only one instance of this (at a given version of the model). Yet, I know of no way to achieve this. Of course, one can live without this formal uniqueness, and instead date-stamp an instance, and have the application 'housekeep' any excessive number of instances (perhaps clearing away all but the last written instance), but...

There's a rather more fundamental kind of uniqueness in data of course, that of unique keys - and again Core Data has no way that I know of to express that an attribute will contain a unique key. In Core Data, all the combinations (tuples) of matching data will be returned on a query, and one imagines that there can be no optimisations in how an underlying database is queried when such fundamental metadata is missing. I got a little excited when I first saw the "Indexed" check box on what happened to be a String attribute in the model designer, but looking up what this did revealed nothing more than the vague "Indicates that an attribute may be useful for searching".

Even if Core Data itself has no formal way of indicating uniqueness or key values, you certainly need to be able to determine this from time to time in your application. For example, if your model records "Customer" in various places, you are likely to have the same actual Customer represented by multiple instances of the Customer entity (one perhaps attached to a 'recent calls' part of the model, and one attached to 'sales'). Now, because you cannot formally uniquify the details of a particular Customer (with a key like 'customer number'), if you were to query for all the Customers in your data, you will end up with an Array of Customers with 'clones'. So, how do you turn an array of objects into an, er... set of unique objects (by some definition of unique)? Of course, normally a set collection does this quite handily, through the expedient of defining appropriate hash and isEqual methods on the objects to ascribe identity. Cocoa certainly has such a thing in NSSet/NSMutableSet. So recovering uniques, even if the data framework can't, should be a walk in the park, right?

Well, in one of those unsatisfying moments I alluded to earlier, you soon stumble over what looks like a major flaw in Core Data. Core Data manages objects that derive from NSManagedObject, and the documentation clearly states that -hash and -isEqual: are reserved for Core Data's use (i.e. you cannot override these methods as you can, and often do, override them as an NSObject subclass). Oops. Try as I might, I have not yet found any canonical way to reasonably filter out uniques from an array/set of NSManagedObjects (whether obtained from a to-many relationship, a fetch request or any other way). The most obvious solution is barred, given the reliance of NSSet on the out-of-bounds -hash and -isEqual:, which left me scrabbling to think of how you are _supposed_ to achieve these ends.

My reading and thinking led me to realise that without the emergence of some new arcane method, I was going to have to invent. A number of really ugly approaches came to mind, but mostly they seemed horribly expensive (lots of shuffling of objects, constructing wrappers, whatever). What I really wanted was a more flexible NSSet. That got me to find the somewhat undocumented (at least in the current Cocoa guides) NSHashTable and NSMapTable. These seemed to be offered as lower-level forms of NSSet and NSDictionary, and the ability to handle non-object keys and values was vaunted (though in actual fact, when you consult the supported configuration options on these Cocoa classes, objects are about all that is "guaranteed" to work!). It seems that the motivation for adding these collections to the Cocoa level was mostly to provide for weakly referenced keys and/or objects, in the presence of the new GC. However, clicking about in the class docs led me first to the NSPointerFunctionsOptions when initialising the collection, and then to a curious method -pointerFunctions. The latter returns an object of type NSPointerFunctions, and there right in front of me was the documentation for a couple of the properties of this object: hashFunction and isEqualFunction. Bingo! Perhaps I could concoct a set-like collection that used custom methods for identity - rather than fixed -hash/-isEquals: and therefore get around the limitations of NSManagedObject?

Experimenting with NSHashTable and pointerFunctions was frustrating - and I still haven't successfully managed to get this to work. The NSPointerFunctions returned from a freshly created NSHashTable have writeable properties for the functions I needed to provide, and the prototypes of these functions are documented enough, but try as I might, my provided functions were never called when adding objects. Are NSPointerFunctions only good for reading in this release (despite the writeable properties)? I have no idea.

However, the research into the pointer functions served as a segue into the murky world of the underlying Core Foundation implementation of CF(Mutable)Set. I've taken hardly any time to bother looking at the CF stuff - mostly because Cocoa itself is so complete, but also because it seems like a strange non-OO world where one doesn't go unless out of desperation...

As it happens, CFMutableSet is what Apple calls "toll-free bridged" to NSMutableSet, meaning that the same address can be used via either C-style pointers or Cocoa (id) pointers, and with either the appropriate C functions or messages. This is very nice, but what was important was that CFSetCreateMutable (the C 'constructor' for the mutable set) takes a CFSetCallBacks structure, which is the analogue of the NSPointerFunctions. Creating appropriate 'alternate' -hash and -isEqual: functions was straightforward, and moments later... success! The CF version of the mutable set allows these alternate 'call backs' to be set up at construction time, and these are correctly called when objects are added (by sending the -addObject: message to the returned address cast to an 'id')!

Once I had had this mini-breakthrough, I was confident enough to create a 'new kind' of set collection back in Cocoa-land that constructed itself with CFSetCreateMutable, encapsulating the appropriate 'callbacks'. This collection expects to work with a category of objects that conform to an 'alternate identity' protocol, which requires the implementation of -altHash and -isAltEqual:. Furthermore, I now have a subclass of NSManagedObject that adopts this protocol and is the common super class for all my model entities, allowing me to create (albeit temporary) sets of unique instances derived from Core Data - according to the 'alternate identity functions' that they encode.
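
To make that concrete, here is roughly what the construction looks like (a sketch: the protocol follows the scheme described above, but the function and constructor names are illustrative):

#import <Foundation/Foundation.h>

@protocol AltIdentity  // the 'alternate identity' protocol
- (NSUInteger)altHash;
- (BOOL)isAltEqual:(id)other;
@end

static CFHashCode AltHash(const void *value)
{
    return (CFHashCode)[(id<AltIdentity>)value altHash];
}

static Boolean AltIsEqual(const void *value1, const void *value2)
{
    return [(id<AltIdentity>)value1 isAltEqual:(id)value2];
}

NSMutableSet *CreateAltIdentityMutableSet(void)
{
    CFSetCallBacks callbacks = kCFTypeSetCallBacks; // start with the standard object callbacks...
    callbacks.hash = AltHash;                       // ...then swap in the alternates
    callbacks.equal = AltIsEqual;
    return (NSMutableSet *)CFSetCreateMutable(kCFAllocatorDefault, 0, &callbacks);
}

The returned pointer can thereafter be messaged as a perfectly ordinary NSMutableSet (-addObject:, -member: and friends), with identity decided by -altHash/-isAltEqual:.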

So I'm happy... but as I mentioned at the beginning of this piece, I still have that nagging doubt...

Thursday, January 31, 2008

Another discovery... more head scratching

I bumped into NSIndexSet (and its buddy NSMutableIndexSet) today.

It was one of those occasions where I hadn't been particularly aware of a Cocoa class (because it hadn't featured in the guides) until I tripped over it in using another class. In this case I wanted to delete a range of objects from an NSMutableArray, and therefore had cause to research -removeObjectsAtIndexes:, and lo, there was NSIndexSet.

When I went to read the documentation on NSIndexSet, I became excited that this class was going to nicely fulfill a requirement that's arisen for a collection of ranges. Indeed, the documentation on NSIndexSet claims that it is implemented efficiently as a set of ranges, and the API makes the right sort of noises.

On closer inspection however, it falls a little short of my needs (at least without work), and I'm left wondering if it was implemented 'just enough' for some pressing needs at Apple, but then left to wither on the vine a little. For instance, I would have liked to have been able to use fast enumeration on the indexes that are set, or better still, obtain objects representing contiguous ranges of indexes in the set. The only way to really query an NSIndexSet to get its contents (as opposed to discrete tests for particular indices) is using -getIndexes:maxCount:inIndexRange:. Ouch.
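
To illustrate the gap: recovering the contiguous ranges means walking the set by hand with -firstIndex and -indexGreaterThanIndex: (both real API; indexSet here is assumed to be some existing NS(Mutable)IndexSet), something like:

NSMutableArray *ranges = [NSMutableArray array]; // will hold NSValue-wrapped NSRanges
NSUInteger i = [indexSet firstIndex];
while (i != NSNotFound) {
    NSRange r = NSMakeRange(i, 1);
    NSUInteger next = [indexSet indexGreaterThanIndex:i];
    while (next == NSMaxRange(r)) {              // extend while the indexes stay contiguous
        r.length++;
        next = [indexSet indexGreaterThanIndex:NSMaxRange(r) - 1];
    }
    [ranges addObject:[NSValue valueWithRange:r]];
    i = next;
}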

Anyway, NS(Mutable)IndexSet does what you expect it to do when deployed to manage common array index tracking tasks. So, that's fine. It could just so easily have even more utility if adorned with a few more methods.

Wednesday, January 30, 2008

Idle curiosity concerning empty immutable collections

Mutability is an important fundamental that is often ignored by software engineers, and much better software design is often forthcoming when immutability is enforced properly.

Cocoa, very commendably, makes a hard distinction between mutable and immutable objects: e.g. NSString, NSArray, NSDictionary etc., and one of the things I'd expect is for the empty instances of these classes to be singletons. Indeed NSNull returns a singleton, and in a quick test [NSString string] appears to return the same instance (though this may just be due to interning/caching: a specific feature of strings). So, I was surprised to find that [NSArray array] yields new instances of empty arrays every time.
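
The quick test amounts to nothing more than comparing pointer identity (the commented output reflects what I observed on my machine):

NSString *s1 = [NSString string], *s2 = [NSString string];
NSArray  *a1 = [NSArray array],   *a2 = [NSArray array];
NSLog(@"empty strings share an instance: %d", s1 == s2); // 1 (shared) in my test
NSLog(@"empty arrays share an instance:  %d", a1 == a2); // 0 - fresh instances each time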

This is really no big deal of course, and I'm sure the actual benefits of singleton empty immutable objects are vanishingly small in time or space in any real application, but I was surprised nonetheless to find that the design that led to such a good differentiation between mutable and immutable classes did not go on to include singleton empty instances for these classes.

It's possible there's some historical reason for this choice, or equally likely that it was no 'choice' at all - and the truth is merely that nobody cared enough to do this.

Anyway... back to real life...

Saturday, January 26, 2008

Map and filter with NSArray

Having spent a good deal of my recent professional life getting seriously into (lazy) functional languages, and particularly the implementation of a rather nice (IMHO!) such language for the JVM called CAL, I got to toying with an implementation of map and filter over NSArrays.

OK, this must have been done a gazillion times, but I fancied the mini-challenge anyway (evidently too much time on my hands).

I had no intention of a lazy implementation (too bad), but rather, strict implementations along the lines of Smalltalk/Ruby/... collect and inject methods.

Being a dynamic language, Objective-C can easily handle the dispatching of messages to an object for each element of an array, and I decided to implement an NSArray category to add the requisite collect and inject methods. In Objective-C, to refer dynamically to a runtime method we can use a selector (type SEL). These can be constructed as literals by the compiler with the @selector pseudo function.

Unlike a functional language where the mapping function to perform a single element transform can represent itself (by passing in a function/lambda), in Objective-C we will have to pass a selector and an object that will be the target of the selector (the receiver that will perform a transformation). Languages like Ruby have lambda functions and allow the passing of procedures (Procs). Even Smalltalk, like Ruby, has blocks that can be passed in messages, but for the time being at least (until some generous benefactor adds blocks to Objective-C), we will have to stick to object-selector as the means to reference an element transformer.

So, the most general collect method looks something like this:
- (NSArray *)collect:(SEL)selector on:(id)selObj with:(id)arg2

This asks the containing array, the receiver, to collect the result of sending objects from the array to the 'selector' on 'selObj', where this message also has an extra parameter 'arg2' (beside the array object). The array constructed from all the objects returned is what is passed back.

In practice, quite a few of the transforming methods we might want to call are on the objects in the original array themselves. For instance, if we wanted to upper case all the strings in an array, then the correct method to do this is the -uppercaseString method which all NSString instances will respond to. What this means is that we need a variant of collect that can internally target the object being processed in the source array with the selector invocation. While we're at it, we can add variants of these two methods so far that don't take an extra argument (in case we don't need to provide context to the transformer). That gives 4 variants of collect which would seem to handle all transforming cases, so long as you don't mind bundling all contextual information into the one and only context object that you can send in to the transformer from 'outside', and so long as the transformer you need accepts and returns objects (type id).
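
A sketch of two of those variants (my implementation details; the method name collectOnElements: for the element-targeted case is hypothetical):

#import <Foundation/Foundation.h>

@implementation NSArray (Collect)

// General form: each element is sent to 'selector' on 'selObj' with a context arg.
- (NSArray *)collect:(SEL)selector on:(id)selObj with:(id)arg2
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:[self count]];
    for (id element in self)
        [result addObject:[selObj performSelector:selector
                                       withObject:element
                                       withObject:arg2]];
    return result;
}

// Element-targeted form: the selector is sent to each element itself,
// e.g. [strings collectOnElements:@selector(uppercaseString)].
- (NSArray *)collectOnElements:(SEL)selector
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:[self count]];
    for (id element in self)
        [result addObject:[element performSelector:selector]];
    return result;
}

@end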

Flushed with success, I turned my attention to inject. This method needs to present a selector with each object in turn from the array, together with an 'accumulator' value that is threaded through the entire process i.e. an object that starts off in some original state, and is updated by every call to the selector (which returns the new value of the accumulator). The result of inject is the final state of the accumulator, after all array items have been processed.

The most general prototype of this method is something like this:
- (id)inject:(SEL)selector on:(id)selObj withSeed:(id)accumulator

Once again, we can specialise this in a couple of ways:
1. To use the source array object as the target of the selector
2. To treat the first element of the source array as the seed for the accumulator

Another option could be to include a context object in the design, per the collect example above. I chose not to do this at this time.
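
The general form, sketched (my choice of argument order - the accumulator first, then the element - is an assumption of the convention):

- (id)inject:(SEL)selector on:(id)selObj withSeed:(id)accumulator
{
    id acc = accumulator;
    for (id element in self)                    // thread the accumulator through
        acc = [selObj performSelector:selector
                           withObject:acc
                           withObject:element]; // selector returns the new accumulator
    return acc;
}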

Now, the CAL language (like some other functional languages) allows concurrent evaluation of list mapping. Although such mapping is still lazy in CAL, I thought it would be fun to see what it would take to do the same in Objective-C/Cocoa.

The first inclination is that we want to create threads for the various mappings from elements in the source array, to elements (at the same index) in the destination (mutable of course) array. These threads perform the simple tasks of dispatching the source object to a transformer (identified by a selector/object pair as before). Parallel mapping is probably only valuable when the transforming function is relatively process intensive, but in any case, it is an opportunity to spread the cost of transforming all the elements across the available compute resources.

Looking at the options for parallelising these transformations, one soon considers Leopard's new NSOperation. This class, together with NSOperationQueue, implements a spaces-like pattern, where self-contained task objects are added to a space (the NSOperationQueue), from which processing agents (the CPUs) take them out and execute them using the logic in the operation's own 'main method'. Having never played with these Cocoa classes before, I thought it would be fun to try.

Using NSOperation is easy - particularly if you want to use the basic "non-concurrent" mode (somewhat badly named IMO), which creates the execution context (specifically the thread) for you. It's a simple matter of subclassing NSOperation, creating some private state (ivars) that describes the task - a specification that can be provided when the object is constructed - and overriding -main. The latter performs the operation using the specification encoded into the instance.
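A sketch of such a subclass for the parallel collect case (TransformOp and its initialiser are illustrative names, not the actual project code):

@interface TransformOp : NSOperation
{
    id target;            // object implementing the transformer
    SEL selector;         // transformer: takes an id, returns an id
    id source;            // the element to transform
    NSMutableArray *dest; // shared results array, pre-filled with placeholders
    NSUInteger index;     // the slot in 'dest' this operation fills
}
// Assumed initialiser that simply stores the specification in the ivars:
- (id)initWithTarget:(id)t selector:(SEL)s source:(id)src
                dest:(NSMutableArray *)d index:(NSUInteger)i;
@end

@implementation TransformOp
- (void)main
{
    id result = [target performSelector:selector withObject:source];
    // NSMutableArray is not thread-safe, so serialise writes to the shared array.
    @synchronized (dest) {
        [dest replaceObjectAtIndex:index withObject:result];
    }
}
@end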

NSOperations can be started directly (by invoking their -start method), which in the case of the aforementioned "non-concurrent" mode (the behaviour of the default -start in NSOperation) will create a thread and execute -main on it. Thus, you are already provided with some benefit in the NSOperation encapsulation over creating your own NSThread and having it execute some object's method. However, we need some kind of operation controller that will start the right number of operations from an available 'pool', according to the compute resources available (number of CPUs/cores/threads-per-core). This is where NSOperationQueue comes in. Not only can this launch the requisite number of NSOperations to keep all your hardware optimally deployed, but it is a priority queue that is also aware of inter-operation dependencies if these are declared (i.e. an NSOperation can cite other NSOperations that it depends on, and that must be completed before it is allowed to run).

So, to implement a parallel collect, one might envisage a loop through the source array, creating an instance of a subclass of NSOperation (let's call it TransformOp), which is then added to an NSOperationQueue. This should work fine (and in this scenario there is no need for any fancy prioritisation or dependency management). However (and perhaps this was just overkill), what I preferred was a way to limit the number of NSOperations being produced from the array ahead of those being executed (i.e. some sort of demand pump from the NSOperationQueue that would keep enough operations queued, but reduce the memory pressure of potentially creating thousands of operations for the whole array). Unfortunately, this is not (yet) a feature of NSOperationQueue. However, it occurred to me that you have everything you need to fairly easily create a blocking version of NSOperationQueue to achieve the main goal of limiting the number of operations in existence.
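The straightforward (non-blocking) version of the driver loop might then look like this sketch, reusing the TransformOp above (with 'source', 'transformer' and 'sel' standing in for the inputs):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSMutableArray *results = [NSMutableArray arrayWithCapacity:[source count]];
NSUInteger i;
for (i = 0; i < [source count]; i++) {
    [results addObject:[NSNull null]]; // placeholder, replaced in-place by each op
}
for (i = 0; i < [source count]; i++) {
    TransformOp *op = [[TransformOp alloc] initWithTarget:transformer
                                                 selector:sel
                                                   source:[source objectAtIndex:i]
                                                     dest:results
                                                    index:i];
    [queue addOperation:op];
    [op release];
}
[queue waitUntilAllOperationsAreFinished];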

To cause some provider of operations to block on adding a new operation, one can subclass NSOperationQueue and override the -addOperation: method. I created a "BlockingOperationQueue" class to do this. In order to do the actual blocking, this subclass maintains a two-state NSConditionLock, where the states are "open for additions" and "closed for additions". The -addOperation: method gates the call to the underlying [super addOperation:] by having the thread attempt to acquire the lock in the "open for additions" condition. If we're not open for additions, the thread will block. So much for the gate; who is going to open and close it?
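A sketch of the gated override, assuming a 'gate' NSConditionLock ivar initialised to the open condition, and a high water mark as discussed next:

enum { kClosedForAdditions, kOpenForAdditions };

- (void)addOperation:(NSOperation *)op
{
    // Block the calling thread until the queue is open for additions.
    [gate lockWhenCondition:kOpenForAdditions];
    [super addOperation:op];
    // Close the gate if this addition reaches the high water mark.
    NSUInteger queued = [[self operations] count];
    [gate unlockWithCondition:
        (queued >= highWaterMark ? kClosedForAdditions : kOpenForAdditions)];
}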

We need to control the state of our gate (condition lock) according to how many operations are currently queued, and some notion of what is 'too much' and 'too little' in the queue. The latter data can be introduced in the form of a 'high water mark' and 'low water mark' number of items. These become properties of the BlockingOperationQueue. A suitable default for these was chosen to be 2 times the maximum number of concurrent operations (available from the superclass) for the low water mark, and 4 times the maximum number of concurrent operations for the high water mark.

Within BlockingOperationQueue, we now need to ascertain when the queue length is changing, and to compare this with the set water marks (setting the condition of the 'gate' when these thresholds are crossed). NSOperationQueue exposes an array of operations currently in the queue, and the documentation informs us that operations is a KVC/KVO compliant key of NSOperationQueue. This means we can safely register for KVO notifications from it to detect when it is changing, which drives the opening and closing of the -addOperation gate.
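In sketch form (glossing over the finer points of thread-safety around the thresholds):

// In BlockingOperationQueue's -init:
[self addObserver:self forKeyPath:@"operations" options:0 context:NULL];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    // Reopen the gate once the queue has drained below the low water mark.
    if ([[self operations] count] <= lowWaterMark
        && [gate condition] == kClosedForAdditions) {
        [gate lock];
        [gate unlockWithCondition:kOpenForAdditions];
    }
}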

In summary, the new NSOperation classes in Leopard look to be a very useful addition to the developer's kit, and a throttling or demand-based interface for operation providers makes a handy complement. Having a parallel map/collect method isn't appropriate for every occasion, but if the transformations are relatively compute-intensive then it can be very useful. As in languages like CAL, this is also a nice way to go about launching parallel activities in general - describing them as a list of items to be worked on, and having parallel computation launched for those items. The downsides of the simple implementation described here are the restrictions on the selectors used (one argument, returning an 'id') and the lack of error handling (though NSError objects could be deposited as results).

Monday, January 21, 2008

Oooh, NSPredicate functions...

I was looking for a way to manage a set of unique managed objects. Essentially, I needed a heterogeneous collection of values whose uniqueness is entirely denoted by their value, without naming or indexing properties. This is set semantics of course, and ordinarily one could rely on -hash and -isEqual: defined properly on the objects to ensure their uniqueness in the set. However, with NSManagedObject you are not permitted to override these methods (they are presumably used to define uniqueness by id in the managed object context). Nothing stops you from defining your own equality method in your subclass, however, and this can compare the single attribute payload of one of these objects with another (using -isEqual: on the payload objects).

Having achieved that, one can then expect to be able to use -filteredSetUsingPredicate: on the to-many relationship (manifesting as an NSMutableSet) defined in the Core Data model for the set of unique objects. But how does one effect a call to the special equality method in the content objects?

While there's not a whole lot of documentation and examples on the subject (the NSPredicate class reference AND guide are entirely silent on the matter), the NSExpression class reference mentions "Function Expressions", which I stumbled upon while investigating whether NSExpressions could be used as a mini expression language aside from their utility in defining predicates.

The documentation for Function Expressions seems to consist of 7 sentences and a single trivial example. However, this was enough to pique my curiosity.

The basic syntax is described as:
FUNCTION(receiver, selectorName, arguments, ...)
...and the target selector is described as having to accept zero or more id arguments and return an id value.

Thus, my custom isEqualsPayload method could receive its test object as normal, though it would have to deviate from a normal -isEqual: by returning a BOOL boxed in an NSNumber instead of the unboxed primitive.
In the FUNCTION syntax itself (passed to the NSPredicate factory method), my target object is clearly SELF, and the argument object can be passed in the regular format style as a simple %@. I screwed up the selector a few times (more haste, less speed - following the simplistic example I forgot to put a colon on the selector name, though my version passes one argument). Several crashes later (yes, NSPredicate seems a little keen to die badly if you get this syntax wrong - but I guess that's the price to pay for both dynamic invocation AND a variadic signature), I had my custom equality method being called - and a nice unique set/relationship.

There IS a caveat attached to FUNCTION in the context of Core Data. The reference says (one of the seven sentences!): "Note that although Core Data supports evaluation of the predefined functions, it does not support the evaluation of custom predicate functions in the persistent stores (during a fetch)." I'm hoping that my case does not intersect with the unsupported case (qualified by "during a fetch"), though I'll need to test it with the SQLite persistent store to see if faulting causes a problem. Hopefully, faulting will occur independently of the execution of this kind of predicate (on the relationship NSMutableSet), and all filtering will occur AFTER the objects have been fully fetched into the set. If not, my implementation (with a relatively small number of objects intended in these sets) might be able to live with a forced fetch of all members of the relationship.
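For illustration, the working shape of the call was roughly this (the 'relationshipSet' and 'candidate' names are mine):

NSPredicate *equalsPayload =
    [NSPredicate predicateWithFormat:@"FUNCTION(SELF, 'isEqualsPayload:', %@) == YES",
                                     candidate];
NSSet *matches = [relationshipSet filteredSetUsingPredicate:equalsPayload];
BOOL alreadyPresent = ([matches count] > 0);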

On the subject of uniqueness in Core Data, it would be rather nice if this could be declared somehow. Gating the addition of objects with checks for uniqueness is awkward, though admittedly the semantics of uniqueness could be more complicated than a local (single attribute) test (e.g. via tuples/joins). It's interesting that Core Data does have an 'index' flag on attributes, which got me excited at one point, thinking that maybe this denoted 'unique key' metadata on the attribute. However, the documentation simply states that this flag is for marking "attributes that might be useful for searching", which indicates a rather less formal meaning.

Friday, January 18, 2008

Travail with scrolling

Had quite the frustrating time with a seemingly simple UI concept over the last few days.

I have some UI (a view hierarchy) consisting of a scrolling single column that shows one tile at each row, but where the row can scroll left/right. These horizontal 'strips' of tiles change depending on the application state, and the number of tiles in a strip typically grows, with the right-most tile representing a 'current object' and all the tiles to the left representing 'historical objects'. What I want to do is scroll the strip to the right-most tile (the current one) whenever a new tile appears in the strip.

Now, all the Cocoa binding magic works beautifully, and so does the view hierarchy. The root view is an NSScrollView containing an NSCollectionView set up to have one column and zero rows (meaning "don't care"). The views that populate this column are the 'rows', each represented by an NSScrollView containing an NSCollectionView set up to have one row and zero columns. Indicating that a certain area of a particular subview should be scrolled-to by the nearest antecedent NSScrollView is trivial: any view can call -scrollRectToVisible: with an area in its own coordinate space, and this message is passed up the view hierarchy to the nearest NSScrollView, which attempts to comply. Rather nice. So, I went looking for where/when I could introduce such a call on my view.

Finding such a place was a little challenging, partly because of what I was trying to do in my code, but partly also because I was looking for a message that gets sent to a view when it has been newly added to a superview and everything has settled down (i.e. the resizing of the NSCollectionView has happened, so the area being requested of the scroller actually exists in its notion of the bounds of the document view). Originally, I thought that -didMoveToSuperview would be a good message, as this might indicate that my new tile was fully integrated into the superview's notion of the world. However, it appears that this message is sent out quite early, and the superview (the NSCollectionView in this case) hasn't had a chance to resize yet.

Then, on advice from the CocoaDev mailing list, I attempted to hook into the frame sizing of the NSCollectionView itself, and use its resize events to trigger a scroll to the last tile. The manner in which you have to register with the NSNotificationCenter to get information about your own frame changes seemed a little odd when I approached an implementation. At the moment (!) I don't understand why a view can't simply override a method to be told about frame and bounds changes, but perhaps there's a reason for having to go about this the 'long way' (maybe the NextStep/Cocoa designers thought they already had a general-purpose advisory concept for all, and that this should also be what the 'local view' uses). Anyway, the number of frame-change notifications I'm getting on the addition of a tile is huge, and this leads me to wonder whether there's something wrong with my code (more about this in a second...). In the end, I opted for each extant tile view to register for its own frame notifications (reasoning that the tile view must be positioned within the NSCollectionView at some point, by definition setting its frame's location). On notification, I test whether this tile is the 'last tile' and, if so, I request a scroll to the bounds of the tile. This seems to work except...
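The registration and handler look roughly like this sketch (the 'last tile' test is stubbed out, as that logic is specific to my controller):

// Per tile, at creation time:
[tileView setPostsFrameChangedNotifications:YES];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(tileFrameDidChange:)
                                             name:NSViewFrameDidChangeNotification
                                           object:tileView];

- (void)tileFrameDidChange:(NSNotification *)note
{
    NSView *tile = [note object];
    if ([self isLastTile:tile]) { // hypothetical helper identifying the right-most tile
        [tile scrollRectToVisible:[tile bounds]];
    }
}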

I noticed that the result of the scroll was that the strip was positioned one pixel (or something) too far right, so I could see a tiny portion of the tile to the left of the last one. This looked pretty awful and led me to wonder further whether I still hadn't found the correct place to put my scrolling request. Nevertheless, the scrolling was almost happening right, and I had rechecked that the requested bounds were indeed the current bounds of the tile. Mmmm.

The big question was (assuming I _was_ basically doing the scrolling at the right time): what could cause a rectangle to be incorrect by a small amount between when I had sampled it from [myTile bounds] and when it was eventually positioned within the coordinate space of the superview? I resorted to rechecking my NIB, and what I found was that I had 'Autoresizes subviews' set on some of the views above the tile in the eventual view hierarchy (namely the NSCollectionView representing the strip, and the NSScrollView above this). By turning off the resize flag on the NSCollectionView (figuring tiles in an NSCollectionView are _never_ generally resized), the out-by-one error disappeared. Flushed with success, I went on to turn off the 'Autoresizes subviews' flag on the NSScrollView above this, BUT THE PROBLEM REAPPEARED.

At this point, I have fathomed a working set of conditions to get the scrolling to do what I want, but I'm still bothered by the sense that I don't know when view hierarchies are supposed to have 'settled down' after the perturbation of having a deep subview added. I clearly need to know when such a point has been reached, and when it's safe to request a scroll in a coordinate space way down the hierarchy that is highly dependent on all the transforms up the tree.

No doubt I'll work up the energy to research this further when my currently working code decides to be sensitive to some new condition of the application in the future...

Sunday, January 13, 2008

More dynamic coolness... I think

I have a requirement to force the order of some items in my UI according to a property value. These objects being sorted are CoreData NSManagedObjects and, as such, have no formal order - you have to assert this yourself. Binding to a Core Data Managed Object Context from an NSArrayController is a Thing of Beauty, but now I need to assert the order of my objects in the UI. So, naturally, I need to connect the NSArrayController Sort Descriptors binding to a sort descriptor (actually an array of NSSortDescriptors) that will cause the right ordering of its arrangedObjects.

Now, I figure:
1. That I will always want a particular ordering enforced
2. That I don't know where to construct and keep this array of NSSortDescriptors, because this is entirely a UI thing. I could create a property of a reachable object (via NSApplication) that is initialised to this array, but that's not very satisfactory.

It occurs to me that what I really want is to have an instance of the appropriate sort descriptors right there in the NIB - and of course this is exactly what one can do with the Object component in Interface Builder.

But first, I have to arrange to have an instance of an object that IS the right sort descriptor. Without any Interface Builder plugin jiggery-pokery to inject an instance with a particular value set as a property, this is best served by creating a new class that always sets up the appropriate sort descriptor for a particular key and order (e.g. descending).

Forgetting for a moment that what I needed was an array of sort descriptors, I created a subclass of NSSortDescriptor whose -init sets up the appropriate sort specification. An instance in the NIB was then easily created by dropping in an Object component from the palette and setting its class to my new class. A binding can easily be made to this object from the relevant NSArrayController in the same NIB.
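For example, a minimal sketch of such a subclass (the key name here is a stand-in for whatever model property drives the ordering):

@interface DescendingDateSortDescriptor : NSSortDescriptor
@end

@implementation DescendingDateSortDescriptor
- (id)init
{
    // Always sorts by the same key, descending.
    return [super initWithKey:@"creationDate" ascending:NO];
}
@end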

Then I remembered: "Oops, I need an array of these things, even if it only contains the one sort descriptor object". Mmmm, should I change my class to construct a private NSArray with 'self' as the only element (given that I only wanted the one descriptor and had made the class a subclass of NSSortDescriptor)? I could then have a property on the object that exposes the array, which is what I'd bind to from the NSArrayController. Though, perhaps this shouldn't really be an NSSortDescriptor if I add this illusion of containment?

Another thought occurred. Given the wonders of 'duck typing', I could just have my subclass of NSSortDescriptor masquerade as an array by implementing -count and -objectAtIndex:. In that case, -count would ALWAYS return 1, and -objectAtIndex: would return 'self' for index 0, or raise an NSRangeException otherwise.
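That is, something like this sketch:

- (NSUInteger)count
{
    return 1;
}

- (id)objectAtIndex:(NSUInteger)index
{
    if (index != 0) {
        [NSException raise:NSRangeException
                    format:@"index %lu beyond bounds of one-element 'array'",
                           (unsigned long)index];
    }
    return self; // the sort descriptor doubles as its own container
}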

I decided, mostly out of curiosity, to try the array masquerading - and this appears to work perfectly.

However, having achieved this, I'm left pondering best practices. Clearly a lot of Objective-C/Cocoa's dynamic dispatch is to be embraced. Moreover, if one is to fully avail oneself of all the features (including KVO and KVB), then dynamic references via key paths are inevitable. However, there remains a question of appropriate patterns in the code you write: it can certainly be misleading for an object instance that advertises itself as being of one class to suddenly know how to behave in entirely different circles. I wonder what the received wisdom is in the Cocoa community around such matters (having just arrived myself). Is this just a matter of diligent documentation (and isn't typing almost just a form of documentation in truly dynamic object systems)?

In this particular case, despite the compactness of having a single instance handle what it means to be an NSSortDescriptor (legitimately, perhaps, making it a subclass of this class) as well as what it means to be an array of exactly one of these objects, it might be considered better form to fully advertise the array qualities, even though they are not really the major concept. More importantly, one could argue that even though I currently have no intention of ever needing multiple NSSortDescriptors in the array (for primary, secondary, etc. sorting), I may one day need to change that constraint - whereupon it would be much better to have a class that advertises that it returns an array of sort descriptors, rather than providing this almost covertly through the happenstance of implementing methods that make the object look like an array to others.

Perhaps I should spend some time looking at best practices where such conundrums have existed for decades: Smalltalk. Although in modern times, I suspect such problems can be better dealt with by the overt documentary power of traits, there must have been many occasions where, like this occasion, there was no protocol to announce "this object has array-like behaviour" despite inheriting from class X. It would be interesting to analyse what choices were made in these situations and which (with hindsight) were considered malodorous!

The many colours of Cocoa drawing

Having used a whole range of drawing libraries over the years, beginning with Digital Research's GEM VDI, through various versions of the Windows GDI, to Java Graphics and later Java2D (Graphics2D), I can attest to the fact that Cocoa's drawing is modern and powerful. Yet at the same time it has a number of surprises (which perhaps derive, as the Cocoa Drawing Guide puts it, from its legacy in NextStep).

I have no experience at all (yet) of directly driving what purports to be the more modern and powerful drawing library on OS X: Quartz 2D.

On first encountering Cocoa drawing (the usual experiments to orientate oneself), one of the things that immediately struck me was the way in which graphical objects express themselves directly into the drawing context. For instance, instead of something like:
[myGraphicsContext setColor:[NSColor blackColor]];
one writes instead:
[[NSColor blackColor] set];

Similarly, true graphical objects like images and paths can simply be told directly to draw themselves into the current graphics context. In Cocoa, the current context lives sort of 'in the ether': you do not get passed a context in your drawing method (nor do you ordinarily obtain or create one), and you can issue drawing commands at any time. To target a specific drawable context (such as an image), you simply send the object that contains the target for drawing a 'lockFocus' message. At that point, all drawing done with any graphical object will go to that context until 'unlockFocus' is sent to the same target.
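For instance, drawing into an off-screen image looks something like this sketch:

NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(64.0, 64.0)];
[image lockFocus]; // all drawing now targets the image
[[NSColor blackColor] set];
[NSBezierPath strokeLineFromPoint:NSMakePoint(0.0, 0.0)
                          toPoint:NSMakePoint(64.0, 64.0)];
[image unlockFocus]; // drawing reverts to the previous context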

This is quite an interesting inversion to my previous experiences. Indeed, so was the fact that classes such as NSString have an intrinsic ability to draw themselves.

Another oddity is that Cocoa has a set of objects that do know how to draw themselves (paths, strings, images etc.), but some of the more convenient shapes (like rectangles, ellipses etc.) are not available directly. Instead, for convenient drawing of rectangles one uses a family of NS* (e.g. NSRectFill) functions. Drawing rectangles is arguably one of the most common drawing operations, and it's interesting that rectangles are treated so differently (presumably because they are simple and historically NextStep maintained them as simple structs rather than as true objects).

Recently, I had a need to create a hatch pattern: a sort of barber-pole that I could draw over other graphics as a visual cue. As a Cocoa n00b, I thought "OK, I bet there's a nice pattern brush I can set that will be used during fill operations". Hitting the Cocoa Drawing Guide though, I was surprised to find that things aren't so simple. Well... they are and they're not.

It turns out that OS X drawing doesn't support brushes as such. Reading the various porting guides (for Windows developers and QuickDraw developers) pointed to a feature of Quartz 2D where you can create handlers for drawing 'patterns' programmatically. You simply register a callback, and your routine will be called to fill in the pixels of a 'cell' - part of a tiled pattern being drawn. However, further investigation revealed that this feature is not exposed through Cocoa. Instead, Cocoa has a curious (to me) option of creating a patterned NSColor from an image. I suppose it's not that weird when you consider that pattern brushes are just as 'magical' in their effect as a sort of 'magic ink' concept.

Anyway, this colour can then be set into the graphics context as usual with the -set method, and further drawing operations performed. The downside of this approach, at least in my case, is that you have to start with an image, i.e. you either happen to have one lying around with the right pattern and colours/alpha, or you go to the trouble of making an image programmatically and caching it somewhere. As I only wanted a very simple monochromatic seamless texture, the ability to define an appropriate area and be called back to render a single unit/pixel would have been quite convenient. In the end, I opted to launch Photoshop rather than write the code to programmatically create an image (though you could argue that I'd have had to write essentially the same code if Quartz 2D's programmatic patterns had been available directly in Cocoa).
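The pattern colour itself is a one-liner (assuming a suitable tile image named "hatch" in the bundle):

NSColor *hatch = [NSColor colorWithPatternImage:[NSImage imageNamed:@"hatch"]];
[hatch set]; // subsequent fills now paint the tiled pattern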

Having created my pattern colour, I was caught out by another n00b error (wrong assumption). I attempted to draw my hatch pattern (complete with fully transparent pixels) over my existing drawing using NSRectFill. This seemed to work great until I moved the window around and realised that the transparent bits of the pattern were composited in such a way as to make my window transparent in those sections! I wondered whether the "pattern as a colour" thing was fully supported for arbitrary drawing with alpha, and I tried setting various compositing modes into the graphics context to see if that would make a difference - to no avail. Only later did I revisit the documentation on NSRectFill (which I thought I was familiar with!) to discover that it always uses NSCompositeCopy when drawing. Ah ha! By this time I had created code to render a rectangle into an image, and then drawn this in turn into my view, but I was most satisfied to rip all that out in favour of a simple substitute rectangle draw call: NSRectFillUsingOperation.
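The replacement call, naming the compositing operation explicitly so the pattern's alpha is respected:

// NSRectFill always uses NSCompositeCopy; this variant lets us choose.
NSRectFillUsingOperation([self bounds], NSCompositeSourceOver);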

To wind up this somewhat meandering post, I'm sure that I have many more 'ah ha' moments to experience as I continue to get to grips with Cocoa drawing. However, and despite the various asymmetries I perceive in its design, I am beginning to properly develop a _real_ familiarity and trust in how it all works. I'm finding that I need to keep the Cocoa docs pretty close to hand though - probably a little more than I found I needed to do when learning Java2D.

Wednesday, January 9, 2008

A buddy for the last bug?

There seem to be other interactions between AppleScript (the Cocoa scripting bridge) and the GC. Specifically, just obtaining the SBApplication object (the first thing one typically does with the scripting bridge in order to talk to another app) appears to fail intermittently with an EXC_BAD_ACCESS when GC is enabled (I'm doing this once every several minutes and holding the reference to the SBApplication object on the stack - so old instances must get collected fairly regularly).

I guess there are still a few rough edges here and there with the GC interacting with various components - hardly surprising when you consider the sheer scale of the changes it induces and how broadly Leopard revamped OS systems, services and libraries to accommodate both GC and 64bit application architectures.

Hopefully we'll see the expected slew of improvements through the wee numbers of 10.5 increments to come (10.5.2, 10.5.3, &c.).

Monday, January 7, 2008

21st Century Errors and Apple Script - redux

OK, it appears to be Apple's bug.

Even the most trivial app exhibits the crashy behaviour (or, at best, the -1700 error about a third of the time).
The problem appears to be with GC. Turn it on (Supported or Required) and you're in Crashville, turn it off and everything is peachy.

Unfortunately (call me a wimp!) there's no way I'd consider working around this issue by converting to a retro non-GC mode for development. I'd like to claim that I've done my time with reference counted memory managers and been let off for good behaviour. In my dotage I rather like the idea of the computer doing something that computers are very good at: keeping track of all the nitty-gritty details of what bits of memory are linked to what and doing the spring cleaning all on its own :-)

Apple bug 5674625 raised, with a simple reproduction case attached (though I bet it's a dup by now - I can't believe there aren't others trying to use NSAppleScript in such a simple way).

Now... maybe I should take the opportunity to learn how to make out-of-proc calls with Cocoa's distributed objects to a little mini-server dedicated to running a single AppleScript contribution and returning the result (or I suppose I could just fork an osascript process!).

21st Century Errors and Apple Script

Playing with NSAppleScript, I'm currently befuddled by -compileAndReturnError: returning the following:
{
NSAppleScriptErrorNumber = -1700;
}

Nice.

NSAppleScript.h has no error codes listed, and a search (so far) has turned nothing up.

Actually, about half the time, the particular lines of code (run on the main thread of course, as required by NSAppleScript):

NSDictionary *errors;
[script compileAndReturnError:&errors];

will actually crash the application with the stack:

#0 0x923dfd7c in getDescDataType
#1 0x923e3ad7 in aeCoerceDescInternal
#2 0x923e8075 in AECoerceDesc
#3 0x005ae150 in ComponentCoerceDesc
#4 0x00592bec in ASCompile
#5 0x9600b5bc in CallComponentFunction
#6 0x0058dae2 in AppleScriptComponent
#7 0x005a9927 in AGenericManager::HandleOSACall
#8 0x95fc5ef5 in CallComponentDispatch
#9 0x923cd513 in OSACompile
#10 0x92d84eff in -[NSAppleScript compileAndReturnError:]
...
#23 0x9577692e in NSApplicationMain
#24 0x00002b84 in main at main.m:13

Double nice.

The source with which the script has been initialised has variously been:
"current date"
"beep 2"
and
"tell current application to beep 2"

The opaqueness of the error message, not to mention the intermittent crash, suggests that this one is going to take some attrition to actually chase down. What fun. Watch this space...

Saturday, January 5, 2008

Superstrings and symmetry

Yum.

I like the way I can add local 'special powers' to framework classes such as NSString to extend them to do specific useful things in my app, in keeping with their other more general capabilities.

For example, NSString has methods to create strings from concatenation, or more complex conjoining of components through formatting. These are general, naturally. My app required some specific formatting of strings to make UIDs with variously parseable fields. The methods for this can take specific types for the components, and can perform preflight checks on the validity of these parameters before combining them into the required string image. These methods are specific TO my app, but pretty general WITHIN my app.

So, where should these methods be defined?
- They are essentially pure functions, so should be class methods
- They could be placed on some 'top level' application class as an indicator that they are global to the app
- They could be placed in some kind of 'utility' class, perhaps implying a set of such classes e.g. StringUtils, FileUtils etc.

However, the basic string construction methods are on NSString itself, so wouldn't it be nice to add some methods to this class that are only visible in my app (...Ruby, anyone?). Of course, this is exactly what Objective-C categories allow you to do.
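For instance, a sketch of the kind of category I mean (the method is a made-up stand-in for my actual UID constructors):

@interface NSString (UIDAdditions)
// Builds a parseable UID image from typed components.
+ (NSString *)uidStringWithPrefix:(NSString *)prefix serial:(NSUInteger)serial;
@end

@implementation NSString (UIDAdditions)
+ (NSString *)uidStringWithPrefix:(NSString *)prefix serial:(NSUInteger)serial
{
    NSParameterAssert([prefix length] > 0); // preflight the components
    return [NSString stringWithFormat:@"%@-%lu", prefix, (unsigned long)serial];
}
@end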

Benefits of doing it the Category way:
1. The formatting methods can be styled exactly like the similar, more general methods on NSString - nice symmetry, and in many ways this has the benefit of being where we expect to find them (though of course you have to include the category to get these additions).
2. We save having to create a slightly naff 'utility class' just to be a home to a few ragbag methods whose only grounding concept is the string.
3. (Compared to something like Ruby)... We get all the proper compile-time type checking from the category's declarations when calling them on the class or instances on which the category is defined.

Having said all that, I have no idea about any canon, tradition/religion in the Objective-C or Cocoa community about when and how to use Categories - ergo, whether this is considered de rigueur. However, it seems elegant to me and I'm quite content to make up my own mind on such matters :-)

Friday, January 4, 2008

Apple Event nesting, Mail no likey

Hmm...

My continued experiments in scripting (in this case Mail) have thrown up an interesting issue.
If I get Mail to run an AppleScript that tells my app about the arrival of some new mail, and then attempt to read this new mail from my application within this call, all sorts of things go wonky.

The most basic failure is that my app fails to 'see' the newly arrived mail when asking Mail. It's as if Mail is not equipped to handle Apple Events at certain times (such as when processing rules), or perhaps this is a more general problem with Apple Events on the Mail process being nested.

The thought had occurred to me while writing the scripting bridge code that it may not be possible to nest an Apple Event connection to a given process inside another connection from that same process - though I was not able to find any documentation that explicitly told me NOT to do this.

Aside from simply not being able to query Mail as I'd expect, I've experienced a range of other nasties that may be related, including:
1. Apparent lock-ups of my app (but not every time)
2. ...inability to stop the app as a debuggee within Xcode
3. ...and (after 2) a full kernel panic

The fix seems to be to avoid the nesting. Simply 'step out of the way' of the original Apple Event from Mail and let it complete before sending any messages the other way on a completely different stack (i.e. 'later').
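In practice, a zero-delay deferral is enough to get off the original Apple Event's stack - something like this sketch (the selector being my own hypothetical handler):

// Return from the Apple Event handler first; query Mail on a fresh stack.
[self performSelector:@selector(queryMailForNewMessages)
           withObject:nil
           afterDelay:0.0];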

I'm pretty sure the kernel panic was due to a combination of sitting on a break point while handling the Apple Event from the Mail process, then (possibly after some time-out), attempting to kill the debuggee while some low-level IPC code thought some resources were still being locked between the two processes.

It seems more likely that the problem I'm experiencing is an issue with the internal state of Mail at the time it processes mail rules, rather than a general problem with nesting Apple Events between processes (which would seem rather fundamental for a general IPC mechanism). As a newbie, I need to do some more research...