Saturday, October 17, 2009

OPL Patterns: Pipes & Filters, Layered Architectures, and Iterative Refinement

Pipes & filters

The version of the pipes and filters pattern for this assignment was much shorter than the last reading because there was no detailed example and no extra fluff. The original had an overly complex example that I found useless. However, some omissions jumped out: the new version doesn’t differentiate push vs. pull filters, and it drops the discussion of problems with error handling and global state. Also, the original made four points on why the gains from parallel processing are often an illusion; the new one didn’t raise or counter any of those concerns.
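To make that push/pull distinction concrete, here is a rough Python sketch (the filter names and the trivial uppercasing transformation are my own illustration, not from either reading). In the pull style the sink drives the pipeline by asking each upstream filter for the next item; in the push style the source drives it by handing each item to the next filter as it arrives.

    # Pull-style filters: each stage is a generator that pulls from its source.
    def read_lines(path):
        with open(path) as f:
            for line in f:
                yield line.rstrip("\n")

    def to_upper(source):
        for item in source:
            yield item.upper()

    # The sink pulls data through the whole pipeline on demand:
    #   for line in to_upper(read_lines("input.txt")):
    #       print(line)

    # Push-style filters: each stage pushes its output to the next stage.
    class UpperFilter:
        def __init__(self, downstream):
            self.downstream = downstream

        def send(self, item):
            self.downstream.send(item.upper())

    class Printer:
        def send(self, item):
            print(item)

    # The source pushes items in as they become available:
    #   pipeline = UpperFilter(Printer())
    #   pipeline.send("hello")

The practical difference is mostly about who controls scheduling and buffering, which is exactly the kind of consideration the shorter write-up leaves out.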

The “Forces” sections contained some practical considerations that I don’t believe were mentioned in the original. It was nice to have those considerations called out in a short, independent section rather than buried in dozens of pages of text as they were in the POSA reading, in my opinion. Overall, I found all three OPL write-ups more concise; they drop some verbiage and context, but I feel they retain 90% of the useful information in 10% of the space.

Layered architectures

I think there’s too much emphasis on how layers hurt performance. While true, it doesn’t matter for most applications, which involve neither systems programming nor intense computation; and even in the latter kind of system, the critical paths can have their layers collapsed while the rest stays layered to maintain modularity.
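As a rough illustration of what I mean (the three layers here are hypothetical and not from the reading), the ordinary code path goes through every layer, while a performance-critical caller is allowed to talk to the bottom layer directly:

    # Hypothetical three-layer read path; the layers are made up for illustration.
    class BlockDevice:                       # lowest layer
        def read_block(self, n):
            return b"block %d" % n           # stand-in for real device access

    class BlockCache:                        # middle layer
        def __init__(self, device):
            self.device = device
            self.blocks = {}

        def read(self, n):
            if n not in self.blocks:
                self.blocks[n] = self.device.read_block(n)
            return self.blocks[n]

    class FileSystem:                        # top layer
        def __init__(self, cache):
            self.cache = cache

        def read(self, n):
            return self.cache.read(n)

    device = BlockDevice()
    fs = FileSystem(BlockCache(device))
    data = fs.read(7)            # ordinary path: modular, goes through every layer
    raw = device.read_block(7)   # critical path: bypasses the layers above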

I liked the point about keeping the API smaller on lower layers to decrease the work necessary to change the underlying implementation.
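Here is a small sketch of why that helps (the key-value interface is a hypothetical example of mine, not from the reading): if the lower layer exposes only get and put, the layers above learn nothing about the backing implementation, so replacing it touches a single class.

    # Narrow lower-layer API: only two operations are visible to the layer above.
    class KeyValueStore:
        def get(self, key): raise NotImplementedError
        def put(self, key, value): raise NotImplementedError

    class InMemoryStore(KeyValueStore):
        def __init__(self):
            self._data = {}

        def get(self, key):
            return self._data.get(key)

        def put(self, key, value):
            self._data[key] = value

    # The upper layer depends only on get/put, so the store could later be swapped
    # for a file-backed or networked implementation without changing this code.
    class SessionLayer:
        def __init__(self, store):
            self.store = store

        def remember(self, user, token):
            self.store.put(user, token)

        def lookup(self, user):
            return self.store.get(user)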

Iterative Refinement

My reading is that you should find high-level steps that need to be done sequentially, and then, inside each step, find tasks that can be performed in parallel. It sounds similar to an article I just read in the November/December issue of IEEE Software, “Parallelizing Bzip2: A Case Study in Multicore Software Engineering.” Four pairs of students were tasked with parallelizing Bzip2. Most teams found that simple, low-level optimizations such as parallelizing loops were insufficient for achieving substantial gains. The successful teams found that the best approach was to refactor the code into high-level modules that were independent but had to be executed in sequence, and then to optimize the code within each module (if I’m remembering the article correctly), which sounds just like this pattern.
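A sketch of that structure in Python (the stages and the use of a process pool are my own stand-ins, not taken from the article or the pattern text): the two stages run in order because the second depends on the first’s output, but the independent chunks inside each stage are processed in parallel.

    from concurrent.futures import ProcessPoolExecutor

    def compress_chunk(chunk):
        return chunk[::-1]            # stand-in for the real per-chunk work

    def checksum_chunk(chunk):
        return sum(chunk) % 256       # stand-in for a second, dependent step

    def run_pipeline(data, chunk_size=4):
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor() as pool:
            # The stages are sequential at the top level: stage 2 needs stage 1's output.
            # Within each stage, the chunks are independent and run in parallel.
            compressed = list(pool.map(compress_chunk, chunks))
            checksums = list(pool.map(checksum_chunk, compressed))
        return compressed, checksums

    if __name__ == "__main__":
        print(run_pipeline(bytes(range(16))))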
