Tuesday, September 29, 2009

Our Pattern Language

I liked the idea of creating patterns that can be used as modules to abstract parallel programming problems, and I liked the tone of the paper. However, I don't feel that it ever got to a level of detail that would provide me with any useful information. Some of the patterns listed (backtrack branch and bound; dynamic programming; process control) will be jumping-off points for further reading for me, but I need to get somewhat deeper into the patterns.

I was a fan of the way that the paper created patterns for different types of programmers rather than trying to completely abstract the parallelism from the typical developer or exposing too much detail in order to target parallel programming framework developers.

Parallelism is becoming a hot topic in computer science and software engineering today because processors are reaching the limits of how fast they can get on a single core. Increases in speed now must come from focusing on utilizing multiple processors or processor cores.

I think my personal challenges in programming parallel processors come from the fact that these applications do not execute deterministically. It's often next to impossible to duplicate timing-related bugs in a controlled development environment. If there were better tools and support from languages and patterns, more robust code would typically be written and these difficult bugs would be less frequent.
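The kind of nondeterminism I mean can be shown in a few lines of Java (a contrived sketch of my own, not code from any paper): two threads race on an unsynchronized counter, the read-modify-write increments interleave, and the number of lost updates changes from run to run.

```java
// A minimal sketch of a timing-dependent bug: two threads increment a
// shared counter without synchronization, so updates can be lost.
public class RacyCounter {
    static int counter = 0;

    static void increment(int times) {
        for (int i = 0; i < times; i++) {
            counter++; // read-modify-write: not atomic
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> increment(100_000));
        Thread b = new Thread(() -> increment(100_000));
        a.start(); b.start();
        a.join(); b.join();
        // Often prints less than 200000, and a different value each run.
        System.out.println(counter);
    }
}
```

Run it a few times and you'll rarely see the same number twice, which is exactly why these bugs are so hard to reproduce in a debugger.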

Sunday, September 27, 2009

Beautiful Architecture Chapter 10: The Strength of Metacircular Virtual Machines: Jikes JVM

I found this chapter terribly boring. I don't understand how anyone who's not building a compiler or VM would glean any useful information from this chapter, and that will likely never describe me or the majority of this book's readers.

The only thing that I found noteworthy was the challenging of the idea that garbage collection is slower than explicit memory management. The author asserts that not only do managed environments perform at least as well as explicitly managed memory models, but they may even perform better. This was based not only on the fact that automatic garbage collection is an advanced area of computer science research that has been around for many years now but also on the fact that explicit memory managers are much more likely to have problems with memory fragmentation.

One advantage of having the JVM be self-hosting is that communication between the runtime and its applications is made easier by the fact that they both use the same language and can share data structures and communication paradigms with no need for a translation layer.

If Jikes were rewritten today, it should be able to leverage the threading model of modern operating systems rather than having to implement a "green threading" model. The author says that "The primary disadvantage is that the operating system is not aware of what the JVM is doing, which can lead to a number of performance pathologies, especially if the Java application is interacting heavily with native code (which itself may be making assumptions about the threading model)." There is no need for this disadvantage given modern hardware and software support for multithreading, multicore, and multiprocessor systems.

Saturday, September 26, 2009

Beautiful Architecture Chapter 9: JPC

Emulating an x86 in a JVM and getting speeds that are in any way decent sounds just about impossible, but this chapter walked through a lot of details on how they made it work.

I'm still not too clear on the difference between an emulator and a VM. I wish the author had clarified that a little. I also wish the author had put the "Ultimate Flexibility" and "Ultimate Security" sections up front, because the whole time that I was reading the chapter, I was wondering what the big benefits are. Even after reading those sections, though, I'm still not clear on the emulator's advantages over a VM. One thing that they were touting as a feature was being able to snapshot the system in just about any state. However, I know that the infrastructure team at my company does something similar with VMs all the time. I'm guessing that the difference is that the VM snapshots don't capture active processes but the emulator snapshots do. Pretty cool features, though I'm still not sold on the practicality of the whole system.

The fact that this emulator would be running in the JVM's sandbox seemed like it would be a major problem because the JVM would restrict many operations that I imagine the emulator would try to perform. Some of these problems were mentioned in the Xen chapter on why hardware or software support was necessary for the hypervisors to work correctly and trap restricted operating system calls...though as I'm writing this I'm realizing that the problem there was calls failing silently without the hypervisor being able to trap them, but the emulator is going to see every single instruction. I'm still not sure how the emulator would be able to handle requests for access that the JVM prevents the emulator from carrying out.

Although I don't typically work in Java on a day-to-day basis, I found all of the low-level optimization techniques interesting. However, from a higher-level design and architecture point of view, I didn't take away as much from this chapter. I did become much more familiar with the JVM, class loaders, etc., but that wasn't quite what I was expecting.

Thursday, September 24, 2009

The Adaptive Object-Model Architectural Style

The Adaptive Object-Model architecture sounds like a very interesting concept, but I have not had much success with it in the past.

The AOM architecture moves the definition of entities, their attributes, and their relationships into the database. The code deals with very generic types and has to know how to interpret the model found in the database. In my experience, this can lead to a very flexible data model whose flexibility is not realized at all in the code. Every system that I have ever worked on has ended up with too many special cases to be successfully modeled by such a generic system. However, the system that I worked on did not try to model strategies and rule objects in the database, which was likely the primary reason it could not work generically with the data model.
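To make the idea concrete, here's a hypothetical Java sketch of the AOM style (the names are mine, not the paper's): entity types and their attributes are data that would normally be loaded from the database, and the code manipulates only generic types.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of the Adaptive Object-Model idea: entity types and
// their attributes are data, not classes.
class EntityType {
    final String name;
    final Map<String, String> attributeTypes = new HashMap<>(); // attr -> type name
    EntityType(String name) { this.name = name; }
}

class Entity {
    final EntityType type;
    final Map<String, Object> attributes = new HashMap<>();
    Entity(EntityType type) { this.type = type; }

    void set(String attr, Object value) {
        // The "schema" check happens at runtime, against data.
        if (!type.attributeTypes.containsKey(attr))
            throw new IllegalArgumentException("Unknown attribute: " + attr);
        attributes.put(attr, value);
    }
    Object get(String attr) { return attributes.get(attr); }
}

public class AomSketch {
    public static void main(String[] args) {
        // In a real AOM these definitions would be loaded from the database.
        EntityType product = new EntityType("Product");
        product.attributeTypes.put("price", "Decimal");

        Entity widget = new Entity(product);
        widget.set("price", 9.99);
        System.out.println(widget.get("price"));
    }
}
```

The flexibility is obvious, but so is the downside I ran into: every access is stringly typed, and the compiler can no longer catch the special cases for you.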

Tuesday, September 22, 2009

Big Ball of Mud

The big ball of mud that I worked on became a terrible mess for one key reason: no one on the original development team knew anything about good design. They were a bunch of DB guys who wrote 1,000-line stored procedures from hell and 1,000-line methods that were absolutely impossible to comprehend as a whole. Over time, subsequent developers just stuck with the anti-patterns in place because consistency seemed better than having many different paradigms littered throughout the code. The complete absence of unit testing made refactoring practically impossible.

I think that it is obviously more involved to build a system with a good architecture up front than to have no architecture at all. Projects that I’ve worked on have had good success by being Agile/Lean about it and implementing just enough architecture at the last responsible moment. We typically come up with a layered architecture and an idea of what levels of abstraction & functionality will go in each and how they will be coupled. Other than that, we try to let the architecture evolve somewhat organically. Pair programming, peer code reviews, and architectural reviews all look for opportunities to refactor to patterns. As long as we keep the public interfaces as small as possible and keep unit test coverage high, this refactoring usually isn’t too difficult.

I disagree with the “Make it work, make it right, make it fast” mantra. I think that repeatedly cycling between “make it work” & “make it right” in very small increments leads to good systems. However, adding “make it fast” onto the end implies to me that after you finish one round of “making it right” and before you begin “making it work” again, you should do performance optimization, which I believe should typically be delayed until it’s needed. I much prefer the TDD mantra of “red, green, refactor,” which boils down to “make it, make it work, make it right,” with the crucial assumption that the “red, green, refactor” loops are as small as possible.

Throwaway code is rarely thrown away. I detest implementing quick prototypes where deadlines are tight and design is naught because the code always makes its way into a production system and it rarely gets refactored because “it just works” and the design often prohibits adding unit tests to facilitate refactoring.

Sweeping it under the rug is never a good thing.

I am currently trying to lead an effort to reconstruct the big ball of mud that I was stuck working on for two days per week for two years. The project has taken on so much technical debt that it takes an immense amount of time to implement the simplest features. It is terrifying to make any changes to the behemoth because there are no automated unit tests, and it is typically very difficult to figure out where the code is actually used by an end user and whether the output has changed.

Monday, September 21, 2009

Beautiful Architecture Chapter 8: Guardian

The disadvantage of Guardian’s naming system was that it was overcomplicated and inconsistent; all around, it sucked. There were no advantages to it.

I’ve never worked on a system that has needed much fault tolerance. However, when reading the section about the messaging system, it reminded me immensely of the good things that I’ve heard about Erlang leading to very reliable systems due to its message passing style of communication.

Guardian was terribly insecure.

Guardian’s checkpoint system seemed terrible for several reasons. It struck me as a huge flaw that the programmer had to explicitly invoke checkpoints. A decent system should hide that. If it forces you to do this manually, you would think that would at least remove some overhead so that the system could perform well.

I thought it was very interesting that EXPAND and FOX allowed the system to scale to more processors with trivial ease, though apparently the performance gains were not quite what you would expect.

Overall, this chapter was a big waste of time. The book is named “Beautiful Architecture,” not “Grotesque Architecture.” There was very little to take away other than the fact that this OS failed because it was poorly designed. No good lessons, though.

Friday, September 18, 2009


This paper may have had some significance back in its day, but it seemed like nothing more than Common Sense 101. It didn’t add anything to what I consider the standard pattern of layering your architecture to group certain concerns and layers of abstraction together and decouple different layers.

Refactoring to this pattern will become exponentially more difficult as the project progresses. Trying to refactor a big ball of mud would be extremely difficult because of all the coupling.

I’m not quite sure of the difference between this pattern and pipes and filters, other than that pipes and filters seemed to be about having all the code at the same low level of abstraction with quick & small filters, while this paper was talking about larger modules at different levels of abstraction. The OSI example seemed to wrap the data from each layer, while the streaming multimedia examples of pipes and filters operated on the same data over and over.

My professional experience with planning architecture has matched the authors’ recommendations. It is extremely difficult to work from the bottom up and predict what low-level services you will need to provide to the higher layers without knowing what services the higher layers will provide. The yo-yo way of working from top to bottom, back to top, & repeating has served me well.

I have also found that having layer J depend on any layers other than J-1, whether they be above J or below J-1, is a slippery slope and degrades most of the benefits of this pattern.
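The strict-layering rule I'm describing can be sketched in a few lines of Java (hypothetical names, my own example): each layer is handed only the interface of the layer directly beneath it, so layer J can't reach past J-1 even by accident.

```java
// A minimal sketch of strict layering: each layer sees only the
// interface of the layer directly beneath it.
interface DataAccess { String load(int id); }
interface BusinessLogic { String describe(int id); }

class SqlDataAccess implements DataAccess {
    public String load(int id) { return "record-" + id; }
}

class DefaultBusinessLogic implements BusinessLogic {
    private final DataAccess below; // the only layer visible from here
    DefaultBusinessLogic(DataAccess below) { this.below = below; }
    public String describe(int id) { return "Loaded " + below.load(id); }
}

class Ui {
    private final BusinessLogic below; // the UI cannot reach DataAccess
    Ui(BusinessLogic below) { this.below = below; }
    String render(int id) { return "<p>" + below.describe(id) + "</p>"; }
}

public class LayeringSketch {
    public static void main(String[] args) {
        Ui ui = new Ui(new DefaultBusinessLogic(new SqlDataAccess()));
        System.out.println(ui.render(7)); // <p>Loaded record-7</p>
    }
}
```

The moment the Ui constructor also accepts a DataAccess "just this once," you're on the slippery slope I mentioned.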

Tuesday, September 15, 2009

Beautiful Architecture Chapter 7: Xen and The Beauty of Virtualization

This chapter wasn’t quite my cup of tea. I am much more interested in software architecture of the form of design patterns than this type that straddles the boundaries of software and hardware architecture. Hopefully there are still more interesting chapters ahead.

I did find it interesting how the concept of mutual distrust between the guest VMs and the hypervisor host helped improve the reliability of the software, as opposed to the mutual trust relationship necessitated by grid computing.

Allowing guest operating systems to perform privileged operations when running on top of a hypervisor is a challenge because the operating system is no longer at the most privileged level. This means that the guest OSes will not be able to successfully call many crucial low-level commands. Before Xen, when guest VMs would execute privileged operations, some would fail in a way that allowed the hypervisor to trap the instruction, correctly execute the request, and return control to the VM. However, there were many privileged operations that would fail silently, without giving the hypervisor a chance to trap and complete the operation, thus causing the guest VM to fail. That meant hypervisors had to scan the guest VMs at run-time and replace privileged calls that would fail silently with calls that go directly to the hypervisor. For the operations that would not fail, hypervisors had adapters that presented an interface to the guest operating systems that looked exactly like the physical hardware, and then the hypervisor would translate the operation and send it to the hardware in the proper way. Xen handles the problem by having the guest operating systems be aware that they are running in a virtual machine so that they communicate directly with Xen when they need to execute a privileged operation. This meant that out-of-the-box OSes couldn't run on Xen; the source had to be modified to be compatible.

One of the primary concerns when designing Xen was separating policy from mechanism. In order to achieve this, a special “domain 0” is started with the hypervisor. It runs on top of the hypervisor like the guest operating systems but is able to perform privileged operations not offered to guest operating systems. The policy is put into domain 0, which operates at a higher level than the hypervisor and handles many calls from the guest operating systems. The mechanism is in the thin and simple hypervisor. An example that the author provides is the initialization of a new virtual machine. Domain 0 is responsible for most of the heavy lifting involved in the setup and configuration. The hypervisor simply receives commands from domain 0 to set up a new domain and allocate some memory to the new VM.

Eventually chipset manufacturers added support for hardware virtualization so that when applications at the highest privilege levels, i.e. the VMs, execute privileged operations that fail, the hypervisor is able to trap and intercept all of those failures. The fact that Xen is open source helped during this transition in a few ways. For one, Intel and AMD were able to contribute low-level patches to Xen so that it would work with their new hardware. Also, Xen was able to make use of other open source applications to emulate BIOSes and create virtualized hardware interfaces.

The IOMMU is one of the most recent forms of hardware support for virtualization. It allows shared access to hardware to be multiplexed at the hardware level, without the need for processing from Xen à la shadow page tables. The IOMMU ensures that VMs are only able to see and access addresses in hardware that belong to them.

Monday, September 14, 2009

Pipes and Filters

"Pipes and filters" is probably the architectural pattern that is the easiest to understand other than the big ball of mud. However, I feel that the general population tends to see the pattern in places where it doesn't exist. I don't believe that simply dividing your application into modules qualifies as pipes and filters. I know people who think that any 3-tier web application consisting of user interface, business logic, and data access layers qualifies.

I think the following excerpt from JLSjr's blog on the topic eloquently summarizes my thoughts: "The Filter portion of the concept must be developed by a multi-faceted consideration of factors including reuse, adaptability, scalability, efficiency, and level of granularity among many others. The development of a sensible set of Filters, their interfaces, and data formats is a major portion of the effort." It isn't as trivial as many developers seem to think to find the right level of granularity & abstraction for a filter in order to make each element in a set of filters swappable and to make each filter independent of the stages preceding and following, depending only upon the data that it receives.
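As a contrived illustration of what JLSjr is describing, here's a tiny Java pipeline of my own (not from his post or the paper) where each filter depends only on the string it receives, so stages can be swapped or reordered independently:

```java
import java.util.List;
import java.util.function.Function;

// A minimal sketch of pipes and filters: each filter knows nothing about
// the stages before or after it, only the data flowing through.
public class PipelineSketch {
    static Function<String, String> trim = String::trim;
    static Function<String, String> lower = s -> s.toLowerCase();
    static Function<String, String> exclaim = s -> s + "!";

    // The "pipe": compose the filters in the order given.
    static Function<String, String> pipeline(List<Function<String, String>> filters) {
        Function<String, String> chain = Function.identity();
        for (Function<String, String> f : filters) chain = chain.andThen(f);
        return chain;
    }

    public static void main(String[] args) {
        Function<String, String> p = pipeline(List.of(trim, lower, exclaim));
        System.out.println(p.apply("  Hello World ")); // hello world!
    }
}
```

The hard part JLSjr points at isn't the plumbing above; it's choosing filter boundaries and data formats so that real filters stay this independent.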

I think I’ll save the rest of my thoughts for when I lead the discussion on this topic tomorrow :)

Sunday, September 13, 2009

Beautiful Architecture Chapter 6: Data Grows Up

Given the popularity of Facebook, its ability to scale well (at least compared to peers like Twitter), and the openness and evolution of its platform, I was very interested to dive into this chapter. I'm normally in favor of chapters that dig down into the implementation details of solutions so I have something to sink my teeth into rather than just generic, abstract concepts. This chapter reminded me of the old adage "be careful what you wish for." It was much too long because it was bloated with sample code and detailed discussions of implementation strategies that added nothing to what I gleaned from the high-level overview provided. I found myself about eight pages into the chapter wondering when I'd come across the first noteworthy passage.

With that angry tirade out of the way, I was impressed at how Facebook gave external developers a suite of tools that used familiar interfaces. Those seemingly easy-to-use tools provide what I imagine to be an unprecedented level of flexible integration not only of Facebook's data into their own applications but also, and more surprisingly, of their applications into Facebook itself. I am somewhat concerned, however, that by exposing FQL and FBML, which get lexed and parsed by custom routines, they may be introducing some security vulnerabilities. SQL and HTML have had many years with many pairs of eyes scouring them for ways to fix their vulnerabilities. Even if Facebook has some of the smartest people possible working for them, I think that the incredibly high ratio of black-hat hackers to those trying to secure the platform may come back to bite them.

I think the juxtaposition of the REST chapter with this one made me a bit more skeptical of Thrift and the traditional API that Facebook used to expose its data. I also don't understand a) which of the "benefits" the author associated with Thrift would be considered beneficial rather than detrimental by the author of the previous chapter, and b) what Thrift provides that SOAP doesn't. Granted, my knowledge of both SOAP and Thrift is limited, but I don't think the author helped me out at all. I also found it strange that in the text leading up to the FQL section, the author said that FQL "casts platform data as fields and tables rather than simple loosely defined objects in our XML schema." I thought one of the benefits of Thrift was supposed to be good handling of typing.

Saturday, September 12, 2009

Excerpts from Christopher Alexander

I am purposely skipping this reading since Professor Johnson said that we could skip a few throughout the semester without it having a negative impact on our grade.

Thursday, September 10, 2009

Beautiful Architecture Chapter 5: Resource-Oriented Architectures

This was the first reading that gave me at least as many questions to research as useful nuggets of information. My experience with SOAP web services is limited to what are basically RPCs via XML within a single application (don't ask). Given that, I haven't felt much of the pain that the author associated with SOAP, so I'm interested to do some more research and discover what the downsides are when you try to utilize SOAP in more complex ways.

One thing that confused me was the way that the author repeatedly said that REST has the benefit of allowing you to upgrade the back-end systems without breaking the clients, implying that this is not possible with SOAP. My limited knowledge of SOAP leads me to believe that you can change any implementation details that you'd like without impacting the clients because it is not too difficult to keep your XML responses looking the same. I'm interested to see the scenarios where this is not the case.

I haven't used REST before but have done quite a bit of reading on it since it's all the rage. This chapter finally tied together my loose and disconnected understanding of the pros and cons, and I feel that I walked away with a pretty solid knowledge of how to compare REST to similar forms of communications. For some reason, this was the first time that it really stuck that http://server/getemployees?type=salaried is just an RPC via URL, and you may as well use SOAP. REST uses the same location for all of its CRUD operations; the HTTP verb determines which action is taken.

I know from firsthand experience the benefits that can be gained from having the same data available at the same location in different formats depending on the request context. Too many times I've run into problems where what is supposed to be identical data is going to the screen, a graph, and an Excel document, but there always ends up being just a little more or a little less processing done in one format. The fact that there are different endpoints makes it easy to modify one while omitting the others. When the numbers don't match up, it can take way too long to track down the bugs.

I think the fact that REST has only four verbs is just about perfect in most scenarios. As the author mentions, most scenarios in industry involve information management, and POST, GET, PUT, and DELETE cover all of your CRUD operations. The author does make the point that SOAP has its place; it's better at invoking arbitrary behavior, but I find that REST would suit my needs 99% of the time. The few times that I need something with a semantic meaning that doesn't quite fit into the REST model, I'll handle them separately. There's no need to hinder the majority of cases for the exceptions, though. The four simple verbs also don't preclude you from having as much or as little processing as you like between the request and the data store. It's not like each GET request has to pull back a row straight from your database to be processed by the client.
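A toy Java dispatcher (my own sketch, not anything from the chapter) shows how the four verbs cover CRUD at a single resource location such as /employees/42, with no RPC-style verb in the URL:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch: one resource location, four verbs, full CRUD.
public class RestSketch {
    static final Map<Integer, String> employees = new HashMap<>();

    static String handle(String method, int id, String body) {
        switch (method) {
            case "POST":   employees.put(id, body); return "201 Created";
            case "GET":    return employees.getOrDefault(id, "404 Not Found");
            case "PUT":    employees.put(id, body); return "200 OK";
            case "DELETE": employees.remove(id);    return "204 No Content";
            default:       return "405 Method Not Allowed";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle("POST", 42, "Alice"));  // 201 Created
        System.out.println(handle("GET", 42, null));      // Alice
        System.out.println(handle("DELETE", 42, null));   // 204 No Content
        System.out.println(handle("GET", 42, null));      // 404 Not Found
    }
}
```

Nothing stops handle() from doing arbitrary processing between the request and the data store, which is the point I was making above.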

As an enterprise application developer, data management is my world. You could probably reduce almost every task that I work on to creating data or slicing and dicing it hundreds of different ways. Every technology has contexts in which it shines and others where it does not. The job of the architect is often to find the right tool for the job at hand. REST would be such a tool in my everyday life.

I was really interested in the author's point that it is better to pass around paths to the data rather than the data itself. I never really thought of how much easier an audit would be when you don't have to trace the path of the data through a maze of connections. I also liked how he pointed out that most people overlook the fact that the context of a REST operation provides all the information necessary to fully secure it. The manipulation of the data and the context in which it is done are orthogonal.

I loved the idea of using 303 responses from a server endpoint to redirect clients to the new location. The clients won't be broken when things get moved around, but you don't have to manage any kind of complex internal mapping of resources and endpoints.

Tuesday, September 8, 2009


My first two stream-of-consciousness reactions are "why bother?" and "would any real team ever use this?"

Integrating ArchJava into the Java language does have the benefit of keeping documentation and diagrams in sync with the code, which always seems to be a problem with code comments and formal docs. However, I know there are tools out there that can look at vanilla source code and generate a visualization of your dependencies and communication structure without the need to use custom language extensions. The paper says the following: "Automated tools could have gathered some of this connectivity information from the original Java program. However, these tools would require sophisticated alias analysis to support the level of reasoning about component instances that is provided by ArchJava’s communication integrity." Either I'm missing what they're saying, or the tools that I already use have "sophisticated alias analysis" because they do a great job doing everything ArchJava does and more from an analysis standpoint.

As far as restricting communication between components goes, can't you achieve the exact same functionality as the "out" keyword by making all methods private except those that are intended to be used by other components? I must be missing something, because that is very basic design to me. I think their reasoning is sound, and teams would think more about communication between components when initially connecting them because it is a more burdensome process with ArchJava. However, I can easily see the in/out/port trio being abused.
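Here's what I mean, as a hypothetical Java sketch of my own: plain visibility modifiers already restrict which methods other components can call, without ArchJava's port declarations.

```java
// A minimal sketch: the component's surface is its public methods;
// everything else is invisible to other components.
public class Parser {
    // The only method other components can call.
    public String parse(String input) {
        return tokenize(input).length + " tokens";
    }

    // Internal helper: no other component can reach this.
    private String[] tokenize(String input) {
        return input.trim().split("\\s+");
    }

    public static void main(String[] args) {
        System.out.println(new Parser().parse("restrict component communication"));
    }
}
```

What visibility alone can't express is direction or wiring between component instances, which I suppose is what ArchJava's ports are for; I'm just not convinced the extra machinery pays for itself.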

I also find it hard to believe that any team outside of academia would ever want to use this. Would you want to develop an enterprise application using a non-standard compiler and (maybe) JVM? I'm not going to be the architect to approve that.

I think the paper lost some credibility with me when they proposed glaringly obvious hypotheses such as "refactoring is easier when done in small chunks" and "if the implemented architecture doesn't match the desired architecture, you'll have to refactor it if you want it to match."

I don't know what problems of the "Making Memories" project or any other project this would solve that couldn't be solved easily with existing tool sets, decent team standards, and a review process. I like being able to automatically prevent and detect code smells such as Law of Demeter violations (one of my pet peeves), but I don't like the hoops that ArchJava makes you jump through, nor do I trust that the tool could work without the reviews and standards that teams should have anyway. Maybe I feel this way because the teams I work on are always less than ten developers, but I still think that there is tooling out there to support the same process without the need to change the language that developers are already comfortable with.

Monday, September 7, 2009

Beautiful Architecture Chapter 4: Making Memories

I wish every chapter in the book were like this one. I loved that the chapter discussed abstract, high-level architectural principles and then discussed how those principles were implemented in the real world! I've read so much strategy (though I still have a long way to go before I can even call myself a n00b of an architect) that it's great to finally get some of the tactics.

I found the layers of the architecture from bindings through the application facade quite interesting. The need for decoupling between the domain model and the UI is something that I've learned the hard way on many projects. As Fredrik Kjolstad mentioned in his blog post on this chapter, the pattern takes a lot out of the MVC playbook, and MVC + application facade = decoupled & testable layers.

I think that I'll definitely have to re-read the sections on bindings, properties, forms, and the application facade to really comprehend what each is doing, but I like the sound of the stack on my first read.

I did get the feeling that the example about capturing a rewards club member's club number and expiration date, where logic like "if this checkbox is selected, then enable these four other text fields" bleeds into the UI, was going to end up with that logic in the forms layer. This is definitely better than having it in the UI from the standpoint of testability, but my gut feeling is that business logic can still become scattered if some is in the forms layer and some is in the domain layer.

I also didn't comprehend the incredible power of the bindings layer that the author was so proud of, likely because I haven't had the misfortune of experiencing the headaches that such a layer would have prevented. The forms layer seems like enough abstraction between the screens and the app facade.

I was a little surprised to read that every form has its own application facade. There seemed to be a one-to-one relationship between screens, forms, and application facades, in which case a change to your domain model doesn't feel like it would be very shielded from the rest of the app. You'd end up changing as many files as if there were no abstractions and the UI was bound to the domain model instead of the application facade. On the other hand, it seems necessary for there to be a one-to-one mapping so that the facade can return the domain model projected and flattened just the way that particular form & screen need it. What am I missing here?
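My reading of the one-facade-per-form idea, as a hypothetical Java sketch (these names are mine, not the book's): the facade projects and flattens the domain model exactly the way one particular screen needs it, so the UI never touches domain objects directly.

```java
// Domain model (shared across the application).
class Customer {
    String firstName = "Ada";
    String lastName = "Lovelace";
    RewardsClub club = new RewardsClub();
}
class RewardsClub {
    String number = "RC-1234";
}

// One facade for one form: a flattened, screen-shaped projection.
class CustomerFormFacade {
    private final Customer customer;
    CustomerFormFacade(Customer customer) { this.customer = customer; }

    String displayName() { return customer.lastName + ", " + customer.firstName; }
    String clubNumber()  { return customer.club.number; }
}

public class FacadeSketch {
    public static void main(String[] args) {
        CustomerFormFacade facade = new CustomerFormFacade(new Customer());
        System.out.println(facade.displayName()); // Lovelace, Ada
    }
}
```

Written out like this, my concern is easier to see: rename a Customer field and every per-form facade that projects it changes too, which is the shielding question I'm asking above.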

The resiliency built into the application was impressive. I also liked the fact that it had a useful, perhaps unintended consequence. The fast and slow retry methods meant that a StudioServer could be taken off-line for an upgrade without all StudioClients having to stop working. As soon as the StudioServer was back up and responsive, the StudioClients would complete their transfers.

I liked how the team applied Conway's Law and created a DvdLoader for the PCS team to interact with so that their team could maintain full control of the DVD's layout.

This chapter also introduced another simple yet useful decomposition of architecture: what must it do and what boundaries must it work within?

The “4+1” View Model of Software Architecture

I definitely took away some useful things from this paper. There was no spot where I had a eureka moment or found something brilliant. Rather, I found it wonderful how the paper was able to take easy-to-understand ideas and turn them into an algorithm to follow. There have been many times on my projects when a component or process in the system seemed so obvious that explicit analysis or documentation did not feel necessary. However, more often than not, these simple things will lead to complications down the road when new people work on the component and/or as the application grows in complexity.

Another obvious idea that benefits from explicit analysis is breaking architecture down into components, containers, & connectors and also, later in the paper, into partitioning, grouping, & visibility. All are very simple concepts that everyone understands, but if you take a moment to sketch or analyze the system from those perspectives, as well as the 4+1 views, it can help bring obvious design flaws to light.

The paper mentioned that the logical architecture should be used to work with business stakeholders to analyze functional requirements. However, I feel that the diagrams presented here, where the fine-grained detail only goes down to the class level, may not be enough to help verify that all functionality has been implemented. The public API of each class should be listed, or each method should have a summary phrase. That would help the discussion with business stakeholders when you need to explain what each component/class is responsible for.

I liked the "+1" idea of walking through various use case-like scenarios to check the consistency & validity of the four other views. Another obvious yet great thing to explicitly do.

I will definitely be keeping this paper in mind on future projects, though, as the paper mentions, not all projects need all 4+1 views.

Wednesday, September 2, 2009

Beautiful Architecture Chapter 3: Architecting for Scale

Jackpot! Just the type of reading I'd been yearning for from this book. Though the last chapter was quite satisfying, it presented the information in a more abstract way than the Darkstar chapter and was mostly just preaching to the choir.

This chapter was very interesting to me because I found many useful similarities to problems I face as an enterprise developer, yet the problem was different enough to make it more engaging than my day job.

I thought the geographic partitioning idea was brilliantly simple. It's obvious to me that with more typical applications and data, users should connect to the server closest to them to help both decrease their latency and balance the load on the system. However, in the interactive and social world of online gaming, I found the idea of having geographical regions in the game correspond to different servers very clever.
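As a minimal sketch of that idea (the region names and server hostnames here are hypothetical, not from the Darkstar chapter), partitioning by in-game region rather than by the player's physical location might look like:

```python
# Hypothetical region-to-server routing: every player currently in the
# same in-game region is handled by the same server, so the social
# interactions in that region stay local to one machine.

REGION_SERVERS = {
    "harbor": "server-a.example.net",
    "castle": "server-b.example.net",
    "forest": "server-c.example.net",
}

DEFAULT_SERVER = "server-a.example.net"

def server_for(region):
    """Route a client to the server that owns its current game region."""
    return REGION_SERVERS.get(region, DEFAULT_SERVER)
```

The interesting property is that the partition key is game geography, not network geography, so two players standing next to each other in the game world always land on the same server no matter where they are in the real world.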

The design of the architecture seemed to do a great job of giving the Darkstar team the utmost flexibility in reconfiguring and reimplementing the system with no impact on the clients, though their abstraction ended up being slightly leakier than they had hoped. I found it interesting that, transparently to the game servers and clients, the system could easily change which Darkstar instances were handling different events based on load, latency, or any other factor.

There were a few big shockers for me. The first thing that surprised me is that their tasks are supposed to be very short-lived, with a maximum lifetime of 100ms by default. I would think that the overhead of having so many tasks needing to be distributed, loaded, and unloaded from different cores would be crippling. I was also amazed that the Darkstar team theorized that making each task and all data persistent would not impact their latency given enough cores and a set of tasks that are easily parallelizable. Although I thought the idea was clever from a fault-tolerance viewpoint, I didn't fully grasp why persistence was a prerequisite for parallelizability of the tasks, as the author stated toward the end of the "Parallelism and Latency" section: "Remember that by making all of the data persistent, we are enabling the use of multiple threads..."
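My rough understanding of the connection is that once a task's state lives in a shared durable store rather than in any one thread's memory, any worker can pick up any task. Here's an illustrative sketch of that property (this is not the Darkstar API; a real system would use transactions with conflict detection, and I'm standing in a lock-guarded dict for the persistent store):

```python
# Sketch: because task state is externalized to a shared store, the
# short-lived tasks can be handed to any of several worker threads.

import queue
import threading

store = {}                     # stand-in for the persistent data store
store_lock = threading.Lock()  # stand-in for real transactions
tasks = queue.Queue()

def worker():
    while True:
        key, fn = tasks.get()
        if fn is None:         # sentinel: shut this worker down
            break
        with store_lock:
            store[key] = fn(store.get(key))
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# 100 tiny tasks, each updating shared persistent state; no task
# cares which thread (or, in Darkstar's case, which node) runs it.
for _ in range(100):
    tasks.put(("counter", lambda v: (v or 0) + 1))
tasks.join()

for _ in threads:
    tasks.put((None, None))
for t in threads:
    t.join()

print(store["counter"])  # 100
```

If the state instead lived in thread-local memory, the scheduler would be forced to pin related tasks to one thread, which is (as best I can tell) what the author means by persistence enabling multithreading.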

Beautiful Architecture Chapter 2: A Tale of Two Systems

This chapter grabbed me out of the gate by not comparing building software to building a house like the majority of similes. I loved the comparison of the design and evolution of a software system to a city. Flows of data like streets that need to be neatly laid out at first and widened or rerouted over time. The different types of buildings. Some construction projects can be done by individuals, whereas more massive projects need the involvement of an architect and construction crew to plan and implement.

Ah, the Messy Metropolis. How I wish I was unfamiliar with thee. I liked the idea that the author presented here of drawing out the modules, dependencies, and data flows to easily see at a visual level what the system had become over time. Metrics about afferent and efferent coupling don't give you the same sense of flow and grouping. I liked that with the whole mess of a system drawn out, the author was more easily able to see when a "destination" was "geographically" nearby but a roundabout path had to be taken instead.
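For reference, the afferent/efferent coupling metrics mentioned above are simple to compute; what they miss is the spatial "flow" a sketch gives you. A quick sketch over a hypothetical module-dependency map (module names are made up), including Robert Martin's instability metric I = Ce / (Ca + Ce):

```python
# Afferent coupling (Ca): how many modules depend on this one.
# Efferent coupling (Ce): how many modules this one depends on.

deps = {                 # module -> modules it depends on (hypothetical)
    "ui":    {"logic"},
    "logic": {"data"},
    "data":  set(),
}

def coupling(module):
    ce = len(deps[module])
    ca = sum(module in targets for targets in deps.values())
    instability = ce / (ca + ce) if (ca + ce) else 0.0
    return ca, ce, instability

print(coupling("logic"))  # (1, 1, 0.5)
```

These numbers tell you *how* coupled a module is, but not whether two tightly coupled modules are "geographically" near each other in the design, which is exactly the insight the author got from drawing the system out.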

The personality aspect of the team is also an important factor that doesn't get as much press as it deserves. As the author points out in the next section, though, the hoarding of interesting features and development of modules lacking cohesion can often be avoided by pair programming and YAGNI.

On the other hand, coupling is something that does get a lot of press, but I feel like several projects that I've worked on lately have missed the boat. As the author states in one of his notes, "Good design takes into account connection mechanisms and the number (and nature) of inter-component connections." On many teams that I've worked on, the idea of a component sits at too high a level. The code will often be organized into just a few .NET assemblies/Java packages such as UI, business logic, and data access layer. We'll often do a good job of managing coupling between those entities, but I feel that coupling should be closely analyzed down to the class level in areas that are business-critical, complex, or very fluid. I've seen too many people thinking they were in Design Town when in fact they were barreling toward the Messy Metropolis at 100 mph.

Then we arrive at Design Town. I like how the author emphasized that upfront planning is necessary even in Agile methodologies. Consider this famously misinterpreted Einstein quote: "Make everything as simple as possible, but not simpler." While most people tend to focus on the first portion as justification for doing away with any and all planning and design, it is the second portion that is key. If you throw out all up-front planning in Agile, not even determining some basic modules, patterns, etc., you're stacking the deck against yourself.

I also liked the author's note that "An architecture helps you to locate functionality: to add it, to modify it, or to fix it." I find that the key part of architecture that helps developers with day-to-day tasks is knowing where to insert and find methods in the system, no UML sequence diagrams required.