
WordPress

Wordρress

If the first version has a capital “P”, then my blog host has decided to usurp my editorial control by default.  Wonderful.

Evans talks about a “Deep Model” when he discusses refactoring, and states:

A deep model provides a lucid expression of the primary concerns of the domain experts and their most relevant knowledge while it sloughs off the superficial aspects of the domain.

Most modeling at least starts out as a “find the nouns and verbs” game, but the key is that it shouldn’t stop there. I think this overlooked point is the primary reason why refactoring to a deep model is difficult. A developer has to listen very carefully to domain experts in order to identify some of the subtle behaviors that may be taken for granted by the expert. Most applications are not physical simulations of their constituent nouns, so it makes perfect sense that the best model will not simply be concrete nouns and verbs, but representations of underlying relationships and behaviors.

Particularly with OO programming, many developers have a habit of viewing physical objects as model objects, overlooking the possibilities of behavioral objects. Evans includes Constraints, Processes, and Specifications (predicates) as good examples of explicit behavior. Essentially, abstracting procedural code into behavioral models does two things:

  1. It provides flexibility in replacing and augmenting behaviors, which will in turn provide flexibility for domain growth.
  2. It raises the importance of the behavior by naming it and giving it a place in architectural documentation or diagrams, where previously it would only be a few sentences describing an otherwise-anonymous process nested within API documentation.
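To make the Specification idea concrete, here is a minimal sketch of my own (the Invoice and the delinquency rule are hypothetical, not an example from Evans’ text):

    import java.time.LocalDate;

    // Hypothetical invoice type, just enough to make the sketch self-contained.
    class Invoice {
        boolean paid;
        LocalDate dueDate;
        Invoice(boolean paid, LocalDate dueDate) { this.paid = paid; this.dueDate = dueDate; }
    }

    // A Specification turns a domain predicate into a named, first-class object.
    interface Specification<T> {
        boolean isSatisfiedBy(T candidate);
    }

    // "An invoice is delinquent if it is unpaid past its due date" now has a name
    // and a home in the model, instead of hiding in an if-test inside a loop.
    class DelinquentInvoiceSpecification implements Specification<Invoice> {
        private final LocalDate today;
        DelinquentInvoiceSpecification(LocalDate today) { this.today = today; }

        public boolean isSatisfiedBy(Invoice candidate) {
            return !candidate.paid && candidate.dueDate.isBefore(today);
        }
    }

Swapping in a different delinquency rule then means writing another Specification rather than editing conditionals scattered across the codebase, which is exactly the flexibility point above.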

So why does this make refactoring hard? Because it’s design. Most refactoring discussions are exclusively code-level (and machine-assistable, as implied by Danny Dig and other refactoring researchers). The level of refactoring that Evans focuses on is not code “cleanliness” or any sort of mathematical graph-partitioning problem. It is the expressiveness of the model itself, and the process of converting a Nouns-n-Verbs Model into a Deep Model.

Software developers are rarely domain experts, so the biggest barrier is knowledge sharing and communication. Without domain experts pointing out the weakness, awkwardness, or inflexibility of a design, software developers are left to figure it out themselves, more by chance (a lucky modeling guess) or coincidence (mechanical refactoring clarifies the model as a side effect) than by actual knowledge (domain research).

The bulk of the text in these chapters (especially 4 and 6) revolves around separating “business logic” from … everything else. Chapter 4 discusses the concept of a Layered Architecture and how it furthers DDD. I would consider this a rather basic, natural progression for growing developers. Even with completely ad-hoc development, layers will naturally coalesce:

  • Infrastructure: Most programs are based upon collections of libraries, because it reduces the effort required to get something done.
  • User Interface: This gets a bit iffy at times, because the UI code is often abstracted into part of the Infrastructure (e.g. I am writing a Swing application), and really that’s only half of the battle. It’s less natural to separate GUI API-calling code from the application, but the use of a library at all is a nudge in the right direction.

The description of Factories and Repositories seems rather extreme, but it does follow in line with unit testing. Testing a domain object’s behavior should be as separated as possible from testing the object’s persistence, because the persistence is really only a means to an end. From personal experience, I also know that it’s a royal pain in the ass to set up test harnesses for a large DB schema, and the ability to separate that from all of the more meaningful (read: less infrastructural) testing is a definite boon.
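As a sketch of why that separation helps (the Order and OrderRepository names here are hypothetical, not from the book): the domain code depends only on a repository interface, and behavioral unit tests substitute an in-memory implementation, so no DB schema ever has to be stood up for them.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical domain object, trimmed to the essentials.
    class Order {
        final String id;
        Order(String id) { this.id = id; }
    }

    // The domain layer sees only this interface; persistence is a detail behind it.
    interface OrderRepository {
        void add(Order order);
        Order findById(String id);
    }

    // Test double: behavioral unit tests use this instead of a JDBC/ORM-backed
    // implementation, so there is no schema or test harness to maintain.
    class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Order> store = new HashMap<String, Order>();
        public void add(Order order) { store.put(order.id, order); }
        public Order findById(String id) { return store.get(id); }
    }

The real, database-backed implementation still gets its own (slower, infrastructural) tests, but everything else can run against the in-memory version.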

In a way, I see the separations as a sort of two-dimensional cut. Layering the software provides several strata to handle, and separating domain “business logic” from “domain model persistence” slices the domain stratum into the meat and bones of a product, respectively.

Chapter 5 focuses mostly on the pragmatic aspects of domain modeling: namely, how can I apply these “pie in the sky” ideas to a real project? While it only touches on them lightly (addressing just a few technical issues of realizing a domain model), it is nice to see, and I hope to see more in later chapters. Without concrete grounding, no design concept can be adopted in the real world.

I’m about halfway through chapter 2, and the discussion on vocabulary has really spoken to me. Quite a few of my accomplishments (or lack thereof) at work have been related to things Evans has suggested:

Excessive Frameworks

A nasty habit I’ve seen (in myself and others) is the belief that a generic framework will obviate domain knowledge. I’ve written plenty of small frameworks, and I’m convinced that the only good framework is a domain-specific framework – anything else is best left as a library.

Modeling Vocabulary

I’ve also run into cases where a domain model has grown without any experts (not because no experts were available, but because the domain wasn’t entirely focused yet). Vocabulary was fickle:

  • Should a user’s computer (identified by a MAC address) be “hardware”, or “device”, or “computer”, or something else?
  • For that matter, should a user be instead “customer” or “client”?
  • And what about the customer’s personal data: “account” or “profile”? A customer has a username and password, which sounds like “account”, but what if the domain grows and a customer can have multiple usernames tied to a single payee?
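Purely to illustrate why the wording matters (a hypothetical sketch, not the model we actually settled on): if “account” names the credential and “customer” names the payee, the multiple-usernames case becomes a straightforward one-to-many.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical names: here an Account is a login credential,
    // while a Customer is the person who actually gets billed.
    class Account {
        final String username;
        final String passwordHash;
        Account(String username, String passwordHash) {
            this.username = username;
            this.passwordHash = passwordHash;
        }
    }

    class Customer {
        final String billingId;
        final List<Account> accounts = new ArrayList<Account>();
        Customer(String billingId) { this.billingId = billingId; }

        // The growth case from the last bullet: several usernames, one payee.
        void addAccount(Account account) { accounts.add(account); }
    }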

The end result of a shifting domain and insufficient forethought into the model vocabulary is that some model names have drifted:

  • Between “device” and “hardware”, it made most sense to use a term that nontechnical users may be familiar with: “device” was chosen. While in-page text was simple to convert, some web URIs still refer to the original term, “hardware”.
  • Between “customer” and “client”, it made sense to follow the nomenclature of the underlying billing software: “customer” was chosen. However, the product was initially developed to be as independent as possible from the billing software, so there are scattered references to Client in the source code. The most awkward part is seeing a block of code that starts with cust = new Client(...)
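A hedged sketch of the shape of that awkwardness (the fields and the SignupService are hypothetical; the Client/customer mismatch is the real part):

    // The class kept its original name, Client, while every variable name and
    // UI string has since moved to the new vocabulary, "customer".
    class Client {
        final String customerId;
        Client(String customerId) { this.customerId = customerId; }
    }

    class SignupService {
        Client register(String customerId) {
            Client cust = new Client(customerId);   // the awkward line from above
            return cust;
        }
    }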

Before I lose track of these links, here’s what I’ve been looking at for architecture diagram generation:

Ok, so I should specify right off the bat my opinions about Java. I don’t like it, and many of its shortcomings are more pronounced due to my experience with Python. I don’t like forcing a coupling between object and file hierarchies. I don’t like silly typing rules casting everything under the sun to an Object. I don’t understand the hype around JDBC: I expect my language to be bindable to my RDBMS, and I know that any advanced database programming will require sacrificing portability. I don’t like hearing advocates brag about the “write once, run anywhere” mantra when I know that not all JVMs (or JNI modules) are equal, or even possible. I know Python’s not perfect either, but for the sake of my argument let me use it simply as a foil:


Why is software reuse so hard? My initial response was to think about the human tendencies of pride and protectionism. Creators have a natural tendency to want to keep ownership of their creations. “Intellectual property” is all the rage right now, with people screaming, “I thought of that first! You aren’t allowed to copy it, because it’s such a good idea!” The whole idea of proprietary software hinges on not sharing on a global scale.

The book is more concerned with software reuse within a company; namely, Software Product Lines. The authors describe how proactive reuse is difficult (because it’s Big Design Up Front) and typically ignored. Reactive reuse, on the other hand, is a basic point of refactoring and the whole DRY principle: if something is repeated, separate out the commonalities to reduce repetition. At some level, it’s a question of randomness and compressibility: software is incompressible if it has no repetitions, and refactoring is a form of compression.
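A tiny before-and-after of my own (hypothetical validation code) to make the compression analogy concrete:

    // Before: the same trim/lowercase/reject-blank logic is pasted in two places.
    class BeforeRefactor {
        String cleanUsername(String raw) {
            String s = raw.trim().toLowerCase();
            if (s.isEmpty()) throw new IllegalArgumentException("blank username");
            return s;
        }
        String cleanEmail(String raw) {
            String s = raw.trim().toLowerCase();
            if (s.isEmpty()) throw new IllegalArgumentException("blank email");
            return s;
        }
    }

    // After: the repetition is "compressed" into one shared definition, which is
    // the DRY move described above, applied reactively once duplication shows up.
    class AfterRefactor {
        private String normalize(String raw, String what) {
            String s = raw.trim().toLowerCase();
            if (s.isEmpty()) throw new IllegalArgumentException("blank " + what);
            return s;
        }
        String cleanUsername(String raw) { return normalize(raw, "username"); }
        String cleanEmail(String raw) { return normalize(raw, "email"); }
    }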


One strength I saw in the ATAM was that it identifies the need for relationships between the quality attribute requirements in a project. The book explains this as a Utility Tree, saying that requirements can be classified into a hierarchy (based on a “problem being solved” link). Given that the authors have worked on more real projects than I have, I would guess that they have experience with what needs to be modeled.

However, I don’t really think a simple tree is sufficient. Certain quality attributes could be diametrically opposed, and a basic graphical display of that would be useful. Maybe a tree to represent the “composition of requirement classifications” and an additional (directed?) graph to represent other architecture-driving relationships. If it could be seen that one “set” of requirements was directly associated with a number of other sets of requirements, it would make sense, then, that the focal point of the requirements would be the focal point of the architecture.
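A rough sketch of the structure I have in mind (the names are mine, not from the ATAM): each node keeps its tree children for the composition hierarchy, plus separate directed edges for the extra architecture-driving relationships.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical node type: "children" forms the utility tree proper, while the
    // "conflictsWith" and "supports" edges form a directed graph layered on top.
    class QualityRequirement {
        final String name;
        final List<QualityRequirement> children = new ArrayList<QualityRequirement>();
        final Set<QualityRequirement> conflictsWith = new HashSet<QualityRequirement>();
        final Set<QualityRequirement> supports = new HashSet<QualityRequirement>();

        QualityRequirement(String name) { this.name = name; }

        void addChild(QualityRequirement child) { children.add(child); }

        // e.g. sub-second response time vs. end-to-end encryption
        void markConflict(QualityRequirement other) {
            conflictsWith.add(other);
            other.conflictsWith.add(this);
        }

        void markSupports(QualityRequirement other) { supports.add(other); }
    }

A node whose conflict and support edges fan out to many other subtrees would then stand out as exactly the kind of architectural focal point described above.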


Last night I spent a few hours browsing through source code and documentation for a number of projects: Gaim, POV-Ray, Angband, and Freeside. While I didn’t see any documentation in those projects representative of the examples shown in the text, I did see one similarity in all of them. The major pieces of documentation were all well-targeted at a specific audience.

  • Gaim mostly included a FAQ that is useful to read before downloading the source code: What does the project do? Who should be interested in the source code, and why? What is the process or point of contact for getting involved in the project?
  • POV-Ray included tons of documentation directed towards the end user. While it isn’t a substitute for developer documentation, the entire architecture of POV-Ray is based around parsing inputs and geometric rendering of outputs, so many of the architectural decisions will be essentially “guessable” by examining the user interface.


Attribute-Driven Design (ADD) is a process for developing a software architecture. It is presented within the context of the Evolutionary Delivery Life Cycle (EDLC), a software lifecycle built around architecture. The EDLC describes two feedback loops:

  • Concept – Requirements – Architecture and high-level Design
  • Iterative Development – Delivery – Feedback – Redesign from Feedback

The most critical part of the lifecycle, as I see it, is that the iterative development is targeted at localized changes. The feature requests that would be architectural (i.e., “no no, it has to run on this QNX cluster, not Linux!”) are expected to be taken care of prior to initial development. This is certainly more financially feasible, given the cost of changing architectural concepts after development has started, but it does place a larger burden on the architect (get the high-level stuff right, or the low-level stuff will be constantly struggling).
