Friday, September 28, 2012

Some Examples on Wabi Sabi Design


"Today’s problems come from yesterday’s solutions." - Peter Senge



I took three examples from my (previous) job to better explain what I mean by wabi-sabi design, that is, unfinished and unpolished design.
The code I show here is as close as I can get to showing you actual production code. I changed some things for obvious reasons and also to make it clearer. The problems and the solutions are real ones, and the core is currently running in production.
Please also note that I'm using the first person singular for all decisions, but most of them were discussed with the whole team; still, I was the Tech Lead, so the responsibility has always been mine.

Case #1: micro-services response


The problem:
Our architecture is based on micro-services implemented in OSGi. Think something like SOA, but with internal services dynamically bound, so they can be started and stopped at any moment.
Since the servlet layer needs to call the business logic, we needed to define an interface for all the responses.
I didn't want to expose the actual business logic classes to the servlets, because I thought they could change quite often and I didn't want to change the servlets each time.

The solution:
I decided to expose to the servlets the response already serialized, with its type and hash (for the HTTP ETag) and a status field to indicate any error during the request processing (HTTP status errors).
So all services exposed by the Business Logic respond with this interface (or a subtype).
A similar interface was created for the persistence layer services called by the Business Logic.
My idea was also to collect and pass all the responses from the lower layers to the upper layers, so that we could know exactly what went well and what went badly in each request.

public interface ServiceResponse {

    public int getStatus();

    public byte[] getResponseContent();

    public String getContentType();

    public String getContentHash();
    
}


Typical usage inside the servlet:

protected ServiceResponse prepareResponse(BLService contentService,
        ServiceRequest request) {
    
    ServiceResponse checkRequestResponse = checkForInvalidRequest(request);

    if (checkRequestResponse.getStatus() != SC_OK) {
        return checkRequestResponse;
    }

    try {
        return contentService.processRequest(request);
    } catch (Throwable t) {
        log.error("Unexpected error: " + t.getMessage(), t);

        return createErrorResponse(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, t.toString());
    }

}


protected void manageResponse(HttpServletResponse httpResp, BLService contentService, ServiceRequest request) throws IOException {

    ServiceResponse response = prepareResponse(contentService, request);
    int status = response.getStatus();
    httpResp.setStatus(status);

    String etagHeaderValue = response.getContentHash();
    if (!etagHeaderValue.isEmpty()) {
        httpResp.addHeader(HEADER_E_TAG, etagHeaderValue);
    }

    addMandatoryHeaders(httpResp, response);

    if (status != SC_NOT_MODIFIED) {
        copyResponseContent(httpResp, response, request.isGzipAccepted());
    }
}
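
The helper methods are not shown here; just to give an idea, a minimal sketch of how copyResponseContent could look (not the actual production code; it only uses the standard javax.servlet and java.util.zip APIs, imports omitted as in the other snippets):

protected void copyResponseContent(HttpServletResponse httpResp,
        ServiceResponse response, boolean gzipAccepted) throws IOException {

    // the content is already serialized by the business logic,
    // so here we only write it out, gzipped if the client accepts it
    byte[] content = response.getResponseContent();
    httpResp.setContentType(response.getContentType());

    if (gzipAccepted) {
        httpResp.addHeader("Content-Encoding", "gzip");
        GZIPOutputStream gzipOut = new GZIPOutputStream(httpResp.getOutputStream());
        gzipOut.write(content);
        gzipOut.finish();
    } else {
        httpResp.setContentLength(content.length);
        httpResp.getOutputStream().write(content);
    }
}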


The outcome:
For the first servlets the design worked well. But as soon as our requirements became more complex, with binary APIs, encodings and special cache management, this solution showed us all its limits.
We had to subclass the main interface ServiceResponse to add more data, and for example the addMandatoryHeaders method had to check which headers to add to the response according to these extended responses.
Now, what strikes me as my main error here is that there was no need at all to make all the service responses uniform.
It would have made much more sense to define different interfaces for services which return binary data (say images), rather than collections of objects (in JSON or XML), or just responses for other web-services (like a proxy).
The idea of simplifying reality into an archetypical "Service Response" was doomed to fail.
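
Just to make the idea concrete, something along these lines (a purely hypothetical sketch with invented names; we never actually wrote these interfaces):

// Hypothetical sketch: one interface per kind of service response,
// instead of a single generic ServiceResponse.
public interface BinaryContentResponse {

    public int getStatus();

    public byte[] getContent();

    public String getMimeType();

    public String getContentHash();
}

public interface ObjectCollectionResponse<T> {

    public int getStatus();

    public Collection<T> getItems();
}
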
But we know that design, especially software design, cannot be perfect. So let's ask ourselves whether this design, which didn't work in the long term, at least gave us some advantages at the start. If that were the case, I wouldn't consider it a failure. I always prefer a bad design that helps me deliver today over a better one tomorrow.
Alas, here the answer is negative. Actually, trying to abstract the "ideal response" forced us to change the code several times in order to have something generic enough for all the services.
In the end we never implemented my idea of collecting all the responses from the lower layers, because well constructed logs solved the trouble-shooting problem in a much simpler way.
And by the way, while I think my original constraint is still valid (servlets shouldn't know about business logic classes), a much simpler solution would have been just to mark those classes with an interface, so that the servlet layer could serialize them to XML or JSON.
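
Roughly like this (again a hypothetical sketch, with invented names):

// Hypothetical sketch: an empty marker interface is the only thing the
// servlet layer knows about the business objects.
public interface BusinessResult {
    // intentionally empty marker
}

// a normal business logic class, free to change without touching the servlets
public class DeviceInfo implements BusinessResult {

    private String id;
    private String name;

    // getters, setters, etc.
}

The servlet layer would then serialize any BusinessResult to JSON or XML with a generic mapper, without ever depending on the concrete classes.
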
We learned a lesson here: general abstractions are attractive but dangerous. Try to avoid them and look instead at the business reality.

FAIL!




Case #2: properties keys


The problem:
Each feature (or sub-application) of the program needs several configuration properties. Many properties also have a related profile: for example, it's possible to define a whitelist of users, for testing purposes, which don't get authenticated on the backend, but this list is different according to the calling client and country (here represented by the Profile class).
As time passed we had more than one hundred of them in a monster .properties file, plus several other files related to each profile.
This was getting close to unmanageable, and during deploys there were often errors due to the merge of these property files.
Moreover, some of the keys were now obsolete, but to be sure they were safe to remove we needed to do a full grep over all the code. Finally, there is always the danger of a typo somewhere in the code that can create hard-to-fix bugs.

The solution:
Inspired by a post on the GOOS mailing list, we reorganized them using a special interface to keep together the information about the property key; then we defined every bunch of related properties in a special enum which implements the interface.
As soon as we started to replace the old System.getProperty() calls, we realized we could also use the same interface to implement some more checks, like whether the property is mandatory and whether it can be accessed with a profile or not.

public interface PropertyKey {
    public String get();

    public boolean isMandatory();

    public String get(Profile profile);
}


public enum StubsPropertyKey implements PropertyKey {

    STUBS_PATH("stubs.path"),
...;

    private String key;

    StubsPropertyKey(String key) {
        this.key = key;
    }

    @Override
    public String get() {
        return key;
    }

    @Override
    public boolean isMandatory() {
        return true;
    }

    @Override
    public String get(Profile profile) {
        throw new RuntimeException("Property not sensible to Profile " + key);
    }
}


public enum WhitelistPropertyKey implements PropertyKey {
    CONSUMER_WHITELIST("consumer.whitelist", false),
    BUSINESS_WHITELIST("business.whitelist", false),
...;

    private String key;
    private boolean mandatory;

    WhitelistPropertyKey(String key, boolean mandatory) {
        this.key = key;
        this.mandatory = mandatory;
    }

    @Override
    public String get() {
        throw new RuntimeException("Profile is mandatory for property " + key);
    }

    public boolean isMandatory() {
        return mandatory;
    }

    public String get(Profile profile) {
        return profile.getPrefix() + "." + key;
    }
}

class Properties {

...

    private static String getProfileProperty(Profile profile, PropertyKey key) {
        // profile-sensitive keys build their full name from the profile prefix
        String keyValue = key.get(profile);

        String str = System.getProperty(keyValue);
        if (str == null && key.isMandatory()) {
            throw new MissingPropertyException("A mandatory property is not defined: " + keyValue);
        }
        return str;
    }
}


In the code we had to replace every place where there was a System.getProperty. So code like this:

...
String whiteList = System.getProperty(profile.getPrefix() + ".consumer.whitelist");
String stubsPath = System.getProperty("stubs.path");
...

became something like this:

...
String whiteList = Properties.getProfileProperty(profile, WhitelistPropertyKey.CONSUMER_WHITELIST);
String stubsPath = Properties.getProperty(StubsPropertyKey.STUBS_PATH);
...


The outcome:
The first refactoring was quite boring and tedious: looking for all the System.getProperty() calls and creating the new property enums.
Anyway, we did one module at a time and it didn't take more than one hour per module. While doing this we also found some minor potential bugs and some poorly handled error conditions. So not much time wasted, actually.
After we finished the first refactoring, we realized it had become much easier to add and remove properties. We also realized we could now add custom property types for specific values, like integers, comma-separated lists of strings, URLs, dates, etc., all with proper checks and friendly error messages. Moreover, unit tests became easier and less brittle, using mocked properties instead of setting and resetting System ones.
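
For example, a typed key could look more or less like this (a sketch of the direction, with an invented property name, not the exact code we ended up with):

// Sketch: an integer property that parses and validates its own value.
public enum TimeoutPropertyKey implements PropertyKey {

    BACKEND_TIMEOUT_MS("backend.timeout.ms");

    private String key;

    TimeoutPropertyKey(String key) {
        this.key = key;
    }

    @Override
    public String get() {
        return key;
    }

    @Override
    public boolean isMandatory() {
        return true;
    }

    @Override
    public String get(Profile profile) {
        throw new RuntimeException("Property not sensible to Profile " + key);
    }

    // typed accessor on top of the plain string value
    public int getIntValue() {
        String raw = System.getProperty(key);
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new RuntimeException("Property " + key + " must be an integer, but was: " + raw);
        }
    }
}
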
As in all good redesigns, each further step of the solution emerged naturally as we finished the previous step.
After several "steps" of refactoring (spread over many sprints), the design is still far from perfect: there are duplications around and not all types are correctly addressed. But that's not a problem, since it can be refactored further when the necessity arises.
The good decision here, I think, was to choose a simple solution to a clear and specific problem, making sure it was also open to further improvements. Unlike in the first example, we didn't try to generalize the best solution for the whole system from the beginning.
After just a couple of hours of coding we started getting benefits, whereas planning a complete redesign of the property management would have required several days just for the analysis.

SUCCESS!




Case #3: xml mapping


The problem:
We needed to read some XML files "published" by an editorial tool to create our responses. The usual Java solution is to use some xml2java code generator like Axiom, or some XML mapping library like XStream, to create Java Beans and then use them.
I was uncomfortable with this approach because I didn't want to share classes between modules (interfaces are ok), and not all our XMLs were related to business objects: some were more similar to configuration files.
Now, the problem was how to pass the information if we were not going to use beans. One choice, soon discarded, would have been passing the whole XML as a string and then mapping it in the business logic module; a better one would have been passing the DOM tree.
The problem with the DOM tree is that it's quite a complex structure, and there are some incompatibilities between DOM documents produced by different libraries (by the way, there are way too many of them in the Java world).
Another important consideration was that in that period we were under pressure to deliver soon.

The solution:
I decided to implement the simplest thing that works, where simple here means "simple to understand and integrate" and not "with as little code written as possible". In the persistence layer we translated the DOM into a map of strings, storing each XML node in a map using its XPath as the key. Because our map had to be immutable, rather than using java.util.Map I decided to create a new minimal interface with only the two methods we actually needed:

public interface MetadataMap {

    public String get(String key);

    public Set<String> keySet();

}
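
Just to give an idea of the translation itself, the persistence layer did something in this spirit (a simplified sketch with an invented method name, assuming dom4j as in the later examples and ignoring details like repeated nodes; java.util and dom4j imports are omitted as in the other snippets):

// Simplified sketch: store the text of every leaf element in a plain map,
// keyed by its XPath, and expose it through the immutable MetadataMap interface.
public static MetadataMap buildMetadataMap(Document doc) {

    final Map<String, String> values = new HashMap<String, String>();

    for (Object o : doc.selectNodes("//*[not(*)]")) {   // leaf elements only
        Node node = (Node) o;
        values.put(node.getPath(), node.getText());
    }

    return new MetadataMap() {

        @Override
        public String get(String key) {
            return values.get(key);
        }

        @Override
        public Set<String> keySet() {
            return Collections.unmodifiableSet(values.keySet());
        }
    };
}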


In some cases one map was not enough, so we added a Map of MetadataMaps:

public interface MapOfMetadataMaps {

    MetadataMap get(String key);

    Set<String> keySet();
}



This worked well for the first sprint, but later we also needed to take XML attributes into consideration, for cases like this:


<files profile="GB" published="20120708">
    <file height="30" mimetype="PNG" width="30">a73b7db6ae258646a6041184a871a6d1</file>
    <file height="40" mimetype="PNG" width="40">de487ae39ad75dde6d67dee836fe30f7</file>
    <file height="60" highcontrast="true" mimetype="PNG" width="60">0ee1149c303b9e583b9bddb1954460c8</file>
    <file height="80" highcontrast="false" mimetype="PNG" width="80">7e05a6ab1035b457d9c338d52208e834</file>
</files>


So we added a new method to return the value together with its attributes:

public interface MetadataMap {

    public String get(String key);

    public AttributesInformation getWithAttributes(String key);

    public Set<String> keySet();

}

public interface AttributesInformation {

    String getValue();

    String getAttribute(String attributeName);

}


Used like this:

...
for (String imageKey : imagesMap.keySet()) {
    AttributesInformation imAttr = imagesMap.getWithAttributes(imageKey);

    String width = imAttr.getAttribute("width");
    String height = imAttr.getAttribute("height");
...
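
A simplified sketch of how getWithAttributes could be implemented on top of dom4j (hypothetical, not the production code; it assumes the implementation keeps the parsed dom4j Document in a document field):

// Sketch: look up the element by XPath and read its attributes lazily.
public AttributesInformation getWithAttributes(String key) {

    final Element element = (Element) document.selectSingleNode(key);

    return new AttributesInformation() {

        @Override
        public String getValue() {
            return element.getText();
        }

        @Override
        public String getAttribute(String attributeName) {
            return element.attributeValue(attributeName);
        }
    };
}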



This was OK for the time being, but it clearly indicated that at least some of the XML needed to be translated into a more structured entity.

So the next step, implemented in the next version when we had some time, was to create Entity interfaces and then implement them on top of the map.

For example, this XML:
<devices>
    <device>
      <id>HTC_Legend</id>
      <name>Legend</name>
      <tacs>
        <tac>35208304</tac>
        <tac>35431603</tac>
        <tac>35577904</tac>
        <tac>86038200</tac>
      </tacs>
    </device>
</devices>


will be used through the new Entity map like this:


public interface MetadataEntityMap<T extends Entity> {

    public T get(String key);

    public Set<String> keySet();
}



public interface Device extends Entity, MetadataMap {

    public String getName();

    public List<String> getTacs();

}


private static Device createDeviceFromMap(final MetadataMap deviceMap) {

    return new Device() {

        private MetadataMap map = deviceMap;

        @Override
        public String getName() {
            return map.get("name");
        }

        @Override
        public List<String> getTacs() {
            return createStringListFromEntries(map, "tacs");
        }

        @Override
        public String getId() {
            return map.get("id");
        }

        @Override
        public String get(String key) {
            return map.get(key);
        }
...
    };
}


This was really done in half an hour and worked flawlessly from the first moment. In the following days we "upgraded" all the other references, so that finally we were able to remove the MetadataMap implementation:


private static Device createDeviceFromXmlElement(final Element e) {

    Device device = new Device() {

        final String id = e.selectSingleNode("id").getText();
        final String name = e.selectSingleNode("name").getText();
        final List<String> tacs = createStringListFromNodes(e.selectNodes("tacs/tac"));

        @Override
        public String getName() {
            return name;
        }

        @Override
        public List<String> getTacs() {
            return tacs;
        }

        @Override
        public String getId() {
            return id;
        }
...
    };

    return device;
}
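
The helper used above is more or less something like this (a simplified sketch, assuming dom4j nodes):

// Sketch: collect the text of each selected node into a list of strings.
private static List<String> createStringListFromNodes(List<?> nodes) {

    List<String> values = new ArrayList<String>();
    for (Object o : nodes) {
        values.add(((Node) o).getText());
    }
    return values;
}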



The outcome:
At the end of the day, for some XML files the MetadataMap representation was still working fine, while others were easier to use with a map of value objects.
What started as a simple hack to allow us to deliver in time really became an asset of our software. Each time there was a new version we were asked to move some XML files, add new ones and make others more complicated. This would have been a nightmare with the usual Java Bean approach, but with our solution we were able to quickly change the code to handle the new requirements in the business logic only, without impacting the persistence layer at all. At the same time we kept improving the design, so subsequent changes became even faster to implement.

SUCCESS!



ps. I'd be grateful if you report any typo or grammatical error you happen to notice in my posts.

Monday, September 24, 2012

William Morris and the Red House

Yesterday I visited the Red House in Bexleyheath, designed by William Morris.

There is a lot of discussion about the Software Craftsmanship Manifesto, but only a few developers know about the original Arts and Crafts movement and its founder.

The house, sadly, is only a shadow of what it must have been when the Morrises lived there. Nevertheless, walking inside you can see and feel how everything was planned for a reason and beautifully crafted. This kind of incredible care for details really gives you a feeling of the meaning of Craftsmanship.

This visit made me very excited but also very humble about Software Craftsmanship. Can we ever hope to reach those levels in our job?

In the living room, over the fireplace, is inscribed Morris' motto:
ARS LONGA VITA BREVIS (there are so many things to learn and master, and time is fleeting)
which I think is a nice thing to be reminded of when watching television (though I know it had not yet been invented in the Victorian age).

The garden is also a thing of beauty, but yesterday it was pouring with rain and we cut the walk short.

Incidentally, although very different, I think there is a common feeling between Japanese Wabi-Sabi and Victorian Arts and Crafts. Simple, unimposing, natural are adjectives that can be applied to both. Can they be applied to good code too?

Now some pictures:








Wednesday, September 19, 2012

Wabi Sabi Design


This month I reached another turning point in my career. After more than 2 years, I left the role of technical lead for a big project in a huge enterprise to start a new career in a nimble and dynamic company.

Looking back at these crazy months I decided to do a kind of public summary of my experience.
In this period I refined my design and refactoring skills while fighting against insane requirements on a very big code base. On top of this, we were using a technology (OSGi) which I didn't know at all at the beginning.
In these 2 years I took many design decisions: some worked well, some worked badly, and a few worked much better than I anticipated.

Now, for sure it's easy to be wise after the event, but I asked myself if there was some common trait shared among all my bad decisions (or good decisions, for that matter)...
Well, after the analysis my answer is no. At least I'm not able to discern anything like that.

But the real big lesson has been that I don't believe anymore in such a thing as the final, ultimate, super-duper design. Sprint after sprint, I saw things that seemed very good solutions at the beginning become a nightmare to change and maintain. Instead, many half-baked designs I was a bit ashamed of proved with time to be rough gems, very easy to change and adapt to new requirements.

As time and sprints passed, I found this fact affected the way I write my code. I no longer aim for a finished, polished design; instead I go for something that I call "wabi-sabi design".

---

In Japanese culture wabi-sabi is a concept derived from Buddhism. Literally it means something like "sober and patinated", but we can translate it as "the beauty of things imperfect" or "refined elegance".

A picture is worth more than a thousand words, so here are three top examples of wabi-sabi.
The imperial villa of Katsura (compare it to the contemporary Hampton Court Palace)

A Raku-style tea bowl (compare it with a Meissen tea cup)


A tea-house (no comparison here)


Incidentally, note that although it may seem very similar to modern design, wabi-sabi is quite different from, for example, Minimalist ideals. The goal here is not to eliminate decoration but to embrace the impermanence of all things.

---

Now, if there is something that reminds me of impermanence, it's our requirements!
So what does this mean in practice? Well, it's hard to explain, but the idea is to let your design be solid on the fundamentals but unfinished in the details.
A quick check list could be:
  • Don't over-refactor it.
  • Don't impose wide-scope design constraints.
  • Don't lose flexibility pursuing perfection!

I think there are at least three best-practices in the agile zoo which have a strong relation with wabi-sabi design.

One of the best pieces of advice I ever received about software architecture came from Mary Poppendieck at the XP conference in 2005. She explained how the best moment to take a decision is always the Last Responsible Moment. Practicing LRM taught me the virtue of patience and the art of designing open solutions. In this way you can defer taking a decision about facts which are still unclear.
But still, until recently, I considered LRM a temporary thing. Surely there will be a moment in which everything is clear and defined and we can take all the decisions once and for all!
Well, I realised that in all my 18 years of experience I never met such a moment. Except when a project is cancelled, that is.
So are we designing for the moment of the project's cancellation?
I think what LRM is really telling us is to start appreciating the unfinished design now, because there will never be a finished design in a lively project.

Wabi-sabi design also works very well with YAGNI, as you can imagine. The easiest way to keep things simple is not to start at all on things that you don't really need. How many times do developers write lots of complex code to solve complex problems which don't exist outside their (brilliant) minds?
Or, on the other side, how many times have I heard programmers say that it's not possible to implement a new feature because "the architecture doesn't allow it"? More often than not, this means only that someone imagined in the past how the new feature would be, but reality had different ideas.

Finally, another interesting affinity is with the Open Closed Principle (the O in SOLID). The tricky part of OCP is always what to close and what to keep open, but there is no doubt that the more a design is "polished", the more it is closed. Keeping classes small, simple and unfinished also makes them very open to new extensions.
Usually it is the very last refactoring, the "I need to remove that little imperfection" one, which creates big headaches and makes our software brittle.

---

There are two possible critiques regarding wabi-sabi design:

  • Isn't this wabi-thingy just a way to justify the fact that you are unable to write clean code?
  • If everything changes, why bother designing at all? Let's write and rewrite spaghetti code every time they change their minds.

I think the first is a potential risk, while the second is a viable option for some kinds of projects, but it doesn't scale at all for big projects.

But rather than reply directly, I'll try to explain what I mean in the next post with a couple of real code examples.


Stay tuned!