Tuesday, July 21, 2009

Agile Development - Lean. Eliminate Waste

I am back from a nice vacation in Brazil and ready to write a little more about Agile development. In the last article I gave a 30,000-foot view of Lean; today I am going to take one of the principles (Eliminate Waste) and go a little deeper.

Waste

What is waste in software development? I described it in my last post: everything not adding value to the customer is considered waste. In other words: "In software development, waste is anything that does not improve the quality of the code, does not reduce the amount of time and effort it takes to produce code, or does not deliver business value to the customer." That is still too generic.

In the Poppendiecks' book, complexity is described as a great promoter of waste: "Complexity calcifies our code and causes it to turn brittle and break." The prescription for complexity in software development is simple: write less code!

Complexity

Just about any company that starts with a single technology is agile by definition. It can deliver and respond to the customer with ease. As the company grows and achieves success, it starts to become slow and unresponsive. It has developed a complex code base and an array of products and technologies. Unless it gets complexity under control, it will strangle what made it successful: the responsiveness, the agility.

The problem with complexity is that it does not grow linearly; it grows exponentially. Soon it starts to dominate all other costs in the software development process. Wise software development organizations place top priority on keeping the software clean, small, and simple.

So, how do we deal with complexity? Here is some food for thought.

Justify Every Feature

The very first step in controlling complexity is limiting the features and functions that make it into the code base. It is like a diet: you will be able to control your weight if you control what goes into your body. So every feature that gets developed should pass a rigorous evaluation and prove that it will create more economic value than it costs throughout its lifecycle. Loading software with a laundry list of features is the lazy way to do marketing. Worse than that, it is a recipe for disaster, since complexity will increase with not-so-valuable features.

It takes guts to limit the feature set, but it usually pays off. A company that delivers a product with just the right amount of features shows that it understands its customers.

Minimum Useful Feature Set

In agile, the ideal approach to delivering software is to divide the product into small feature sets that immediately deliver value to the customer, and to release these feature sets in descending order of importance. That way the customer can quickly start using the software. So a minimum useful feature set is one that helps the customer do a portion of their job faster.

From the sustainability point of view, software developed in an incremental way is easier to maintain, because incremental development can be sustained for the lifetime of the software. Of course, other disciplines surround the incremental process and promote the easy-to-change characteristics of the software. Refactoring supported by unit testing is one of them.

Don't Automate Complexity

Sometimes software companies solve complex problems by wrapping the existing process in a complex web of software. Never automate complexity. Understand the problem and simplify the process; then, if necessary, automate. In other words, you master the technology; your customer has the domain knowledge of the business. It is your job to simplify and automate (solve) that problem in the simplest way possible.

What is Waste in Software Development, then?

The Poppendiecks define seven wastes:

Partially Done Work

The inventory of software development is partially done work. The objective is to move from the start of work on a system to integrated, tested, documented, deployable code in a single, rapid flow. The only way to accomplish this is to divide work into small batches, or iterations. Examples of partially done work are: uncoded documentation, unsynchronized code, untested code, undocumented code, and undeployed code.

Extra Features

Waste in software is adding features that are not needed to get the customer's current job done. If there isn't a clear and present economic need for the feature, it should not be developed.

Relearning

Rediscovering something we once knew and have forgotten is perhaps the best definition of "rework" in development. Knowledge needs to be captured and made widely available to the development team. Another way to waste knowledge is to ignore the knowledge people bring to the workplace by failing to engage them in the development process. This is even more serious than losing track of the knowledge generated. It is critical to leverage the knowledge of all workers by drawing on the experience they have built up over time.
Handoffs

Passing knowledge on is a strange science. You can document your architecture and give it to an engineer, and most likely it will not be of much help. On the other hand, if you sit with this person during the transfer and give him some pointers on aspects of the software, the testing, and the history behind what was done, most likely your engineer will soon be able to ride by himself. This kind of knowledge is called tacit knowledge, and it is very difficult to hand off using documentation.


When work is handed off, a vast amount of tacit knowledge is left behind in the mind of the originator. If each handoff leaves 50% of the knowledge behind, only about 3% of the knowledge remains after 5 handoffs.
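The arithmetic behind that figure is simple compounding (the 50% retention rate is just the book's illustrative number); a quick sketch:

```csharp
using System;

class HandoffLoss
{
    static void Main()
    {
        double retained = 1.0; // all of the originator's knowledge at the start

        for (int handoff = 1; handoff <= 5; handoff++)
        {
            retained *= 0.5; // each handoff passes on only 50%
            Console.WriteLine($"After handoff {handoff}: {retained:P1} of the knowledge remains");
        }
        // After the fifth handoff, retained == 0.03125, i.e. about 3%.
    }
}
```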


So, to minimize the loss:



  • Reduce the number of handoffs.
  • Use design-build teams (complete, cross-functional teams) so that people can teach each other how to ride the code.
  • Use high-bandwidth communication.
  • Release partial or preliminary work for consideration and feedback.
Task Switching

Concentrate on one task at a time and finish it before switching to another. Switching to a different task is not only distracting, it takes time and often detracts from the result of both tasks. Task-switching time is waste.
Delays

Waiting for someone in order to get your work done is a big waste. That is why a process like Scrum puts so much effort into keeping the engineers unblocked. Again, communication plays a very important role in minimizing delays. Have your team collocated, or use collaboration tools that promote open communication. Hold your daily meeting and try to identify and correct any roadblocks.
Defects

Every code base should include a set of mistake-proofing tests that do not let defects into the code, both at the unit-testing level and at the acceptance level. Somehow software still finds devious ways to fail, so testing experts should start testing early and often to find as many of these unexpected errors as possible.

Whenever a defect is found, a test should be created so that it can never happen again. A good agile team has an extremely low defect rate, because its primary focus is on mistake-proofing the code and making defects unusual. The secondary focus is on finding defects as early as possible and looking for ways to keep that kind of defect from recurring.
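To make that concrete, here is a minimal sketch of such a mistake-proofing test (the Discount class and the scenario are invented for illustration, not from the book). Imagine a defect where applying a discount greater than 100% produced a negative price; after fixing it, we pin the behavior with a test so the same defect can never come back unnoticed:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class where the defect was found and fixed.
public static class Discount
{
    public static decimal Apply(decimal price, decimal rate)
    {
        decimal discounted = price - price * rate;
        return discounted < 0m ? 0m : discounted; // the fix: never go below zero
    }
}

[TestClass]
public class DiscountRegressionTests
{
    // This test encodes the defect scenario; if the clamp is ever removed,
    // the test fails and the defect cannot silently recur.
    [TestMethod]
    public void ApplyingMoreThanFullDiscountYieldsZeroNotNegative()
    {
        Assert.AreEqual(0m, Discount.Apply(10.00m, 1.5m));
    }

    // A normal case, to make sure the fix did not break regular discounts.
    [TestMethod]
    public void ApplyingHalfDiscountHalvesThePrice()
    {
        Assert.AreEqual(5.00m, Discount.Apply(10.00m, 0.5m));
    }
}
```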


Conclusion

We touched on several subjects related to waste in software development, and the text is self-explanatory. I personally place a lot of importance on the defect aspect of building software. If we could "easily" convince engineers and QA that testing first enhances development speed, mistake-proofs the code, and helps with refactoring, our lives maintaining and enhancing the software would be much easier.

In addition, applying acceptance and unit testing has a deeper meaning than just mistake-proofing the code. Acceptance tests are best when they constitute the design of the product and match that design to the structure of the domain. Unit tests are best considered the design of the code; writing unit tests before writing code leads to simpler, more understandable, and more testable code. These tests tell us exactly and in detail how we expect the code and the ultimate product to work. As such, they also constitute the best documentation of the system, documentation that is always current because the tests must always pass.

Reference

Implementing Lean Software Development: From Concept to Cash. Mary and Tom Poppendieck.

Wednesday, June 17, 2009

Agile Development - Lean

In the last article I started mumbling about Agile and why companies were adopting Agile methods. The focus was on why projects were failing under traditional software development methods.

Until recently I was mostly focused on Scrum and eXtreme Programming, but I always heard good things about the Lean approach. So I went to the net and got a book that I recommend: Implementing Lean Software Development by Mary and Tom Poppendieck (http://www.amazon.com/Implementing-Lean-Software-Development-Addison-Wesley/dp/0321437381/ref=sr_1_2?ie=UTF8&s=books&qid=1245442337&sr=8-2). This article is an attempt to summarize the ideas as I understood them and get you started on the subject.

Lean Software Development

Let’s start with a more formal definition. Lean software development is a translation of lean manufacturing principles and practices to the software development domain. Adapted from the Toyota Production System, a pro-lean subculture is emerging from within the Agile community.

Lean Software Development is not a management or development methodology per se, but it offers principles that are applicable in any environment to improve software development.

So the tenets or principles of this method can be mixed with the process defined by Scrum, for example. More and more I feel that there is no single right method; smart leaders and managers should be able to use the best of each approach, adapting it to the reality of their group. Here are the Lean principles:

1. Eliminate Waste

Basically, everything not adding value to the customer is considered waste. In other words: "In software development, waste is anything that does not improve the quality of the code, does not reduce the amount of time and effort it takes to produce code, or does not deliver business value to the customer."

2. Amplify Learning

Constant learning will minimize the number of problems/bugs. The best approach for improving a software development environment is to amplify learning. The creation of bugs should be prevented by running tests as soon as the code is written (TDD). Instead of adding more documentation or detailed planning, different ideas can be tried by writing code (spikes).

For programmers to develop a system that delivers business value, they will have to learn about many things. Some are technical, some are about requirements, but overall the learning will benefit both.

3. Decide as late as possible

The development process is always associated with some uncertainty. By delaying decisions to the "last responsible moment" you improve your chances of avoiding change. In a nutshell, the principle states that you should delay decisions as much as possible, until they can be made based on facts and not on uncertain assumptions and predictions.

4. Deliver as fast as possible

"Deliver as fast as possible" is closely related to the iterative Scrum model. We are not talking about delivering the whole product as fast as possible, but verticals that deliver value to the customer.

If we can deliver quickly, we can receive feedback more quickly and therefore fine-tune our direction with more certainty. Constant course correction is important in an agile method.

5. Empower the team

The quality of a software team (the people factor) is the most important element in successfully delivering software. In order to get people to take responsibility, get motivated, and gel as a team, they need to be responsible for the outcome and authorized to make it happen.

The pyramid leadership model is long gone. You cannot do everything by yourself. You have to empower your team: give them responsibilities and, of course, make them accountable.

6. Build integrity in

The ever-demanding customer needs to have an overall experience of the system; this is the so-called perceived integrity: how it is advertised, delivered, deployed, and accessed, how intuitive its use is, its price, and how well it solves problems.

Conceptual integrity means that the system’s separate components work well together as a whole with balance between flexibility, maintainability, efficiency, and responsiveness. This could be achieved by understanding the problem domain and solving it at the same time, not sequentially.

7. See the whole

Software systems nowadays are not simply the sum of their parts, but also the product of their interactions. Defects in software tend to accumulate during the development process; by decomposing big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated.

Conclusion

I will continue talking about Agile in the next article (probably some basics of Scrum or tools used in Lean), but the lesson I take from this is to understand the principles and slowly bring them into the methodology being used by your team.

If you can’t convince your management that Agile is the way to go, at least you can use some of the concepts (TDD, pair programming, sprints, etc.). I guarantee that picking at least one will bring benefits to the overall success of the team. Next time, try Test-Driven Development for a while and you will see the code quality improve.

Enough for today… until next time

Reference

Lean Software Development Overview

Lean software development (Wikipedia)

Friday, June 12, 2009

Agile Development

It has been a while since I wrote my last article. I am going through a transition period and trying to identify which new area I will tackle.

Since I’ve always had a weakness for studying development processes, I’ve decided to pause the technical articles and talk a little bit about Agile development.

In this particular article I will talk about why more and more companies are adopting some form of agile process.

Causes of Software failures

In most unsuccessful projects, the following factors are identified as causes:

Frequently and Rapidly Changing Customer Requirements

The problem with conventional software development such as waterfall is the assumption that customer requirements are static and can be defined by a predetermined system. It is interesting that although, throughout the years, we slipped schedules or made inhuman efforts to deliver the software on time, we never stopped and realized that something was wrong with the process: that requirements were changing while we put our foot down and said, "You can’t change this because it is not in the functional specification."

Requirements change, and change frequently, throughout the life of most systems; they cannot be adequately fulfilled by a rigid design. "Do it right" has also been misinterpreted as "don’t allow changes". If changes are not allowed, customers are not satisfied. If changes are allowed, the company will have problems delivering the software in compliance with the project baseline.

Supreme Control in One Pair of Hands

By and large, software development companies still follow the central-control model, where decisions "need" to come top-down. This method actually increases the lead time and makes the whole process rigid and slow.

Agile processes favor a more democratic decision chain, where the "soldiers"/team in the field are the ones responsible and accountable for the decision.

Rigid Project Scope Management

Holding the project scope to exactly what was envisioned at the beginning of a project offers little value to the user whose world is changing. In fact, it imparts anxiety and paralyzes decision-making, ensuring only that the final system will be partially outdated by the time it's delivered.

The way we build software today should be prepared for changes and allow the user to make them at any time. Of course, changes in the requirements may incur changes in schedule or delivery order, but nonetheless the goal is to deliver a final product molded according to what the customer asks.

Traditional methods favor linear development

Most of the quality issues in software components also stem from the linearity of the development process, which does not allow iterations and quality checks to occur before the components move to the next stage in the development cycle. So development progresses even when there are quality issues, i.e. bugs.

Conclusion

This article presented the common problems found in the most traditional development processes. Traditional development has always been rigid: not promoting requirement changes or democratic decision making, and not engaging QA early in the process.

In the next articles I will talk about some of the agile methods I have been exposed to: Scrum, Lean, eXtreme Programming. The goal is to present personal experiences with these methods, exposing the good and the ugly, and hopefully opening up the articles for discussion.

Friday, April 17, 2009

NMock and TDD

I’ve always been an advocate for TDD, although I think it is an uphill battle. I haven’t given up on having my group adopt TDD, but I wish they were using it more and more.

Today I was writing some unit tests on an object model that had many dependencies, and I noticed the number of cr*p objects (fakes) I had to have in order to test a simple piece of functionality.

Although I use Unity to inject these instances, it was still a bunch of static classes that didn’t do much. This monolithic approach was hard to change or adapt to different situations.

This situation made me leave my comfort zone and do some research on NMock. I have to tell you… I was hitting my head on my desk wondering why I did not use this before.

So I will spend some time talking about NMock and how you can benefit from using it in conjunction with TDD.

NMock 2.0

NMock is a dynamic mock object library for .NET. Mock objects make it easier to test single components—often single classes—without relying on real implementations of all of the other components. This means we can test just one class, rather than a whole tree of objects, and can pinpoint bugs much more clearly. Mock objects are often used during Test Driven Development.

A dynamic mock object:

  • takes on the interface of another object, allowing it to be substituted for a real one for testing purposes.
  • allows expectations to be defined, specifying how the class under test is expected to interact with the mock.
  • fails the test if any of the expectations are violated.
  • can also act as a stub, allowing the test to specify objects to be returned from mocked methods.

The Best Way to Be Impressed Is by Example

The scenario is a PermissionSystem that relies on an authenticated user with certain permissions to access a resource. We want to test whether a specific user can access the resource. The resource can be public, private, or semi-private. For simplicity we will only present the following rules, and we will only test a single path:

  • A public resource
      • An anonymous user cannot post
      • An authenticated user can post
  • A private resource
      • An anonymous user cannot post
      • An authenticated user can only post if they belong to a role

The code presented is not functional; it is only an example of how you can set up expectations in your mocks in order to simulate scenarios.

The IUser interface

interface IUser
{
    long Id { get; set; }
    bool IsAuthenticated { get; set; }
}



The Permission Service Interface





public interface IPermissionService
{
    bool CanPerformTask(IContext context, long cid, string permission, string resource);
    bool IsUserInRole(string role, IResource resource);
}





The main service would look like this (not really, but… you get the idea):





public class PostService
{
    private readonly IUser user;
    private readonly IPermissionService service;

    public PostService( IUser user, IPermissionService service )
    {
        this.user = user;
        this.service = service;
    }

    public void Post( object post, IResource resource )
    {
        if( !user.IsAuthenticated )
        {
            throw new ApplicationException("User cannot post");
        }

        UserAccess access = resource.UserAccess;   // read once
        if( access == UserAccess.Public )
        {
            DoPost( post );
        }
        else if( access == UserAccess.Private )
        {
            if( service.IsUserInRole( "Authorized", resource ) )
            {
                DoPost( post );
            }
            else
            {
                throw new ApplicationException("User cannot post");
            }
        }
    }

    private void DoPost( object post )
    {
        // ... actually publish the post ...
    }
}






The intent of the test is to verify that the behavior of the Post method is correct when a user is authenticated but does not belong to the Authorized role.





   1: [TestMethod()]
   2: [ExpectedException( typeof(ApplicationException) )]
   3: public void TryPostWithAuthenticatedUserNotInAuthorizedRole()
   4: {
   5:     Mockery mocks = new Mockery();
   6:     IUser loggedUser = mocks.NewMock<IUser>();
   7:     IPermissionService permissionService = mocks.NewMock<IPermissionService>();
   8:     IResource resource = mocks.NewMock<IResource>();
   9:
  10:     Expect.Once.On(loggedUser)
  11:         .GetProperty("IsAuthenticated")
  12:         .Will(Return.Value(true));
  13:
  14:     Expect.Once.On(permissionService)
  15:         .Method("IsUserInRole")
  16:         .WithAnyArguments()
  17:         .Will(Return.Value(false));
  18:
  19:     Expect.Once.On(resource)
  20:         .GetProperty("UserAccess")
  21:         .Will(Return.Value(UserAccess.Private));
  22:
  23:     object post = new object();
  24:     PostService postService = new PostService(loggedUser, permissionService);
  25:     postService.Post(post, resource);
  26: }

  • Line 5: we are creating the mock factory, if you will. This object will be responsible for creating dynamic proxies for your interfaces.

  • Lines 6, 7, and 8: you are creating dynamic mock objects that mirror your interfaces.

  • The next step is to set expectations. When you code by intention, expectations are very clear. For this test you expect the user to be authenticated but not to be in the specific role called Authorized. In addition, you want the resource to be private.



    • In the first expectation you are saying: if the property IsAuthenticated is called on my mock, return true.


    • In the second statement you are saying: if the method IsUserInRole is called on my mock permission service, with any arguments, return false.


    • And if the resource.UserAccess property is called, return Private.




What is interesting about this construction is that your unit test not only tests the behavior of the Post method but also the expectations, i.e. how Post should relate to its dependent objects. When you state Expect.Once.On, you are saying that the Post method should call this method or property exactly once. If that rule is broken, your unit test will fail.



At the end of the test you can still verify that all expectations were met (mocks.VerifyAllExpectationsHaveBeenMet()). For example, if IsUserInRole is not called at all, you have a problem: your test will fail.



Better than that: by changing one expectation you can test a totally different aspect of the Post method. For example, make IsUserInRole return true.
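As a sketch of that flip, built on the same interfaces assumed above, the only expectation that changes is the one on IsUserInRole; with it returning true, Post is expected to succeed instead of throwing:

```csharp
[TestMethod()]
public void TryPostWithAuthenticatedUserInAuthorizedRole()
{
    Mockery mocks = new Mockery();
    IUser loggedUser = mocks.NewMock<IUser>();
    IPermissionService permissionService = mocks.NewMock<IPermissionService>();
    IResource resource = mocks.NewMock<IResource>();

    Expect.Once.On(loggedUser)
        .GetProperty("IsAuthenticated")
        .Will(Return.Value(true));

    // The one flipped expectation: the user IS in the Authorized role now.
    Expect.Once.On(permissionService)
        .Method("IsUserInRole")
        .WithAnyArguments()
        .Will(Return.Value(true));

    Expect.Once.On(resource)
        .GetProperty("UserAccess")
        .Will(Return.Value(UserAccess.Private));

    PostService postService = new PostService(loggedUser, permissionService);
    postService.Post(new object(), resource); // no exception expected this time

    mocks.VerifyAllExpectationsHaveBeenMet(); // all three expectations were exercised
}
```

Note that everything but the Return.Value(true) is identical to the previous test; that is what makes mock-based tests so cheap to vary.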



Here is a cheat sheet for NMock that will get you going.



Conclusion



Programmers working with the test-driven development (TDD) method make use of mock objects when writing software. Mock objects meet the interface requirements of, and stand in for, more complex real ones; thus they allow programmers to write and unit-test functionality in one area without actually calling complex underlying or collaborating classes. Using mock objects allows developers to focus their tests on the behavior of the system under test (SUT) without worrying about its dependencies. For example, testing a complex algorithm based on multiple objects being in particular states can be clearly expressed using mock objects in place of real objects.



Apart from complexity issues and the benefits gained from this separation of concerns, there are practical speed issues involved. Developing a realistic piece of software using TDD may easily involve several hundred unit tests. If many of these induce communication with databases, web services and other out-of-process or networked systems, then the suite of unit tests will quickly become too slow to be run regularly. This in turn leads to bad habits and a reluctance by the developer to maintain the basic tenets of TDD.



When mock objects are replaced by real ones, the end-to-end functionality will need further testing. These will be integration tests rather than unit tests.



If before you were able to write tests first and then code, now with mocks things are much easier. You can start by defining your interfaces, and quickly write tests that reflect the intentions for the class you are testing.



By using mocks, the creation process becomes more organic, since you don’t need to have every single implementation of your interfaces in place in order to start testing and coding. You can always isolate the subject of the tests and play around with your mocks.

Wednesday, April 1, 2009

Code Standard and its benefits

It is a fact of life -- developers like to write code in different styles. Show us code written by ten different developers and we can show you ten different coding styles. So why should we try to develop and enforce coding standards? – Nigel Cheshire and Richard Sharpe

Motivation

This week, after coming back from vacation and reviewing some code, I noticed that although we, as a company, advocate code standards, people still trade code clarity and style for "the so-called speed".

That motivated me to write about the importance of coding standards and the benefits the group gains if everyone adheres to them.

Introduction

Believe it or not, I still remember the time when we programmed C on Unix and the pride we would take in knowing that we were the only ones who understood our own code (at least while we were working on it). Variables with obscure names (a1, c4, etc.) added to the mystique of being a computer scientist.

Nowadays, with the improvement of the internet, communication, collaboration, and other buzzwords that imply working together, code standards have become a necessity: a tool to improve communication. The value is no longer in the mystique, but in coding in such an elegant way that the code communicates.

So, why code standards? Code standards are important for many reasons. First and foremost, they specify a common format for source code and comments. This allows developers to easily share code, and the ideas expressed within the code and comments, with each other. Most importantly, a well-designed standard will also detail how certain code should be written, not just how it looks on screen.

More important than the reasons for having a standard is actually adhering to it consistently. Having a coding standard documented and available means nothing if developers are not using it consistently. If that becomes the case, it is no longer a coding standard; it has become a coding suggestion. Worse yet are the developers who do not have a style or standard at all, and think that their lack of a style/standard IS a coding style/standard!

Style vs. Standard

It is important to differentiate between a coding style and a standard. A code style specifies things like how you indent, tabs vs. spaces, use of "Hungarian" notation for names and variables, etc. A standard often includes a coding style, but goes beyond just how to name a variable: it tells you how that variable should be treated and when it should (and should not) be used.

So it is clear that creating a standard is no easy task. The good news is that for us .NET programmers the standard already exists, and tools to enforce it are part of the development environment we use:

  • StyleCop - StyleCop analyzes C# source code to enforce a set of style and consistency rules. It can be run from inside Visual Studio or integrated into an MSBuild project.
  • FxCop (or Microsoft Code Analysis) - FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. Many of the issues concern violations of the programming and design rules set forth in the Design Guidelines for Class Library Developers, which are the Microsoft guidelines for writing robust and easily maintainable code using the .NET Framework.
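To give a flavor of what these tools enforce, here is a small class written roughly the way StyleCop wants it (the class itself is an invented illustration): XML documentation headers on public members, explicit access modifiers, and a shared file layout:

```csharp
namespace MyCompany.Billing
{
    using System; // StyleCop's default rules want using directives inside the namespace

    /// <summary>
    /// Represents an invoice line item. StyleCop requires XML documentation
    /// headers like this one on public types and members.
    /// </summary>
    public class LineItem
    {
        /// <summary>
        /// Gets or sets the unit price of the item.
        /// </summary>
        public decimal UnitPrice { get; set; }

        /// <summary>
        /// Gets or sets the quantity ordered.
        /// </summary>
        public int Quantity { get; set; }

        /// <summary>
        /// Computes the total for this line.
        /// </summary>
        /// <returns>The unit price multiplied by the quantity.</returns>
        public decimal Total()
        {
            return this.UnitPrice * this.Quantity; // the explicit "this." prefix is another StyleCop rule
        }
    }
}
```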

These tools tend to be a pain initially, when the programmer has so many bad habits that he/she spends more time fixing mistakes than coding. But as you become familiar with the standard, you start to create good habits and program to the standard.

I personally tend to create snippets: little tools that help me code to the standard even more easily. Nowadays I know exactly what StyleCop will complain about if I do something wrong.

So what are the benefits you gain by adhering to a standard?

Benefits for Developers

  • The source code will be more comprehensible and will become easier to maintain. As programmers become familiar with the standard and use the supporting tools more and more, they tend to program to the standard and produce better code.
  • A uniform approach to solving problems comes in handy, because coding standards documents capture recommended methods that were tried and tested on earlier projects.
  • Less communication between developers and managers will be needed, because programmers will no longer have to ask about details of the specification document: the defaults are stated in the standard.
  • It is common for the less experienced programmer to re-invent the wheel. When there are coding standards, there is a big chance that a particular problem is not really a new problem; in fact, a solution may have been documented before.

Benefits for the Quality Assurance Team

  • Well-documented coding standards aid the creation of test scripts. Reviewing the source code and testing an application based on compliance with the coding standards adds strong direction to ensuring the quality of the software product.
  • Of course, the improved readability of the code improves QA's understanding of the functionality, which in turn improves the creation of automation or test scripts.

Benefits for Program Managers

  • It is important for project managers to maintain and secure source code quality in their projects. Implementing coding standards can jumpstart this goal halfway to its realization.
  • Repeated performance pitfalls can be avoided. It is a common case that a released software product is less impressive performance-wise once real data has been loaded into the newly developed database application.
  • Lower man-hour consumption overall as a result of implementing coding standards.

Adhering to Code Standard

As shown above, adhering to a well-designed code standard can give your software development an edge. The hard part is convincing developers to adhere to it. Often we hear from developers that adhering to a standard slows them down, that the tool is too restrictive, that standards take the "creativity" out of programming, etc.

It is understandable that in crunch time you could argue that. Understandable, but not excusable. Tools and techniques are out there to facilitate adherence. Not adhering shows a lack of commitment and of the desire to improve.

While I am not going to describe the various methods that can be used to achieve this goal, I will mention one: Accountability.

Accountability will generally cause developers to write better code.  If they wrote a bug, they should fix it.  If the bug would have been prevented by adhering to the standard, it should be brought to their attention.  The flip side of this is that they should be rewarded for writing good code.

The secret to coding to a standard is to use StyleCop or similar tools constantly. You may be slowed down initially, but it will pay off for you and your team later.

Conclusion

Coding standards are being adopted by more and more development organizations. Estimates suggest that 80% of the total cost of a piece of software goes into maintenance, and not enough effort is being directed at ensuring that the quality is maintained during the development process. Further, development and Quality Assurance teams are spending too much time in code reviews, correcting coding standards violations that could be detected and in some cases corrected automatically.

Tools for code analysis and coding style are out there to help achieve these goals. The challenge is not in the technology but in educating developers on the benefits of adhering to it.

Friday, March 13, 2009

Design Patterns

We software professionals owe design patterns to a building architect, Christopher Alexander. In the 1970s, Alexander developed a pattern language with the purpose of letting individuals express their innate sense of design through a sort of informal grammar.

So what is a pattern? A design pattern is a known, well-established core solution applicable to a family of concrete problems that might show up during implementation. Being a core solution, it might need adaptation to a specific context.

Using design patterns does not by itself make your solution more valuable, because at the end of the day the only thing that matters is that the application works and meets the requirements.

Patterns might be an end when you refactor according to them, and they might be a means when you face a problem that is clearly resolved by a particular pattern. Patterns are not an added value for your solution, but they are valuable for you as an architect or a developer looking for a solution.

Patterns vs. Idioms

Software patterns indicate well-established solutions to recurring design problems. Sometimes specific features of a given programming language can help solve a recurring problem quickly and elegantly. When a solution is hard-coded into the language or available out of the box, it is called an idiom.

As far as .NET is concerned, a set of idiomatic design rules exists under the name Framework Design Guidelines. You can access them online at http://msdn.microsoft.com/en-us/library/ms229042.aspx, and you can also find useful information at http://blogs.msdn.com/kcwalina.

Here are some examples:

  • Idiomatic Design: Structures or Classes? – The guideline suggests that you always use a class unless the footprint of the type is below 16 bytes and the type is immutable.
  • Idiomatic Design: Do not use List&lt;T&gt; in public signatures. One of the reasons for this guideline is that List&lt;T&gt; is a rather bloated type with many members that are not relevant in many scenarios. This means that List&lt;T&gt; has low cohesion and to some extent violates the Single Responsibility Principle. Another reason is that the class is unsealed but not specifically designed to be extended. It is therefore recommended that you use IList&lt;T&gt; instead, or derived interfaces, in public signatures. Alternatively, use custom classes that directly implement IList&lt;T&gt;.
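The same guideline applies beyond .NET. Here is a minimal sketch in Java (the repository class and its members are hypothetical, chosen only to illustrate the point): the public signature exposes the collection interface, not the concrete class, so the backing implementation can change without breaking callers.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical repository: callers see only the List interface.
class CustomerRepository {
    private final List<String> customers = new ArrayList<>();

    void add(String name) {
        customers.add(name);
    }

    // Returning List (not ArrayList) keeps the signature stable if we
    // later switch to a LinkedList, a cached copy, or a database cursor.
    // Wrapping in an unmodifiable view also protects the internal state.
    List<String> findAll() {
        return Collections.unmodifiableList(customers);
    }
}
```

Callers code against `List<String>` only, which is exactly the low-coupling effect the guideline is after.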

Dependency Injection

DIP (the Dependency Inversion Principle) has been the buzzword in my group lately, as it has been widely used as a pattern. DIP states that higher-level modules should depend on abstractions rather than on concrete implementations of functionality. Inversion of Control (IoC) is an application of DIP that refers to situations where generic code controls the execution of more specific code and external components.

IoC resembles the Template Method pattern (http://en.wikipedia.org/wiki/Template_method_pattern), where you have a method whose code is filled with one or more stubs. The functionality of each stub is provided by external components invoked through an abstract interface. Replacing any external component does not affect the high-level method. Of course, IoC is not applied to a specific method but throughout the code.
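The idea above can be sketched in a few lines. This is a toy example in Java (the `Formatter` and `ReportPrinter` names are hypothetical): the high-level method controls the flow, while the "stub" is supplied from outside through an abstract interface, so swapping implementations never touches the high-level code.

```java
// The abstract interface behind which the stub hides.
interface Formatter {
    String format(String raw);
}

class ReportPrinter {
    private final Formatter formatter; // injected dependency

    ReportPrinter(Formatter formatter) {
        this.formatter = formatter;
    }

    // High-level method: its flow is fixed; only the stub varies.
    String print(String raw) {
        String body = formatter.format(raw); // stub filled externally
        return "== report ==\n" + body;
    }
}
```

Passing a different `Formatter` (say, one that upper-cases, or one that escapes HTML) changes the behavior without modifying `ReportPrinter` at all.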

Today IoC/DI is often associated with special frameworks that offer a number of rather advanced features.

Here are some examples:

  • Castle Windsor: http://www.castleproject.org/container/index.html
  • Ninject: http://www.ninject.org/
  • Spring .NET: http://www.springframework.net/
  • StructureMap: http://structuremap.sourceforge.net/Default.htm
  • Unity: http://www.codeplex.com/unity

Those are very interesting frameworks to take a look at and get some ideas from. I am personally familiar with Unity and Spring.

All IoC frameworks are built around a container object that, bound to some configuration information, resolves dependencies. The caller code instantiates the container and passes the desired interface as an argument. In response, the IoC/DI framework returns a concrete object that implements that interface.
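To make the bind/resolve cycle concrete, here is a deliberately tiny, hand-rolled container sketch in Java. It is not how Unity or Spring are implemented, just the minimal shape of the idea; all type names are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy container: configuration maps an interface to a factory;
// resolve() hands back a concrete instance for the requested interface.
class TinyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> iface, Supplier<? extends T> factory) {
        bindings.put(iface, factory);
    }

    <T> T resolve(Class<T> iface) {
        Supplier<?> factory = bindings.get(iface);
        if (factory == null) {
            throw new IllegalStateException("No binding for " + iface.getName());
        }
        return iface.cast(factory.get());
    }
}

interface Greeter { String greet(); }

class EnglishGreeter implements Greeter {
    public String greet() { return "hello"; }
}
```

The caller only ever mentions `Greeter`; which concrete class comes back is decided entirely by the container's configuration, which is the inversion the frameworks industrialize.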

I will spend some time in the next post talking about Unity and then get back to our pattern discussion.

Thursday, March 5, 2009

Design Principles And Patterns

I am dedicating this area to talking about anything related to software. My first series of posts will be related to software architecture and patterns.

This initial post will outline some basic principles that an architect should always have in the back of his/her mind. Then I will talk a bit about some OOD principles. As we discuss these topics, patterns, idioms, and other subjects will come into play.

Bad Smells in the Code

There are certain things, “smells”, that I look for when doing code review. These smells separate good code from bad code.

  • Rigid, therefore fragile. Rigid code is code that resists change. This is measured in terms of regressions: you fix a bug here, and it breaks someplace else. This kind of code (with hidden dependencies) is fragile.
  • Easier to use than reuse. If you try to reuse the code in another project, forget it! The code has so many dependencies that it just does not work.
  • Easier to work around than to fix. Many times you face this decision: fix a problem in an elegant way, which is a lot of work, or hack it. When a hack is faster to apply than the real solution, engineers tend (due to schedule pressure or pure laziness) to take the easy road. Dino Esposito and Andrea Saltarello mention this in their book (Microsoft .NET: Architecting Applications for the Enterprise): "This aspect of design, that it invites or accommodates workarounds more than fixes, is often referred to as viscosity." High viscosity means the software resists change.

Structured Design

  • From spaghetti to lasagna code. I think this phrase says it all: favor layered software, which is reusable and modular.
  • Cohesion. Measures how strongly related and focused the various responsibilities of a software module are. Target highly cohesive modules; they favor maintenance and reusability because they tend to have few dependencies. Read Ward Cunningham (http://c2.com/cgi/wiki?CouplingAndCohesion): he says that two modules, A and B, are cohesive when a change to A has no repercussions for B, so that both modules can add new value to the system.
  • Coupling. Measures the level of dependency existing between two software modules, such as classes, functions, and libraries. Favor low coupling.

Advanced Principles

  • The Open/Closed Principle. The idea is simple: produce code that can survive change. We need some mechanism that allows us to create new code without breaking existing code that works. Basically, "a module should be open for extension but closed for modification." Every time a change is required, you enhance the behavior of the class by adding new code, never touching the existing old code that works.
  • Liskov’s Substitution Principle. This is an interesting one. Basically it states that "subclasses should be substitutable for their base classes."
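Both principles fit in one small sketch, shown here in Java with made-up shape classes. `AreaCalculator` is closed for modification but open for extension: supporting a new shape means adding a class, never editing existing code. And because every subclass honors the `Shape` contract, any of them substitutes for the base type (Liskov).

```java
// Base abstraction: the only thing the calculator depends on.
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    double area() { return w * h; }
}

// Adding Circle extended the system without touching AreaCalculator.
class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class AreaCalculator {
    // Closed for modification: this loop never changes for new shapes.
    double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
}
```

Contrast this with a `switch` over shape kinds inside `total`, which would have to be reopened and edited for every new shape, exactly what the Open/Closed Principle warns against.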

I am getting tired of writing, so in the next article I will continue the discussion on software design, concentrating on design patterns and code testability.

Cheers