vrijdag 19 juni 2009

WinForms: DataBinding on a cancellable Dialog Form

On some occasions, you might want to create a dialog window that enables a user to modify data.

In almost all situations, you will want to make use of the databinding capabilities in the .NET Framework, so that you do not have to write the tedious synchronization-code yourself.

However, when you want to databind a custom object to the controls on the dialog-form, databinding might get in the way:
When you change the Text in a TextBox that is databound on a property of your object, the property of your object will be updated as soon as the databound TextBox has been validated (default behaviour).
This is not a problem, until the user decides that he does not want to keep the changes he has made, and clicks the Cancel button of the dialog form.
In such a case, you want to revert back to the original state of the object.

There are several solutions to handle this problem.

One solution is to implement some kind of Undo-behaviour in your custom object, but this might be a little bit overkill for the situation at hand.

A simpler solution is to disable the 2-way databinding while the form is displayed, and explicitly update the databound object when the user clicks the Ok button of the dialog.

This can be done like this:

public class MyEditForm : Form
{
    public MyEditForm( MyClass objectToDisplay )
    {
        // Suppose that you have defined databindings via the
        // designer
        this.MyBindingSource.DataSource = objectToDisplay;

        // Suspend 2-way databinding while the form is displayed.
        DataBindingUtils.SuspendTwoWayBinding (this.BindingContext[this.MyBindingSource]);
    }

    public void btnOk_Click( object sender, EventArgs e )
    {
        // Explicitly persist the contents of the controls in the object.
        DataBindingUtils.UpdateDataBoundObject (this.BindingContext[this.MyBindingSource]);
        this.DialogResult = DialogResult.OK;
    }

    public void btnCancel_Click( object sender, EventArgs e )
    {
        this.DialogResult = DialogResult.Cancel;
    }
}

The concept is very simple; when the form is displayed, we suspend the databindings, so that changes that are made to the controls on the form, are not directly persisted in the underlying object.
When the user confirms the changes he has made by clicking the OK-button, we need to make sure that the contents of the controls are persisted in the underlying object.

The two methods that are of interest look like this:

public static class DataBindingUtils
{
    public static void SuspendTwoWayBinding( BindingManagerBase bindingManager )
    {
        if( bindingManager == null )
            throw new ArgumentNullException ("bindingManager");

        foreach( Binding b in bindingManager.Bindings )
        {
            b.DataSourceUpdateMode = DataSourceUpdateMode.Never;
        }
    }

    public static void UpdateDataBoundObject( BindingManagerBase bindingManager )
    {
        if( bindingManager == null )
            throw new ArgumentNullException ("bindingManager");

        foreach( Binding b in bindingManager.Bindings )
        {
            b.WriteValue ();
        }
    }
}
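To round the example off, calling code could use the dialog like this (a sketch; LoadCustomer and SaveCustomer are hypothetical helpers):

```csharp
// Show the dialog; the object is only updated when the user clicks OK.
MyClass customer = LoadCustomer ();

using( MyEditForm form = new MyEditForm (customer) )
{
    if( form.ShowDialog () == DialogResult.OK )
    {
        // The control contents have been written back to 'customer'
        // by the explicit WriteValue calls in the OK handler.
        SaveCustomer (customer);
    }
    // On Cancel, 'customer' is untouched: the bindings never wrote to it.
}
```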

vrijdag 15 mei 2009

Forcing NHibernate to cascade deletes before updates

At work, I've run into a particular problem; let me describe the situation at hand.

Suppose I have the following DB schema:

In the OrderLines table, there exists a UNIQUE CONSTRAINT on ( OrderId, SequenceNumber)

Next to this DB schema, I have an Order and an OrderLine class.
The Order class has a collection of OrderLines:

public class Order
{
    public int Id
    {
        get;
        private set;
    }

    public ISet OrderLines = new HashedSet ();

    public void AddOrderLine( OrderLine ol )
    {
        ol.Order = this;
        OrderLines.Add (ol);
    }
}
I've mapped them using NHibernate so that I can save them in the above DB schema.
In the mapping, I've specified that the OrderLines collection should be cascaded when the Order is saved:

<set name="OrderLines" cascade="all-delete-orphan">
  <key column="OrderId" />
  <one-to-many class="OrderLine" />
</set>
Now, all goes well until you remove an OrderLine and add a new OrderLine with the same SequenceNumber as the one you've just removed.

Instead of first removing the OrderLine that you want to remove, and inserting the new OrderLine afterwards, NHibernate will perform these actions just the other way around:

It will first try to insert the new OrderLine, and then it will remove the existing Orderline.
Now, since we have a unique constraint in the DB, this will of course fail.
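To make the failure concrete, here is a sketch of the offending scenario (FindBySequenceNumber and the Description property are hypothetical):

```csharp
// Replace the order line with sequence number 2 by a new one.
OrderLine oldLine = order.FindBySequenceNumber (2);
order.OrderLines.Remove (oldLine);

OrderLine newLine = new OrderLine { SequenceNumber = 2, Description = "replacement" };
order.AddOrderLine (newLine);

// On Flush, NHibernate INSERTs newLine before it DELETEs oldLine,
// so the unique constraint on (OrderId, SequenceNumber) is violated.
session.SaveOrUpdate (order);
session.Flush ();
```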

I've found a way to work around this problem. Although it's not optimal (rather an ugly hack), it kinda works, so I'll stick with it for now.
This is how I've done it:

  • Instead of specifying 'all-delete-orphan' as the cascade option for the collection, I've modified it to 'delete-orphan' instead.
    This means that deletes will be cascaded, but inserts & updates are not

  • All DB access goes through 'repositories' in my application. This means I have an OrderRepository which has a Save method.
    I have modified my Save method so it looks like this:

    public void Save( Order o )
    {
        // With.Transaction only starts a transaction when the given session
        // is not in a transaction yet.
        // session is an ISession and is a member variable of my Repository.
        With.Transaction (session, delegate
        {
            session.SaveOrUpdate (o);

            // Make sure the cascaded DELETE statements are sent to the
            // database before the remaining OrderLines are saved.
            session.Flush ();

            foreach( OrderLine ol in o.OrderLines )
            {
                session.SaveOrUpdate (ol);
            }
        });
    }
    What I do here is just call SaveOrUpdate on the given Order. Because of the cascading setting, only the OrderLines that are to be deleted will be cascaded.
    Afterwards, I call the Flush method of the session to make sure that the DELETE statements are actually sent to the database.

    We're now left with the OrderLine entities that are new or modified. To make sure that they get persisted as well, I loop over the OrderLines collection and call SaveOrUpdate for every OrderLine instance.
    This will make sure that new OrderLines get inserted and modified ones are updated.
    NHibernate will not update those OrderLines that are not changed.

Although this 'hack' is nicely hidden / abstracted by the Repository, I still find it a bit ugly, but at this very moment I see no better way to handle this kind of issue ...

dinsdag 2 september 2008

Google Chrome

The beta-version of Google's Chrome browser is available for download. I've just installed it, and I like it.
I love their new 'start-page' concept, for instance.

They've also created a comic where they explain the concepts and techniques of Chrome. Very interesting as well. :)

maandag 25 augustus 2008

On reading books ....

Davy Brion has made a statement on his blog in which he states that reading certain development books should be considered an investment for a software developer.

I fully support that statement.
I have a few books on my shelf that I consider as 'my Software Development Bibles'. These books have -imho- sharpened my skills, broadened my view and helped me to be a better developer.
These books -which I consider to be my bibles- are, in no particular order:

There are lots of other books on software development on my shelf, but I consider the ones above as the ones that have influenced me most.
Books, you can't get enough of them (you still have to read them as well, of course). There are still some books regarding software development on my wishlist, and I'm sure that every now and then, another book will be added to it...

dinsdag 19 augustus 2008

Locking system with aspect oriented programming


A few months ago, I had to implement a 'locking system' at work.
I will not elaborate too much on this system, but its intention is to let users prevent certain properties of certain entities from being updated automatically.
The software system in which I had to implement this functionality keeps a large database up-to-date by processing and importing lots of data files that we receive from external sources.
Because of that, in certain circumstances, users want to avoid that data they've manually changed or corrected gets overwritten with wrong information the next time a file is processed.

The application I'm talking about makes heavy use of DataSets, and I've been able to create a rather elegant solution for it.
At the same time, I've also been thinking on how I could solve this same problem in a system that is built around POCO's instead of Datasets, and that's what this post will be all about. :)

Enter Aspects

When the idea of implementing such a system first crossed my mind, I already realized that Aspect Oriented Programming could be very helpful to solve this problem.

A while ago, I already played with Aspect Oriented Programming using Spring.NET.
AOP was very nice and interesting, but I found the runtime weaving a big drawback. Making use of runtime weaving meant that you could not directly create an instance using its constructor.

So, instead of:

MyClass c = new MyClass();
you had to instantiate instances via a proxy factory:
ProxyFactory f = new ProxyFactory (new TestClass());

f.AddAdvice (new MethodInvocationLoggingAdvice());

ITest t = (ITest)f.GetProxy();

I am sure that you agree that this is quite a hassle, just to create a simple instance. (Yes, I know, of course you can abstract this away by making use of a Factory...)

Recently however, I bumped into an article on Patrick De Boeck's weblog, where he was talking about PostSharp.
PostSharp is an aspect weaver for .NET which weaves at compile-time!
This means that the drawback that I just described when you make use of runtime-weaving has disappeared.
So, I no longer had excuses to start implementing a similar locking system for POCO's.

Bring it on

I like the idea of Test-Driven-Development, so I started out with writing a first simple test:
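The original listing is lost; based on the description below (a Lock and an IsLocked method), the first test might have looked something like this. The AuditablePerson class name comes from this post; the NUnit-style assertions and the string-based property names are assumptions:

```csharp
[Test]
public void CanLockAProperty ()
{
    AuditablePerson p = new AuditablePerson ();

    Assert.IsFalse (p.IsLocked ("Name"));

    p.Lock ("Name");

    Assert.IsTrue (p.IsLocked ("Name"));
}
```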

The advantage of writing your test first is that you start thinking about what the interface of your class should look like.

This first test tells us that our class should have a Lock and an IsLocked method.
The purpose of the Lock method is to put a 'lock' on a certain property, so that we can avoid that this property is modified at run-time.
The IsLocked method is there to inform us whether a property is locked or not.

To define this contract, I've created an interface ILockable which contains these 2 methods.
In order to get this first test working, I've created an abstract class LockableEntity which inherits from one of my base entity classes and implements this interface.
This LockableEntity class looks like this:
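The listing itself is lost; based on the surrounding description, a minimal sketch could look like this (the Entity base class name and the string-keyed lock list are assumptions):

```csharp
public interface ILockable
{
    void Lock( string propertyName );
    bool IsLocked( string propertyName );
}

public abstract class LockableEntity : Entity, ILockable
{
    // Names of the properties that are currently locked.
    private readonly List<string> _lockedProperties = new List<string> ();

    public void Lock( string propertyName )
    {
        if( !_lockedProperties.Contains (propertyName) )
            _lockedProperties.Add (propertyName);
    }

    public bool IsLocked( string propertyName )
    {
        return _lockedProperties.Contains (propertyName);
    }
}
```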

This is not sufficient to get a green bar on my first test, since I still need an AuditablePerson class:

These pieces of code are sufficient to make my first test pass, so I continued with writing a second test:

As you can see, in this test-case I define that it should be possible to unlock a property. Unlocking a property means that the value of that property can be modified by the user at runtime.
To implement this simple functionality, it was sufficient to just add an UnLock method to the LockableEntity class:


Simple, but now, a more challenging feature is coming up.

Now, we can already 'lock' and 'unlock' properties, but there is nothing that really prevents us from changing a locked property.
It's about time to tackle this problem and therefore, I've written the following test:

Running this test obviously gives a red bar, since we haven't implemented any logic yet.
The most simple way to implement this functionality, would be to check in the setter of the Name property whether there exists a lock on this property or not.
If a lock exists, we should not change the value of the property, otherwise we allow the change.
I think that this is a fine opportunity to use aspects.

Creating the Lockable Aspect

As I've mentioned earlier, I have used PostSharp to create the aspects. Once you've downloaded and installed PostSharp, you can create an aspect rather easily.

There is plenty of documentation to be found on the PostSharp site, so I'm not going to elaborate here on the 'getting started' aspect (no pun intended).

Instead, I'll directly dive into the Lockable aspect that I've created.

This is what the definition of the class that implements the aspect looks like:

Perhaps I should first elaborate a bit on how I would like to use this Lockable aspect.

I'd like to be able to decorate the properties of a class that should be 'lockable' with an attribute. Like this:
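The original snippet is lost; decorating a property presumably looked something like this (the Name property on AuditablePerson is the example used elsewhere in this post):

```csharp
public class AuditablePerson : LockableEntity
{
    private string _name;

    [Lockable]
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}
```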

Decorating a property with the Lockable attribute, means that the user should be able to 'lock' this property. That is, prevent that it gets changed after it has been locked.
To be able to implement this, I've created a class which inherits from the OnMethodInvocationAspect class (which eventually inherits from Attribute).

Why did I choose this class to inherit from?
Well, because there is no such thing as an OnPropertyInvocation class.

As you probably know, the getters and setters of a property are actually implemented as get_ and set_ methods, so it is perfectly possible to use the OnMethodInvocationAspect class to add extra 'concerns' to the property.

This extra functionality is written in the OnInvocation method that I've overridden in the LockableAttribute class.

In fact, it does nothing more than checking whether we're in the setter method of the property, and if we are, check whether there exists a lock on the property.
If there exists a lock, we won't allow the property-value to be changed. Otherwise, we just make sure that the implementation of the property itself is called.
The implementation looks like this:
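The listing is lost; a sketch of the OnInvocation override, assuming the PostSharp 1.x OnMethodInvocationAspect API and the ILockable members mentioned in this post (exact member signatures are assumptions):

```csharp
public sealed class LockableAttribute : OnMethodInvocationAspect
{
    public override void OnInvocation( MethodInvocationEventArgs eventArgs )
    {
        MethodBase method = eventArgs.Delegate.Method;

        // We only care about the setter; getters may always proceed.
        if( method.Name.StartsWith ("set_") )
        {
            PropertyInfo property = GetPropertyForSetterMethod (method);
            ILockable target = (ILockable)eventArgs.Delegate.Target;

            if( target.IsLocked (property.Name) )
            {
                // Let the programmer decide what happens; this raises
                // the LockedPropertyChangeAttempt event.
                target.OnAttemptToModifyLockedProperty (property.Name);
                return;
            }
        }

        // Not locked (or a getter): execute the original implementation.
        eventArgs.Proceed ();
    }

    private static PropertyInfo GetPropertyForSetterMethod( MethodBase setter )
    {
        // set_Foo -> property Foo on the declaring type.
        string propertyName = setter.Name.Substring ("set_".Length);
        return setter.DeclaringType.GetProperty (propertyName);
    }
}
```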

Here, you can see that we use reflection to determine whether we're in the setter method or the getter method of the property; we're only interested in whether the property is locked when we're about to change its value.

Next, we need to get the name of the property for which we're entering the setter method. This is done via the GetPropertyForSetterMethod method which uses reflection as well to get the PropertyInfo object for the given setter-method.

Once this has been done, I can use the IsLocked method to check whether this property is locked or not.

Note that I haven't checked whether the conversion from eventArgs.Delegate.Target to ILockable has succeeded or not. More on that later ...

When the property is locked, I call the OnAttemptToModifyLockedProperty method (which is declared in ILockable), which just raises the LockedPropertyChangeAttempt event (also declared in the ILockable interface). By doing so, the programmer can decide what should happen when someone / something attempts to change a locked property. This gives a bit more control to the programmer and is much more flexible than throwing an exception.

When the property is not locked, we let the setter-method execute.

With the creation of this aspect, our third test finally gives a green bar.

Compile time Validation

As I've said a bit earlier, I haven't checked in the OnInvocation method whether the Target really implements the ILockable interface before calling methods of the ILockable type.

The reason for this is quite simple: the OnMethodInvocationAspect class has a CompileTimeValidate method which you can override to add compile-time validation logic (hm, obvious).

I made use of this to check whether the types on which I've applied the Lockable attribute really are ILockable types:
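This listing is lost too; a sketch consistent with the workaround described below (walking the interface list manually instead of calling GetInterface):

```csharp
public override bool CompileTimeValidate( MethodBase method )
{
    // Calling method.DeclaringType.GetInterface ("ILockable") threw a
    // NotImplementedException at weaving time, so walk the interface
    // list manually instead.
    foreach( Type itf in method.DeclaringType.GetInterfaces () )
    {
        if( itf == typeof(ILockable) )
            return true;
    }

    // Returning false makes the weaver reject the attribute on this
    // type at compile time.
    return false;
}
```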

Note that it should be possible to make this code more concise, but I could not simply call method.DeclaringType.GetInterface("ILockable"), since that gave a NotImplementedException while compiling. Strange, but true.

Now, when I use the Lockable attribute on a type which is not ILockable, I'll get the following compiler errors:

Pretty neat, huh ?
Now, what's left is a way to persist the locks in a datastore, but that will be a story for some other time ...

maandag 28 juli 2008

NHibernate in a remoting / WCF scenario

I am thinking about how I could use NHibernate in a remoting scenario (using .NET Remoting, web services, WCF...), but I can already see some problems which I will likely encounter along the way.

This is how I see the big picture of the application:

Let me explain it in short:
The client application (a rich Windows client for instance) communicates via some kind of technique, be it WCF or the old .NET remoting, with the Service Layer.
This means that the client application calls a (remote) method on the Service Layer to retrieve a Customer for instance. The client can make some changes to that object and later, the client can call the remote 'SaveCustomer' method so that the Service Layer can persist the changes back to the datastore.
In order to do this, the Service Layer uses a Repository that uses NHibernate to retrieve or persist objects.
Note that the Client Application and the Remote service layer use the same Domain Entities. This means that the domain classes need to be [Serializable].

The problem that I will be facing is this:
- Since (N)Hibernate uses its ISession as a Unit of Work, which keeps track of the objects that have been created, deleted or inserted, the Client Application doesn't know whether it is necessary to perform a remote call to save the entity or not.
The client application doesn't know anything about something called an 'NHibernate Session', and my business object (entity) has no state tracking either. In other words: my entity itself doesn't know whether it has been created, changed or deleted.

- The remote method which will save my entity will use another ISession than the method that retrieved it. (Remote methods should be stateless, since multiple callers can call the same method. Client X should not know anything of client Y.)
The fact that the 'SaveCustomer' method will use another ISession means that it is possible that NHibernate will perform unnecessary UPDATE statements. This could be problematic if you use an AuditInterceptor, since this Interceptor will update the LastUpdated, Version, etc... columns in the DB while this was not necessary. In other words: this leads to wrong information in the database.

How could these problems be tackled:
- For the first problem, you could implement some kind of 'state tracking' in your entities, and add a property which tells you whether the entity has been modified, etc...

- Implementing state tracking in your domain entities may also solve the 2nd problem; in your repository you can check whether you have to Update or Save (for new entities) your entity. However, I don't know yet how this will behave in situations where an entity contains a collection of other entities...
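A minimal sketch of what such state tracking could look like (all names hypothetical):

```csharp
public enum EntityState
{
    New,
    Unchanged,
    Modified,
    Deleted
}

public abstract class TrackedEntity
{
    // New until the entity has been persisted once.
    public EntityState State { get; protected set; }

    protected TrackedEntity ()
    {
        State = EntityState.New;
    }

    // Called from every property setter.
    protected void MarkModified ()
    {
        if( State == EntityState.Unchanged )
            State = EntityState.Modified;
    }
}
```

The repository could then use State to decide between Save and Update, and to skip a remote call entirely when nothing changed.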

I'd like to know from other people how they have tackled these kinds of problems. Did you implement some kind of state tracking in your business entities?
Or did you choose not to expose your business entities to the client application, and use Data Transfer Objects instead? If so, how did you map these DTOs to your business classes?

zaterdag 5 juli 2008

NHibernate IInterceptor: an AuditInterceptor

As I was playing around with NHibernate today, I came across a rather inconvenient problem. :)

Let me first explain what I wanted to achieve:
For every domain object that I save, I want to persist in the database when the entity has been created, when it has been last updated and by whom. Nothing special, just regular audit-information.

To make this all possible, I've created the following classes / interfaces:

  • IAuditable interface

  • AuditableEntity class
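The original listings were images that are lost; based on the description in this post (a Created and an Updated date), they might have looked like this (the Entity base class name is an assumption):

```csharp
public interface IAuditable
{
    DateTime Created { get; set; }
    DateTime Updated { get; set; }
}

public abstract class AuditableEntity : Entity, IAuditable
{
    public DateTime Created { get; set; }
    public DateTime Updated { get; set; }
}
```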

I think this is pretty straightforward and doesn't require any further explanation.
Then, I continued with creating an NHibernate interceptor which would set the Created and Updated dates. (I could also have used the ILifecycle interface instead, but that would mean a dependency on the NHibernate assembly in my 'domain classes assembly', and I don't like that. In fact, the ILifecycle interface has been deprecated for exactly that reason.)

This is an extract from my AuditInterceptor which would perform the task I wanted (at least, I thought so...).
(Note that my AuditInterceptor is NOT in the same assembly where the IAuditable, AuditableEntity and other domain base classes reside. That would create a dependency from my base classes to NHibernate and, again, I hate this. :) )

The AuditInterceptor (snippet):
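The snippet itself is lost; the naive first version described here presumably looked something like this (assuming NHibernate's EmptyInterceptor convenience base class; as the rest of the post explains, this version does not work):

```csharp
public class AuditInterceptor : EmptyInterceptor
{
    // Called when an entity is saved for the first time (INSERT).
    public override bool OnSave( object entity, object id, object[] state,
                                 string[] propertyNames, IType[] types )
    {
        IAuditable auditable = entity as IAuditable;

        if( auditable != null )
        {
            // Setting the properties on the entity itself is NOT picked
            // up by NHibernate, which is why this version fails.
            auditable.Created = DateTime.Now;
            auditable.Updated = DateTime.Now;
        }

        return false;
    }

    // Called when an existing, dirty entity has to be updated.
    public override bool OnFlushDirty( object entity, object id,
                                       object[] currentState, object[] previousState,
                                       string[] propertyNames, IType[] types )
    {
        IAuditable auditable = entity as IAuditable;

        if( auditable != null )
        {
            auditable.Updated = DateTime.Now;
        }

        return false;
    }
}
```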

As you can see, it is very simple: I only had to implement 2 methods of the IInterceptor interface:

  • OnSave, which is called when an entity is saved for the first time in the database (INSERT)

  • OnFlushDirty, which is called when an existing entity is dirty and has to be updated
What I do, is check whether the entity that is to be saved implements the IAuditable interface, and if so, I just set the necessary properties (Created and Updated) to the appropriate values (the current DateTime).

Easy enough, simple, understandable and clean... If only this would work...
During testing, I got the following exception:

  ----> System.Data.SqlTypes.SqlTypeException : SqlDateTime overflow. 
Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
at NHibernate.Persister.Entity.AbstractEntityPersister.Insert(Object[] fields,
Boolean[] notNull, SqlCommandInfo sql, Object obj, ISessionImplementor session)

As it turns out, NHibernate doesn't 'see' the changes you make to the entity parameter that is passed to the Interceptor methods.

You can however, change the values that are in the state array parameter. Then NHibernate will correctly persist the changes.
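A sketch of a corrected OnSave, which modifies the state array instead (the property names are hard-coded here for brevity, which is exactly the weak typing the next paragraph complains about):

```csharp
public override bool OnSave( object entity, object id, object[] state,
                             string[] propertyNames, IType[] types )
{
    if( entity is IAuditable )
    {
        bool modified = false;

        for( int i = 0; i < propertyNames.Length; i++ )
        {
            // The state array, not the entity, is what NHibernate persists.
            if( propertyNames[i] == "Created" || propertyNames[i] == "Updated" )
            {
                state[i] = DateTime.Now;
                modified = true;
            }
        }

        // Returning true tells NHibernate that the state was modified.
        return modified;
    }

    return false;
}
```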

But, I do not like to 'hard-code' property names as strings for obvious reasons (if you change a property, the compiler will not detect that you should change your 'hardcoded property name string', etc...).

Anyway, in order to get my interceptor to work, I have no other choice than to mess around with the propertyNames[] and state[] parameters.
In order to get rid of the 'weak typing', I added a little bit more code.
So, now my classes look like this:

  • IAuditable interface

  • AuditableEntity class

  • AuditInterceptor

This solution is, IMHO, elegant enough to live with, and it works.

However, maybe someone else has a better, more elegant solution for this ? If so, I'd like to hear from you ...

dinsdag 1 juli 2008

NHibernate Session Management

I know that a lot has been written about this topic, but somehow, I haven't found the 'sweet spot' concerning NHibernate Session Management in WinForms applications yet.

Some time ago, I've created a simple abstraction around the NHibernate ISession which would make it easier to use the ISession in my Winforms application.

Why do I want to clutter my presentation layer with NHibernate stuff, you ask? Because Context is King.
The Repository has no notion of transactions, since the Repository doesn't know the context in where it's used.
Therefore, I like to start my Transaction in my WinForm app for instance, and pass the 'Transaction' to my repository, like this:
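The listing is lost; the usage presumably looked something like this (the repository and form names are hypothetical):

```csharp
// The presentation layer decides the transaction boundaries and
// hands the UnitOfWork to the repository.
using( UnitOfWork uow = new UnitOfWork () )
{
    uow.BeginTransaction ();

    try
    {
        orderRepository.Save (uow, order);
        uow.Commit ();
    }
    catch
    {
        uow.Rollback ();
        throw;
    }
}
```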

In the code above, the UnitOfWork class is just a simple wrapper around the NHibernate ISession which allows me to start and commit or rollback a transaction, disconnect the ISession from the Database, etc... with a minimum amount of code.

The UnitOfWork class looks like this:
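This listing is lost as well; a sketch consistent with the description (a thin wrapper around the NHibernate ISession; the static SessionFactory helper is an assumption):

```csharp
public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private ITransaction _transaction;

    public UnitOfWork ()
    {
        _session = SessionFactory.OpenSession ();
    }

    public ISession Session
    {
        get { return _session; }
    }

    public void BeginTransaction ()
    {
        _transaction = _session.BeginTransaction ();
    }

    public void Commit ()
    {
        _transaction.Commit ();
    }

    public void Rollback ()
    {
        _transaction.Rollback ();
    }

    public void Dispose ()
    {
        _session.Dispose ();
    }
}
```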

This approach also allows me to have multiple NHibernate ISessions opened in one application instance.
This approach also gives me full control about when to start a new UnitOfWork, and when to close a UnitOfWork.

I've been convinced that this was the way to go. Especially because I thought that you had to commit the changes you've made to an object using the same ISession that you used to retrieve the object, if you want to avoid unnecessary SELECT statements.

But, thanks to my colleague Thierry (who's starting to use NHibernate as well, and acted as some kind of catalyst so that I picked up my NHibernate quest again), it seems that my assumptions were not true:
I thought that, when you save an object to the datastore using another ISession than the ISession you used to retrieve the object, NHibernate would first perform a SELECT query in order to find out whether an INSERT or an UPDATE statement should be executed.
This turns out to be false, as long as you do not use the 'assigned' generator class for your Id property.

So, now I'm in doubt:

  • do I really need to be able to have concurrent ISessions in the same application instance? Until now, I haven't needed it (so, yes, that makes it a YAGNI in fact).

  • I haven't seen anyone on the net using a similar approach. I see that everyone uses some kind of 'SessionManager', like the one Billy McCafferty has written here, so this makes me doubt as well...

This last point is also the reason for this blogpost: I'm in doubt :)
Using some kind of 'SessionManager' class allows me to do the transaction demarcation wherever I want as well. Next to that, I also do not have to pass my UnitOfWork to the repository, since the repository has access to the current Session via the SessionManager as well...

I know that, maybe, I should just give it a try. However, I'd like to hear the experiences and thoughts of other people who are using (N)Hibernate in a Rich Client environment as well.
How are you dealing with those (session management) issues? What difficulties did you encounter?

Note: another post of me regarding this subject can be found here

assumptions are the mother of all fuckups.

maandag 30 juni 2008

New Layout

I've changed the layout of my weblog, I hope you like it.

If you have any remarks regarding the layout, if you don't find it readable, if you miss something, please let me know.

vrijdag 13 juni 2008

Setting Up Continuous Integration
Part II: configuring CruiseControl.NET

Now that we've created our buildscript in part I, it's time to set up the configuration file for CruiseControl.NET

The ccnet.config file and multiple project-configurations

The tasks that CruiseControl.NET should execute for your project, are configured in the ccnet.config file.
The ccnet.config file can contain multiple project configuration blocks. However, I like to have each project configuration in its own, separate file. In my opinion, this is much more manageable.

In order to put each project-configuration in its own XML file and import it in the ccnet.config file, you can make use of DTD entities to substitute constants with the contents of other XML files.
This is how I've done it:

<!DOCTYPE cruisecontrol [
    <!ENTITY project1 SYSTEM "file:D:\folder\project1_ccnet.xml.config">
    <!ENTITY project2 SYSTEM "file:D:\folder\project2_ccnet.xml.config">
]>
<cruisecontrol>
  &project1;
  &project2;
</cruisecontrol>
The above piece of code makes sure that the &project1; and &project2; 'placeholders' are replaced with the contents of the project1_ccnet.xml.config and project2_ccnet.xml.config files.

I just saw that CruiseControl.NET 1.4 has a new approach to accomplish this, however, I haven't tried it yet.

The CC.NET config file

The CC.NET config file is in fact very simple. You just have to put the Tasks that you've defined in your MSBuild file in the CC.NET config file.
Your CC.NET config file could look like this:
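The example configuration was an image that is lost; a minimal sketch of what such a project block might look like (server URLs, paths and names are placeholders):

```xml
<project name="Project1">
  <sourcecontrol type="svn">
    <trunkUrl>http://server/svn/project1/trunk</trunkUrl>
    <workingDirectory>D:\build\project1</workingDirectory>
  </sourcecontrol>
  <tasks>
    <msbuild>
      <!-- Use the .NET 3.5 MSBuild so VS.NET 2008 solution files are understood. -->
      <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
      <workingDirectory>D:\build\project1</workingDirectory>
      <projectFile>Project1.sln</projectFile>
      <targets>Build</targets>
      <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
    </msbuild>
  </tasks>
</project>
```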

The above configuration file is by all means not complete; I've kept it simple, and left out some tasks. However, you should have an idea :)

MSBuild doesn't support my sln file format

The reason why I specify which executable must be used by MSBuild is very simple:
My project is written in VS.NET 2008, but targets the .NET 2.0 framework. So, by default, CC.NET will use the MSBuild program that has been delivered with the .NET 2.0 framework.
This results in an error: MSBuild doesn't recognize the VS.NET 2008 solution file format, and will stop with this error:

Solution file error MSB5014: File format version is not recognized. MSBuild can only read solution files between versions 7.0 and 9.0, inclusive.

This is of course due to the fact that the MSBuild version that is used by VS.NET 2005 doesn't know anything about the solution file format that is used by VS.NET 2008.
You can solve this issue by specifying that CC.NET should use the MSBuild executable that can be found in the directory of the .NET 3.5 framework.

The MSBuild XmlLogger Issue

It is possible that CruiseControl.NET will not be able to execute your project, because CC.NET can't find an appropriate XmlLogger.
In this case, you'll find the following error in the CC.NET logfile:
Cannot create an instance of the logger. Could not load file or assembly 'ThoughtWorks.CruiseControl.MsBuild.dll' or one of its dependencies. The system cannot find the file specified.

You can solve this problem by placing the XmlLogger for MSBuild (ThoughtWorks.CruiseControl.MsBuild.dll) in your project working directory.