Tuesday, September 2, 2008

Google Chrome

The beta version of Google's Chrome browser is available for download. I've just installed it, and I like it.
I love their new 'start-page' concept, for instance.

They've also created a comic in which they explain the concepts and techniques behind Chrome. Very interesting as well. :)

Monday, August 25, 2008

On reading books ....

Davy Brion has made a statement on his blog in which he states that reading certain development books should be considered an investment for a software developer.

I fully support that statement.
I have a few books on my shelf that I consider 'my Software Development Bibles'. These books have - imho - sharpened my skills, broadened my view and helped me to become a better developer.
These books -which I consider to be my bibles- are, in no particular order:


There are lots of other books on software development on my shelf, but I consider the ones above as the ones that have influenced me most.
Books, you can't get enough of them (you still have to read them as well, of course). There are still some books regarding software development on my wishlist, and I'm sure that every now and then, another book will be added to it...

Tuesday, August 19, 2008

Locking system with aspect oriented programming

Intro

A few months ago, I had to implement a 'locking system' at work.
I will not elaborate too much on this system, but its intention is to let users prevent certain properties of certain entities from being updated automatically.
The software system in which I had to implement this functionality keeps a large database up-to-date by processing and importing lots of data files that we receive from external sources.
Because of that, in certain circumstances, users want to avoid that data they've manually changed or corrected gets overwritten with wrong information the next time a file is processed.

The application I'm talking about makes heavy use of DataSets, and I've been able to create a rather elegant solution for it.
At the same time, I've also been thinking about how I could solve this same problem in a system that is built around POCOs instead of DataSets, and that's what this post will be all about. :)

Enter Aspects

When the idea of implementing such a system first crossed my mind, I already realized that Aspect Oriented Programming could be very helpful to solve this problem.

A while ago, I already played with Aspect Oriented Programming using Spring.NET.
AOP was very nice and interesting, but I found the runtime weaving a big drawback. Making use of runtime weaving meant that you could not directly create an instance using its constructor.

So, instead of:

MyClass c = new MyClass();
you had to instantiate instances via a ProxyFactory:
ProxyFactory f = new ProxyFactory(new TestClass());

f.AddAdvice(new MethodInvocationLoggingAdvice());

ITest t = (ITest)f.GetProxy();

I am sure you'll agree that this is quite a hassle just to create a simple instance. (Yes, I know, of course you can abstract this away by making use of a Factory...).

Recently however, I bumped into an article on Patrick De Boeck's weblog, where he was talking about PostSharp.
PostSharp is an aspect weaver for .NET which weaves at compile-time!
This means that the drawback of runtime weaving that I just described has disappeared.
So, I no longer had any excuse not to start implementing a similar locking system for POCOs.

Bring it on

I like the idea of Test-Driven Development, so I started out by writing a first simple test:
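
Something like this (a sketch; I'm using NUnit, and the AuditablePerson class is introduced below):

[Test]
public void CanLockAProperty()
{
    AuditablePerson person = new AuditablePerson();

    person.Lock("Name");

    Assert.IsTrue(person.IsLocked("Name"));
}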

The advantage of writing your test first is that you start thinking about what the interface of your class should look like.

This first test tells us that our class should have a Lock and an IsLocked method.
The purpose of the Lock method is to put a 'lock' on a certain property, so that we can avoid that this property is modified at run-time.
The IsLocked method is there to inform us whether a property is locked or not.

To define this contract, I've created an interface ILockable which contains these 2 methods.
In order to get this first test working, I've created an abstract class LockableEntity which inherits from one of my base entity classes and implements this interface.
This LockableEntity class looks like this:
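
In sketch form (the Entity base class stands in for one of my own entity base classes):

using System;
using System.Collections.Generic;

public interface ILockable
{
    void Lock(string propertyName);
    bool IsLocked(string propertyName);
}

[Serializable]
public abstract class LockableEntity : Entity, ILockable
{
    // Names of the properties that are currently locked.
    private readonly List<string> lockedProperties = new List<string>();

    public void Lock(string propertyName)
    {
        if (!lockedProperties.Contains(propertyName))
        {
            lockedProperties.Add(propertyName);
        }
    }

    public bool IsLocked(string propertyName)
    {
        return lockedProperties.Contains(propertyName);
    }
}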


This is not sufficient to get a green bar on my first test, since I still need an AuditablePerson class:
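
A bare-bones version suffices for now:

[Serializable]
public class AuditablePerson : LockableEntity
{
    private string name;

    public virtual string Name
    {
        get { return name; }
        set { name = value; }
    }
}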

These pieces of code are sufficient to make my first test pass, so I continued with writing a second test:
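
A sketch of that second test:

[Test]
public void CanUnlockAProperty()
{
    AuditablePerson person = new AuditablePerson();
    person.Lock("Name");

    person.UnLock("Name");

    Assert.IsFalse(person.IsLocked("Name"));
}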

As you can see, in this test-case I define that it should be possible to unlock a property. Unlocking a property means that the value of that property can be modified by the user at runtime.
To implement this simple functionality, it was sufficient to just add an UnLock method to the LockableEntity class:
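
In the LockableEntity sketch from above, that amounts to:

public void UnLock(string propertyName)
{
    lockedProperties.Remove(propertyName);
}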


Simple, but now, a more challenging feature is coming up.

Now, we can already 'lock' and 'unlock' properties, but there is nothing that really prevents us from changing a locked property.
It's about time to tackle this problem, and therefore I've written the following test:
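
A sketch:

[Test]
public void LockedPropertyCannotBeChanged()
{
    AuditablePerson person = new AuditablePerson();
    person.Name = "original";

    person.Lock("Name");
    person.Name = "changed"; // this modification should be ignored

    Assert.AreEqual("original", person.Name);
}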

Running this test obviously gives a red bar, since we haven't implemented any logic yet.
The simplest way to implement this functionality would be to check, in the setter of the Name property, whether there exists a lock on this property or not.
If a lock exists, we should not change the value of the property, otherwise we allow the change.
I think that this is a fine opportunity to use aspects.

Creating the Lockable Aspect

As I've mentioned earlier, I have used PostSharp to create the aspects. Once you've downloaded and installed PostSharp, you can create an aspect rather easily.

There is plenty of documentation to be found on the PostSharp site, so I'm not going to elaborate here on the 'getting started' aspect (no pun intended).

Instead, I'll directly dive into the Lockable aspect that I've created.

This is what the definition of the class that defines the aspect looks like:
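
In sketch form (PostSharp requires the aspect class to be serializable, since aspect instances are serialized at compile time):

using System;
using PostSharp.Laos;

[Serializable]
public sealed class LockableAttribute : OnMethodInvocationAspect
{
    // The interesting parts, OnInvocation and CompileTimeValidate,
    // are shown further down.
}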

Perhaps I should first elaborate a bit on how I would like to use this Lockable aspect.

I'd like to be able to decorate the properties of a class that should be 'lockable' with an attribute. Like this:
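
Continuing the AuditablePerson sketch from earlier:

[Serializable]
public class AuditablePerson : LockableEntity
{
    private string name;

    [Lockable]
    public virtual string Name
    {
        get { return name; }
        set { name = value; }
    }
}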

Decorating a property with the Lockable attribute means that the user should be able to 'lock' this property; that is, prevent it from being changed after it has been locked.
To be able to implement this, I've created a class which inherits from the OnMethodInvocationAspect class (which eventually inherits from Attribute).

Why did I choose this class to inherit from?
Well, because there exists no OnPropertyInvocation class or anything of the sort.

As you probably know, the getters and setters of a property are actually implemented as get_ and set_ methods, so it is perfectly possible to use the OnMethodInvocationAspect class to add extra 'concerns' to the property.

This extra functionality is written in the OnInvocation method that I've overridden in the LockableAttribute class.

In fact, it does nothing more than checking whether we're in the setter method of the property, and if we are, checking whether there exists a lock on the property.
If there exists a lock, we won't allow the property-value to be changed. Otherwise, we just make sure that the implementation of the property itself is called.
The implementation looks like this:
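
A reconstruction, pieced together from the description below (the exact PostSharp Laos API details, such as how to let the original method execute, vary a bit between versions; OnAttemptToModifyLockedProperty and the LockedPropertyChangeAttempt event are assumed to have been added to the ILockable interface at this point):

public override void OnInvocation(MethodInvocationEventArgs eventArgs)
{
    // MethodInfo / PropertyInfo come from System.Reflection.
    MethodInfo method = eventArgs.Delegate.Method;

    // Only setters interest us; getters may always proceed.
    if (method.Name.StartsWith("set_"))
    {
        ILockable lockable = eventArgs.Delegate.Target as ILockable;
        PropertyInfo property = GetPropertyForSetterMethod(method);

        if (lockable != null && lockable.IsLocked(property.Name))
        {
            // The property is locked: raise the event instead of
            // executing the setter, and swallow the modification.
            lockable.OnAttemptToModifyLockedProperty(property.Name);
            return;
        }
    }

    // Not locked (or not a setter): execute the original implementation.
    eventArgs.Proceed();
}

private static PropertyInfo GetPropertyForSetterMethod(MethodInfo setterMethod)
{
    // "set_Name" -> "Name"
    string propertyName = setterMethod.Name.Substring("set_".Length);
    return setterMethod.DeclaringType.GetProperty(propertyName);
}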

Here, you can see that we use reflection to determine whether we're in the setter-method or in the getter-method of the property; we're only interested in whether this property is locked if we're about to change the value of the property.

Next, we need to get the name of the property for which we're entering the setter method. This is done via the GetPropertyForSetterMethod method which uses reflection as well to get the PropertyInfo object for the given setter-method.

Once this has been done, I can use the IsLocked method to check whether this property is locked or not.

Note that I haven't checked whether the conversion from eventArgs.Delegate.Target to ILockable has succeeded or not. More on that later ...

When the property is locked, I call the OnAttemptToModifyLockedProperty method (which is declared in ILockable), which just raises the LockedPropertyChangeAttempt event (also declared in the ILockable interface). By doing so, the programmer can decide what should happen when someone / something attempts to change a locked property. This gives a bit more control to the programmer and is much more flexible than throwing an exception.

When the property is not locked, we let the setter-method execute.

With the creation of this aspect, our third test finally gives a green bar.

Compile time Validation

As I've said a bit earlier, I haven't checked in the OnInvocation method whether the Target really implemented the ILockable interface before I called methods of the ILockable type.

The reason for this is quite simple: the OnMethodInvocationAspect class has a method CompileTimeValidate which you can override to add compile-time validation logic (hm, obvious).

I made use of this to check whether the types to which I've applied the Lockable attribute really are ILockable types:
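
In sketch form (how PostSharp surfaces the error differs between versions; throwing an exception from CompileTimeValidate is the bluntest way to fail the build):

public override bool CompileTimeValidate(MethodBase method)
{
    // Loop over the implemented interfaces; see the note below on
    // why GetInterface("ILockable") is not used here.
    foreach (Type implementedInterface in method.DeclaringType.GetInterfaces())
    {
        if (implementedInterface == typeof(ILockable))
        {
            return true;
        }
    }

    throw new Exception(string.Format(
        "The type {0} does not implement ILockable, so the Lockable attribute cannot be applied to {1}.",
        method.DeclaringType.FullName, method.Name));
}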


Note that it should be possible to make this code more concise, but I could not just call method.DeclaringType.GetInterface("ILockable"), since that gave a NotImplementedException while compiling. Strange, but true.

Now, when I use the Lockable attribute on a type which is not ILockable, I'll get the following compiler errors:

Pretty neat, huh ?
Now, what's left is a way to persist the locks in a datastore, but that will be a story for some other time ...

Monday, July 28, 2008

NHibernate in a remoting / WCF scenario

I am thinking about how I could use NHibernate in a remoting scenario (using .NET Remoting, web services, WCF ...), but I can already see some problems that I will likely encounter on my path.

This is how I see the big picture of the application:



Let me explain it in short:
The client application (a rich Windows client, for instance) communicates with the Service Layer via some kind of technique, be it WCF or the old .NET Remoting.
This means that the client application calls a (remote) method on the Service Layer to retrieve a Customer, for instance. The client can make some changes to that object, and later the client can call the remote 'SaveCustomer' method so that the Service Layer can persist the changes back to the datastore.
In order to do this, the Service Layer uses a Repository that uses NHibernate to retrieve or persist objects.
Note that the Client Application and the remote Service Layer use the same Domain Entities. This means that the domain classes need to be [Serializable].

The problems that I will be facing are these:
- Since (N)Hibernate uses its ISession as a Unit of Work, which keeps track of the objects that have been created, modified or deleted, the Client Application doesn't know whether it is necessary to perform a remote call to save the entity or not.
(The client application doesn't know anything about some thing called an 'NHibernate Session', and my business object (entity) has no state tracking either. In other words: my entity itself doesn't know whether it has been created, changed or deleted.)

- The remote method which will save my entity will use another ISession than the method that retrieved it. (Remote methods should be stateless, since multiple callers can call the same method. Client X should not know anything of client Y.)
The fact that the 'SaveCustomer' method will use another ISession means that it is possible that NHibernate will perform unnecessary UPDATE statements. This could be problematic if you use an AuditInterceptor, since this Interceptor will update the LastUpdated, Version, etc. columns in the DB while this was not necessary. In other words: this leads to wrong information in the database.

How could these problems be tackled:
- For the first problem, you could implement some kind of 'state tracking' in your entities, and add a property which tells you whether the entity has been modified, etc. (see the sketch after this list).

- Implementing state tracking in your domain entities may also solve the second problem: in your repository, you can check whether you have to Update or Save (for new entities) your entity. However, I don't know yet how this will behave in situations where an entity contains a collection of other entities ...
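
To make that first idea a bit more concrete, here is a minimal sketch of such state tracking (all names are illustrative):

public enum EntityState
{
    Unchanged,
    New,
    Modified,
    Deleted
}

[Serializable]
public abstract class TrackedEntity
{
    private EntityState state = EntityState.New;

    public EntityState State
    {
        get { return state; }
    }

    // To be called by the repository after a load or a save.
    public void MarkUnchanged()
    {
        state = EntityState.Unchanged;
    }

    // To be called from every property setter.
    protected void MarkModified()
    {
        if (state == EntityState.Unchanged)
        {
            state = EntityState.Modified;
        }
    }
}

The repository can then decide what to do: call Save when State is New, call Update when State is Modified, and skip the call altogether when the entity is Unchanged.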


I'd like to know from other people how they have tackled these kinds of problems. Did you implement some kind of state tracking in your business entities ?
Or did you choose not to expose your business entities to the client application, and use Data Transfer Objects instead ? If so, how did you map these DTOs to your business classes ?

Saturday, July 5, 2008

NHibernate IInterceptor: an AuditInterceptor

As I was playing around with NHibernate today, I came across a rather inconvenient problem. :)

Let me first explain what I wanted to achieve:
For every domain object that I save, I want to persist in the database when the entity was created, when it was last updated and by whom. Nothing special, just regular audit information.

To make this all possible, I've created the following classes / interfaces:

  • IAuditable interface
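
A minimal sketch (I've left the 'by whom' properties out of the sketch):

public interface IAuditable
{
    DateTime Created { get; set; }
    DateTime Updated { get; set; }
}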




  • AuditableEntity class
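
A minimal sketch:

[Serializable]
public abstract class AuditableEntity : IAuditable
{
    private DateTime created;
    private DateTime updated;

    public DateTime Created
    {
        get { return created; }
        set { created = value; }
    }

    public DateTime Updated
    {
        get { return updated; }
        set { updated = value; }
    }
}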



I think this is pretty straightforward and doesn't require any further explanation.
Then, I continued with creating an NHibernate interceptor which would set the Created and Updated dates. (I could also have used the ILifecycle interface instead, but this meant that I would have a dependency on the NHibernate assembly in my 'domain classes assembly', and I don't like that. In fact, the ILifecycle interface has been deprecated for exactly that reason.)

This is an extract from my AuditInterceptor which would perform the task I wanted (at least, I thought so ...).
(Note that my AuditInterceptor is NOT in the same assembly in which IAuditable, AuditableEntity and the other domain base classes reside. Putting it there would create a dependency from my base classes to NHibernate and again, I hate this :) ).

The AuditInterceptor (snippet):
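
A reconstruction in sketch form (the remaining IInterceptor members just had empty, pass-through implementations):

using System;
using NHibernate;
using NHibernate.Type;

public class AuditInterceptor : IInterceptor
{
    public bool OnSave(object entity, object id, object[] state,
                       string[] propertyNames, IType[] types)
    {
        IAuditable auditable = entity as IAuditable;
        if (auditable == null)
        {
            return false;
        }

        // Set the audit properties directly on the entity ...
        auditable.Created = DateTime.Now;
        auditable.Updated = DateTime.Now;
        return true;
    }

    public bool OnFlushDirty(object entity, object id, object[] currentState,
                             object[] previousState, string[] propertyNames,
                             IType[] types)
    {
        IAuditable auditable = entity as IAuditable;
        if (auditable == null)
        {
            return false;
        }

        auditable.Updated = DateTime.Now;
        return true;
    }

    // ... the other IInterceptor members are left out here ...
}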


As you can see, it is very simple: I only had to implement 2 methods of the IInterceptor interface:

  • OnSave, which is called when an entity is saved for the first time in the database (INSERT)

  • OnFlushDirty, which is called when an existing entity is dirty and has to be updated
What I do is check whether the entity that is to be saved implements the IAuditable interface; if so, I just set the necessary properties (Created and Updated) to the appropriate values (the current DateTime).

Easy enough, simple, understandable and clean... If only this would work...
During testing, I got the following exception:

  ----> System.Data.SqlTypes.SqlTypeException : SqlDateTime overflow. 
Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
at NHibernate.Persister.Entity.AbstractEntityPersister.Insert(Object[] fields,
Boolean[] notNull, SqlCommandInfo sql, Object obj, ISessionImplementor session)

As it turns out, NHibernate doesn't 'see' the changes you make to the entity parameter that is passed to the Interceptor methods:



You can, however, change the values that are in the state array parameter. Then NHibernate will correctly persist the changes.
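
For example, inside OnSave (a sketch):

for (int i = 0; i < propertyNames.Length; i++)
{
    if (propertyNames[i] == "Created" || propertyNames[i] == "Updated")
    {
        state[i] = DateTime.Now;   // NHibernate does pick this up
    }
}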

But, I do not like to 'hard-code' property names as strings, for obvious reasons (if you rename a property, the compiler will not detect that you should change your 'hardcoded property name string', etc...).

Anyway, in order to get my interceptor to work, I have no other choice than to mess around with the propertyNames[] and state[] parameters.
In order to get rid of the 'weak typing', I added a little bit more code.
So, now my classes look like this:

  • IAuditable interface



  • AuditableEntity class



  • AuditInterceptor
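
A sketch of the result: the property names now live in one single place (as constants), and a small helper locates them in the state array:

public class AuditInterceptor : IInterceptor
{
    public bool OnSave(object entity, object id, object[] state,
                       string[] propertyNames, IType[] types)
    {
        if (!(entity is IAuditable))
        {
            return false;
        }

        SetStateValue(state, propertyNames, AuditableEntity.CreatedPropertyName, DateTime.Now);
        SetStateValue(state, propertyNames, AuditableEntity.UpdatedPropertyName, DateTime.Now);
        return true;
    }

    public bool OnFlushDirty(object entity, object id, object[] currentState,
                             object[] previousState, string[] propertyNames,
                             IType[] types)
    {
        if (!(entity is IAuditable))
        {
            return false;
        }

        SetStateValue(currentState, propertyNames, AuditableEntity.UpdatedPropertyName, DateTime.Now);
        return true;
    }

    // Writes a value into the state array at the position of the given property.
    private static void SetStateValue(object[] state, string[] propertyNames,
                                      string propertyName, object value)
    {
        int index = Array.IndexOf(propertyNames, propertyName);
        if (index >= 0)
        {
            state[index] = value;
        }
    }

    // ... the other IInterceptor members are left out here ...
}

with, in AuditableEntity:

public const string CreatedPropertyName = "Created";
public const string UpdatedPropertyName = "Updated";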


This solution is, IMHO, elegant enough to live with, and it works.

However, maybe someone else has a better, more elegant solution for this ? If so, I'd like to hear from you ...

Tuesday, July 1, 2008

NHibernate Session Management

I know that a lot has been written about this topic, but somehow, I haven't found the 'sweet spot' concerning NHibernate Session Management in WinForms applications yet.

Some time ago, I created a simple abstraction around the NHibernate ISession which would make it easier to use the ISession in my WinForms application.

Why do I want to clutter my presentation layer with NHibernate stuff, you ask ? Because Context is King.
The Repository has no notion of transactions, since the Repository doesn't know the context in which it's used.
Therefore, I like to start my Transaction in my WinForms app, for instance, and pass the 'Transaction' to my repository, like this:
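
In sketch form (the repository and session factory names are illustrative):

using (UnitOfWork uow = new UnitOfWork(sessionFactory))
{
    uow.BeginTransaction();

    Customer customer = customerRepository.GetById(uow, customerId);
    customer.Name = "New name";
    customerRepository.Save(uow, customer);

    uow.Commit();
}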



In the code above, the UnitOfWork class is just a simple wrapper around the NHibernate ISession which allows me to start and commit or roll back a transaction, disconnect the ISession from the database, etc. with a minimum amount of code.

The UnitOfWork class looks like this:
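
In sketch form:

public class UnitOfWork : IDisposable
{
    private readonly ISession session;
    private ITransaction transaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        session = sessionFactory.OpenSession();
    }

    public ISession Session
    {
        get { return session; }
    }

    public void BeginTransaction()
    {
        transaction = session.BeginTransaction();
    }

    public void Commit()
    {
        transaction.Commit();
    }

    public void Rollback()
    {
        transaction.Rollback();
    }

    public void Disconnect()
    {
        session.Disconnect();
    }

    public void Dispose()
    {
        session.Dispose();
    }
}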

This approach allows me to have multiple NHibernate ISessions open in one application instance.
It also gives me full control over when to start a new UnitOfWork, and when to close one.

I was convinced that this was the way to go. Especially because I thought that you had to commit the changes you've made to an object using the same ISession that you used to retrieve the object, if you want to avoid unnecessary SELECT statements.

But, thanks to my colleague Thierry (who's starting to use NHibernate as well, and acted as some kind of catalyst for me to pick up my NHibernate quest again), it seems that my assumptions were not true:
I thought that, when you save an object to the datastore using another ISession than the ISession you used to retrieve the object, NHibernate would first perform a SELECT query in order to find out whether an INSERT or an UPDATE statement should be executed.
This turns out to be false, as long as you do not use the 'assigned' generator class for your Id property.

So, now I'm in doubt:


  • do I really need to be able to have concurrent ISessions in the same application instance ? Until now, I haven't needed it yet (so, yes, that makes it a YAGNI in fact).

  • I haven't seen anyone on the net using a similar approach. I see that everyone uses some kind of 'SessionManager' like the one Billy McCafferty has written here, so this makes me doubt as well ...

This last point is also the reason for this blogpost: I'm in doubt :)
Using some kind of 'SessionManager' class allows me to do the transaction demarcation wherever I want as well. Next to that, I also do not have to pass my UnitOfWork to the repository, since the repository has access to the current Session via the SessionManager as well ...

I know that, maybe, I should just give it a try. However, I'd like to hear experiences and thoughts of other people who are using (N)Hibernate in a Rich Client environment as well.
How are you dealing with those (session management) issues ? What difficulties did you encounter ?



Note: another post of mine regarding this subject can be found here

assumptions are the mother of all fuckups.

Monday, June 30, 2008

New Layout

I've changed the layout of my weblog, I hope you like it.

If you have any remarks regarding the layout, if you don't find it readable, if you miss something, please let me know.

Friday, June 13, 2008

Setting Up Continuous Integration
Part II: configuring CruiseControl.NET

Now that we've created our build script in Part I, it's time to set up the configuration file for CruiseControl.NET.


The ccnet.config file and multiple project-configurations

The tasks that CruiseControl.NET should execute for your project are configured in the ccnet.config file.
The ccnet.config file can contain multiple project configuration blocks. However, I like to have each project configuration in its own, separate file. In my opinion, this is much more manageable.

In order to put each project configuration in its own XML file and import it into the ccnet.config file, you can make use of DTD entities to substitute constants with the contents of other XML files.
This is how I've done it:

<!DOCTYPE cruisecontrol [
<!ENTITY project1 SYSTEM "file:D:\folder\project1_ccnet.xml.config">
<!ENTITY project2 SYSTEM "file:D:\folder\project2_ccnet.xml.config">
]>

<cruisecontrol>

&project1;
&project2;

</cruisecontrol>

The above piece of code makes sure that the &project1; and &project2; 'placeholders' are replaced with the contents of the project1_ccnet.xml.config and project2_ccnet.xml.config files.

I just saw that CruiseControl.NET 1.4 has a new approach to accomplish this, however, I haven't tried it yet.

The CC.NET config file

The CC.NET config file is in fact very simple. You just have to put the Tasks that you've defined in your MSBuild file in the CC.NET config file.
Your CC.NET config file could look like this:
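
A sketch of such a project configuration (the project name, paths and VSS settings are illustrative):

<project name="MyProject">

  <sourcecontrol type="vss" autoGetSource="false">
    <ssdir>\\buildserver\vss</ssdir>
    <project>$/MyProject</project>
    <username>builduser</username>
    <password>secret</password>
  </sourcecontrol>

  <tasks>
    <msbuild>
      <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
      <workingDirectory>D:\build\myproject</workingDirectory>
      <projectFile>myproject.msbuild</projectFile>
      <targets>clean;getlatest;buildall;nunit;fxcop</targets>
      <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
    </msbuild>
  </tasks>

  <publishers>
    <xmllogger />
  </publishers>

</project>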

The above configuration file is by no means complete; I've kept it simple, and left out some tasks. However, you should get the idea :)

MSBuild doesn't support my sln file format

The reason why I specify which MSBuild executable must be used is very simple:
my project is written in VS.NET 2008, but targets the .NET 2.0 framework. So, by default, CC.NET will use the MSBuild program that ships with the .NET 2.0 framework.
This results in an error: MSBuild doesn't recognize the VS.NET 2008 solution file format, and will stop with this error:

Solution file error MSB5014: File format version is not recognized. MSBuild can only read solution files between versions 7.0 and 9.0, inclusive.

This is of course due to the fact that the MSBuild that is used by VS.NET 2005 doesn't know anything about the solution file format that is used by VS.NET 2008.
You can solve this issue by specifying that CC.NET should use the MSBuild executable that can be found in the directory of the .NET 3.5 framework.

The MSBuild XmlLogger Issue

It is possible that CruiseControl.NET will not be able to execute your project, because CC.NET can't find an appropriate XmlLogger.
In this case, you'll find the following error in the CC.NET logfile:
Cannot create an instance of the logger. Could not load file or assembly 'ThoughtWorks.CruiseControl.MsBuild.dll' or one of its dependencies. The system cannot find the file specified.

You can solve this problem by placing the XmlLogger for MSBuild (ThoughtWorks.CruiseControl.MsBuild.dll, which you can find here) in your project working directory.

Saturday, June 7, 2008

Setting Up a Continuous Integration process using CruiseControl.NET and MSBuild.
Part I: creating the MSBuild build script

Intro

I've been struggling lately to get a new project that I've started at work under Continuous Integration.
Although I used CruiseControl.NET & NAnt in my previous project for CI purposes, things didn't go so smoothly this time ...

In my current project, I'm using Visual Studio.NET 2008 and targeting .NET 2.0.

Now, I wanted to use MSBuild for the build process and that’s where it all started.

I had to spend some time searching the Net in order to get everything working like I wanted. It seems that there's no single source of documentation for MSBuild & CC.NET which addresses all the problems that I've encountered.
So, the intention of this article is to help other people set up a CC.NET environment with MSBuild; it will also serve as a reference for me, so that I can come back to it when needed. :)

Requirements

What I wanted to achieve, is very simple:

I have a build machine where CruiseControl.NET is installed. This machine is already used for another project of mine, for which I'm using NAnt for the build process.

The new project that I've started is being developed in VS.NET 2008, targets the .NET 2.0 framework and is under source control via Visual SourceSafe.

I wanted to have a CI process that regularly looks in VSS and, when something has changed, performs the following tasks:


  • Make sure that the latest buildscript will be used

  • Clean the source directory

  • Get the latest version of the codebase out of Visual SourceSafe

  • Build the entire codebase

  • Execute the unit tests that I have using NUnit

  • Perform a static code analysis using FxCop


The MSBuild build-script

In order to automate all the steps above, I first needed a build script which I can execute using MSBuild.

Such a script is also handy when the application you’re building consists of numerous VS.NET solutions; instead of opening each solution separately in Visual Studio, compiling it, opening the next solution ..., you can build the entire codebase using a single command line.
This is quite handy and productive, I can tell you :)

For every ‘task’ (clean source directory, get latest, build codebase, etc… ) that I want to execute, I’ve created a Target in the build script.

A first little problem I encountered was that, out of the box, MSBuild doesn't contain any tasks that would allow you to get the latest version out of VSS, run NUnit unit tests or perform a code analysis with FxCop.
Fortunately, there exists an open source project called the 'MSBuild Community Tasks Project' which contains additional tasks that can be executed by MSBuild. This means that you don't need to write your own MSBuild Tasks.

Skeleton of the buildscript

Before creating the Targets, I’ve defined a few properties which I will use in all the tasks:
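
In sketch form (the directory values are illustrative):

<Project DefaultTargets="buildall"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <builddir>D:\build\myproject\source</builddir>
    <outputdir>D:\build\myproject\output</outputdir>
    <artifactsdir>D:\build\myproject\artifacts</artifactsdir>
    <buildmode>debug</buildmode>
  </PropertyGroup>

  <!-- Makes the MSBuild Community Tasks (VssGet, NUnit, FxCop, ...) available. -->
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <!-- Targets go here ... -->

</Project>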


I define the working directory (where my source can be found) as the builddir, and a directory where the assemblies that have been built should be placed (outputdir).
Next to that, I also have an artifacts directory where the results of the unit tests and code analysis will be put.

The Import element in the above code is necessary so that we can use the additional MSBuild Tasks that can be found in the MSBuild Community Tasks Project.

Now, we can start creating our 'Targets'.

Clean Target

I want to have the possibility to start from a 'clean sheet', so I really need a Target which just deletes everything that can be found in my builddir and outputdir.
This Target is very simple; you just have to make use of the Delete Task:
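
A sketch:

<Target Name="clean">
  <!-- Gather every file under the build and output directories ... -->
  <CreateItem Include="$(builddir)\**\*.*;$(outputdir)\**\*.*">
    <Output TaskParameter="Include" ItemName="FilesToDelete" />
  </CreateItem>
  <!-- ... and delete them. -->
  <Delete Files="@(FilesToDelete)" />
</Target>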

Getlatest Target

In order to get the latest version of the source out of SourceSafe, I’ve created the following step:
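
A sketch (the VSS database path and credentials are illustrative):

<Target Name="getlatest" DependsOnTargets="createdirs">
  <VssGet DatabasePath="\\buildserver\vss\srcsafe.ini"
          Path="$/MyProject"
          LocalPath="$(builddir)"
          UserName="builduser"
          Password="secret"
          Recursive="true" />
</Target>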



Here, I just make use of the VssGet Task that is part of the MSBuild Community Tasks project.
Also, notice that this Target depends on the createdirs Target; this means that, when you execute the getlatest Target, the createdirs Target will be executed first.

The createdirs Target looks like this:
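
In sketch form, using the standard MakeDir task:

<Target Name="createdirs">
  <MakeDir Directories="$(builddir);$(outputdir);$(artifactsdir)" />
</Target>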

BuildAll Target

This is the first target where I've had some issues, although its task is very trivial:
compile and build everything that can be found in the $(builddir), and make sure that the assemblies that have been built are placed in the $(outputdir).

It seemed very easy to do, since I found out that MSBuild.exe (which is the program I use to compile the code) had a property OutputDir. So, it would be fairly easy to set this property to the $(outputdir) variable.

Alas, to no avail: my assemblies were never copied to the output directory. Eventually, I discovered that there also exists an OutputPath property, so I tried that one instead, and it worked.
So, the buildall Target looks like this:
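
A sketch:

<Target Name="buildall" DependsOnTargets="getlatest">
  <MSBuild Projects="$(builddir)\MySolution.sln"
           Properties="Configuration=$(buildmode);OutputPath=$(outputdir)" />
</Target>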

Of course, you can put multiple solution files in the Projects attribute of the MSBuild Task.
You'll have to separate the .sln files with a semicolon.

NUnit Target

This Target is quite simple:
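
A sketch, using the NUnit task from the MSBuild Community Tasks:

<Target Name="nunit" DependsOnTargets="buildall">
  <NUnit Assemblies="$(outputdir)\MyProject.Tests.dll"
         OutputXmlFile="$(artifactsdir)\nunit-results.xml" />
</Target>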

With this Target, I run the NUnit tests that have been written in my test assembly. (I tend to give all my test assemblies names ending in .Tests.dll.)

The results of the unit-test run are placed in the artifacts directory as an XML file. This way, I can easily incorporate the test results in my CC.NET report (more on this later).

FxCop Target

This one was a bit cumbersome.
I started out writing this Target with the FxCop task that can be found in the MSBuild Community Tasks project; it looked like this:
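
A sketch:

<Target Name="fxcop" DependsOnTargets="buildall">
  <FxCop TargetAssemblies="$(outputdir)\MyProject.dll"
         AnalysisReportFileName="$(artifactsdir)\fxcop-results.xml"
         ApplyOutXsl="false" />
</Target>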


The reason why I do not apply the output XSL stylesheet is very simple: I want CC.NET to display the results on the Dashboard, so CC.NET should read the XML file and apply the XSL stylesheet itself.

Now, this Target just worked fine on my development box. However, on my 'build server' *ahem* (my previous dev workstation), I couldn't get it working.
On the build machine, I constantly kept getting errors.
Apparently, MSBuild was trying to locate FxCop in C:\Program Files\Microsoft FxCop 1.32, but I don't have this old version of FxCop installed;
I'm using FxCop 1.36 beta instead.

Therefore, I eventually opted to put the path where FxCop is installed in my %PATH% environment variable, and decided to use the Exec Task so that I could call the FxCopCmd tool:
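
A sketch (the FxCopCmd arguments are illustrative; the ItemGroup and the @(Args, ' ') separator are explained below):

<ItemGroup>
  <Args Include="/file:$(outputdir)\MyProject.dll" />
  <Args Include="/out:$(artifactsdir)\fxcop-results.xml" />
  <Args Include="/summary" />
</ItemGroup>

<Target Name="fxcop" DependsOnTargets="buildall">
  <Exec Command="FxCopCmd.exe @(Args, ' ')" />
</Target>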

In order to keep my build script a bit readable, I've created an ItemGroup in which I define all the command-line arguments that I want to pass to FxCopCmd.exe.

By default, the items that are defined in an ItemGroup will be concatenated with a semicolon. This is something I did not want, of course, since command-line arguments should be separated by a space.
It is easy to specify that the items should be separated by a space:

@(Args, ' ')

There's a little subtlety with the Exec command, however: it doesn't work well when you have a command-line argument that is a path containing a space. You should escape such paths with quotes, but I haven't succeeded in getting that to work with MSBuild yet ...


Executing Targets via MSBuild

Now that we've defined all the Targets, we need to see if they work, of course.
Executing a Target is fairly easy:

Just open up a VS.NET command prompt (or open a regular command prompt and make sure that the path to the MSBuild.exe utility is in your path environment variable), and navigate to the location where your msbuild build-script is located.

Then, you just execute MSBuild, make sure that your build script is used, and tell MSBuild which target it should execute. You can also override the default values of the properties (like $(outputdir)) that we've defined in our script.

For instance:

msbuild myproject.msbuild /t:buildall /p:outputdir=r:\myproject\release /p:buildmode=release

I think that this is enough text for today. I will soon post a subsequent article in which I'll explain how to use this script in CruiseControl.NET.

Sunday, April 20, 2008

using directives within namespaces

Sometimes, I come across code examples where the programmer puts his using directives within the namespace declaration, like this:

namespace MyNamespace
{
    using System;
    using System.Data;

    using SomeOtherNamespace;

    public class MyClass
    {
    }
}

I am used to putting my using directives outside the namespace block (which is no surprise, since VS.NET places them outside the namespace declaration by default when you create a new class):

using System;
using System.Data;

namespace MyNamespace
{
    public class MyClass
    {
    }
}

So, I'm wondering: what are the advantages of placing the using directives within the namespace declaration ?
I've googled a little bit, but I haven't found any clue as to why I should do it as well. Maybe you know a good reason, and can convince me to adapt my VS.NET templates ?

Wednesday, March 12, 2008

VS.NET 2008: Form designer not working on Windows Vista

I installed Visual Studio.NET 2008 on my Vista workstation at work. I was keen to work with it, but apparently, my workstation had some issues.

When I started VS.NET 2008 and created a new WinForms project, I received the following error when I wanted to open a Form in the designer:

The Service Microsoft.VisualStudio.Shell.Interop.ISelectionContainer already exists in the service container


I've searched a bit on the Internet, and it seems that I was not the only person having this problem.
However, nobody seemed to have a solution for it, and according to Microsoft, the problem was not reproducible ...

Eventually, I found a website where someone said you had to install SP1 for .NET 2.0 and SP1 for .NET 3.0.
Unfortunately, these service packs are not supported by Vista.

I was, however, able to install the updates KB110806 and KB929300, and installing these 2 updates fixed my problem.

Monday, January 28, 2008

Debugging the .NET framework

As I've written earlier, Visual Studio.NET 2008 makes it possible to debug code that can be found on a source server.
I think this can be very interesting and I can think of numerous situations where this can be very handy.
Suppose you work at a company that uses an in-house developed framework, and you're building an application that uses this framework.
If you experience some strange behaviour inside your application, you can debug your code and step into the code of the framework to see whether the company's framework has a bug.

I've read that Microsoft has set up a source server which contains the debug symbols of the .NET framework, so, as of this month, it is possible to step into the .NET framework source code as well!
Setting up VS.NET 2008 to enable this is very simple; you can find a step-by-step guide here.


The Mozilla team also has a symbol server. You can read more about it here

Tuesday, January 15, 2008

Cannot open log for source {0} on Windows 2003 Server

I am writing an application which uses some .NET remoting components that are hosted in IIS on a Windows 2003 Server.
When the remote component throws an exception, the exception information should be written to the EventLog on the Windows 2003 Server; however, Win2k3 seems to be a bit restrictive when a component that is hosted in IIS wants to write to the event log.
Although the component does not run under the IIS_WPG or ASPNET account (I am using Windows impersonation), I always received the following exception when the .NET remoting component wanted to write something to the event log:

Cannot open log for source {0}. You may not have write access. Access is denied
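
The write itself is nothing special, just a plain System.Diagnostics.EventLog call along these lines (the source name is illustrative):

if (!EventLog.SourceExists("MyRemoteComponent"))
{
    EventLog.CreateEventSource("MyRemoteComponent", "Application");
}

EventLog.WriteEntry("MyRemoteComponent", exception.ToString(),
                    EventLogEntryType.Error);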

You can get rid of this behaviour and make sure that the error is indeed written to the EventLog by following the steps below:

  • Open the registry on the Win2k3 server using regedit
  • Locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\Application key
  • Find the CustomSD value and append the following string to it: (A;;0x0002;;;AU)

Now, the (impersonated) remote component should have rights to write to the EventLog.

Now, what is the meaning of the string you've just added to the CustomSD value ?