Copy Source As Html for Visual Studio 2008

I have been "stuck" in Visual Studio 2005 for a while, but when we recently jumped to Visual Studio 2008, I missed a tool I’ve been using to blog my code. As the heading states, this is Copy Source As HTML. The original authors have not released an updated version, but Guy Burstein comes to the rescue with an updated add-in.

Reading the comments, there seem to be some differing "opinions" about where to put it, but I just made an Addin directory under Visual Studio 2008 in My Documents in Windows XP and extracted it there.

//And it works like a charm!


Norwegian Developer Conference 2008 Review


I arrived Tuesday at NDC2008 full of anticipation and excitement; there were a lot of great talks scheduled, and I had trouble choosing which ones to attend. I almost immediately ran into some old colleagues and classmates whom I hadn’t talked to in several years. That was a real added bonus, and I appreciated the little "reunions".

Day 1

Scott Hanselman started the show with a keynote, showing us a little LINQ and the new Dynamic Data bits. Hanselman was witty and a great presenter. A couple of things might have gone too fast if you hadn’t seen much .NET 3.5 before, but I guess most got at least a glimpse of what it can do.

After the keynote I was considering several sessions, but I decided to attend Mary Poppendieck’s first session, titled Thrashing. She went through the reasons for thrashing and what can be done to remedy it. As a reader of The Mythical Man-Month, Slack, Peopleware, and others, I found she conveyed a lot of the same information found there, and I really share their views. A new aspect I hadn’t thought of before was queuing theory, which we apply consciously to hardware and related problems, but seldom to team and people dynamics. I will write a follow-up post on the matter.

I’ve lately dabbled in some reflection, so next I attended Roy Osherove’s talk Deep Reflection, hoping it would be as deep as promised (a level 400 session). It certainly was, and I’m glad I’ve recently been looking at both Reflection.Emit and CodeDom programming; regular, extensive use of the vanilla reflection utilities was practically a prerequisite, and it seemed like a lot of eyes glazed over when the material was presented. He ended the session with a song, and I think he handled this heavy topic in a great way.

Roy was supposed to be doing a talk about agility at Typemock (the firm), so I gave his next session a chance. But the agenda had changed, and we were instead introduced to Designing for Testability. I had this part mostly under control, so I was a bit disappointed that the original talk was swapped out. It was an introduction to IoC, DI, and IoC containers, as well as our options when designing for testability with mocks or subclassing. This session ended in a song as well, and the lyrics were funny as always.

There was unfortunately another change in the agenda: Roy originally had a threading talk I would have liked to see, but it was changed to a session on testing your data access layer. With this change, I attended Mary Poppendieck’s talk on The Role of Leadership in Lean Software Development. Contrary to popular belief in most Agile circles, she thinks there is a place for leaders, not only self-organizing teams. I must admit this matches my own experience: when everyone is responsible, no one takes responsibility. I won’t go into more detail here, but I think it was a great talk, and she definitely hit home many points with me.

Day 2

I started out attending an Agile panel discussion hosted by Scott Hanselman, featuring Mary Poppendieck, Roy Osherove, Ken Schwaber, Chet Hendrickson, and Ron Jeffries. An example topic was what the first steps to becoming agile are. It wasn’t much of a discussion really, as all the panelists believe in the Agile values.

For the next two sessions I followed the Agile crowd in general, and Jeffries & Hendrickson in particular, in their first two talks about Natural Laws of Agile Software Development. They presented the same material I saw from Smidig (Agile) 2007 on the economics of releasing early. I think it shows the potential payoff of releasing early, but it misses some aspects of going too early into maintenance mode with the software; I think this has to be explored some more. After showing these teasers, they went more into how early and frequent releases can be done by baking quality into the process through TDD and acceptance tests.

While I was humming along with Ron & Chet, it seemed like Roy got quite a following; it was almost impossible to get a seat at his Advanced Unit Testing session. It really seems my fellow Norwegians are good and ready for some ALT.NET techniques and practices, especially unit testing. I eventually got a seat, but I must admit I was personally a little disappointed, as I had already been down most of those roads before. Hopefully it was another teaser for all those who are thinking of getting into the whole unit testing business.

Next up, I attended Mads Torgersen’s Microsoft LINQ Under the Covers: An In-Depth Look at LINQ. And under the covers it was indeed. He gave us a great peek into how a LINQ expression is disassembled, and showed us the output through Reflector. I must admit it was hard to follow everything, but I was at least familiar with all the constructs. All in all a mind-blowing experience, and Mads gets credit for his enthusiasm during the session.

Finally, I attended Mary Poppendieck’s session on The Discipline of Going Fast. We got new insights into the Toyota Way, a little bit of history, and specifically the Stop-the-Line practice. I will definitely continue this flirtation with the Lean methodologies.

Conclusion

I’m very pleased, and I was exhausted after two days packed with great content. My only complaint is that a couple of Roy’s talks should have been moved to larger rooms to accommodate the massive interest his topics attracted.

For another review from a fellow Norwegian blogger, look at Fredrik’s post. You may also see more pictures from Rune Grothaug, who did an amazing job arranging this as well.

I must thank the hosts for a great event, and I will come back next year!


Using the using-statement and pattern in C#

How often do you find yourself writing code like this to do some things in batch:

    BatchCalculator calc = new BatchCalculator();
    calc.Suspend();
    calc.CalculateSomething(something);
    calc.CalculateSomethingElse(something);
    calc.Resume();

Well, I do, and I’m not really happy about sprinkling those Suspends and Resumes around everywhere I need to start and stop something. I see at least two common pitfalls with this solution, and a minor hiccup:

  1. Somewhere down the pipe I’m bound to forget the explicit call to Resume, and I’ll have a bug on my hands.
  2. Somewhere in between the Suspend and Resume calls an exception is thrown, Resume is never reached, and the object is left in an unwanted state.
  3. It could be better looking!

The short using-introduction

With the introduction of .NET, its managed environment, and its non-deterministic garbage collector, several figureheads in the industry raised an eyebrow or two. Some raised more than their eyebrows, and according to legend and several .NET Rocks shows, Chris Sells (now a blue badge) was one of them. They allegedly made Microsoft include an IDisposable interface with a single method, Dispose(), to cover deterministic cleanup needs. And if that wasn’t enough, they included the using-statement, which is a try-finally in disguise where the finally automatically calls IDisposable.Dispose()!
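To make that concrete, here is roughly what the compiler turns a using-block into (a sketch; GetResource and Work are hypothetical stand-ins):

    // using (IDisposable resource = GetResource()) { Work(); }
    // is roughly equivalent to:
    IDisposable resource = GetResource();
    try
    {
        Work();
    }
    finally
    {
        if (resource != null)
            resource.Dispose(); // runs even if Work() throws
    }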

A couple of regulars from my world in that department are the IDbConnection interface and, from .NET 2.0 onwards, the TransactionScope class, but wrapping any implementer of the IDisposable interface in a using-statement has also been recommended practice.

Yeah yeah, but what can I do with it?

With the aforementioned background in place, we can exploit it to create a better and more fluent API for our batch-oriented processes. Let us simply dive into the code; I introduce, without further ado, the changed BatchCalculator:

public class BatchCalculator
{
    public IDisposable Suspend() { return null; } // real implementation follows below
    public void Resume() { }
    public void CalculateSomethingElse(object something) { }
    public void CalculateSomething(object something) { }
}

Suspend now returns an IDisposable, and we can change our calling code to this:

    BatchCalculator calc = new BatchCalculator();
    using(calc.Suspend())
    {
        calc.CalculateSomething(something);
        calc.CalculateSomethingElse(something);
    }

Yes! That’s more like it. I definitely like the looks of that.

But how... do I ensure a call to Resume?

This is where the "magic" happens. Let us make a class which implements IDisposable that gets returned from our Suspend method:

public class Suspender : IDisposable
{
    private readonly BatchCalculator m_calculator;
 
    public Suspender(BatchCalculator calculator)
    {
        m_calculator = calculator;
    }
 
    public void Dispose()
    {
        m_calculator.Resume();
    }
}

And our revised Suspend method:

public IDisposable Suspend()
{
    return new Suspender(this);
}

Now go look at the implemented Dispose-method in our Suspender-class. It just calls our Resume method on our BatchCalculator! So when the using-block is exited, the Dispose-method is called and hooray, mission accomplished.

Finishing touch

To increase the applicability of the Suspender-class I introduce the role interface IResumable:

public interface IResumable
{
    void Resume();
}

And implement it in BatchCalculator:

public class BatchCalculator : IResumable
{
    public IDisposable Suspend()
    {
        return new Suspender(this);
    }
    public void Resume(){}
    public void CalculateSomethingElse(object something){}
    public void CalculateSomething(object something){}
}

Now the Suspender class can just wrap our new interface:

public class Suspender : IDisposable
{
    private readonly IResumable m_resumable;
 
    public Suspender(IResumable resumable)
    {
        m_resumable = resumable;
    }
 
    public void Dispose()
    {
        m_resumable.Resume();
    }
}
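The payoff is that Suspender is no longer tied to BatchCalculator; any class with suspend/resume semantics can reuse it. A hypothetical example:

public class EventPublisher : IResumable
{
    // Stop raising events while a batch is running.
    public IDisposable Suspend()
    {
        return new Suspender(this);
    }

    // Called by Suspender.Dispose() when the using-block exits.
    public void Resume() { }
}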

Final note

If we revisit our weak spots, have we solved them all? A definite yes: using guarantees that the Dispose() method is called, which in turn calls our wrapped Resume method. I must also add I really like the syntactic sugar using represents.
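And to convince ourselves that pitfall #2 is really covered, here is a small sketch using the BatchCalculator from above: even when an exception escapes the block, Resume has already been called.

    BatchCalculator calc = new BatchCalculator();
    try
    {
        using (calc.Suspend())
        {
            calc.CalculateSomething(something);
            throw new InvalidOperationException("simulated failure");
        }
    }
    catch (InvalidOperationException)
    {
        // Suspender.Dispose() ran in the hidden finally-block,
        // so calc.Resume() was called before we got here.
    }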

This pattern is obviously at a tangent to what using and IDisposable were intended for. The MSDN Library has this to say about IDisposable:

The primary use of this interface is to release unmanaged resources.

But why not leverage what we have available? After all, code is written once. Reading it is another matter completely.


Looking forward to Norwegian Developer Conference 2008

I’m happy to announce I’m going to NDC2008 in Oslo. It seems like it’s going to be two days packed full of everything a developer could want; I wish I could do Haugern.Clone() a couple of times.

I haven’t quite decided yet what I will observe first-hand, but whatever I choose, I’m sure it’ll be interesting!


Get those Guards up

I often find code that has several nested conditionals, like this:

public void FunctionWithoutGuardClauses(Person person)
{
    if (person != null)
    {
        if (person.Phone != null)
        {
            person.Phone.Call();
        }
        else
        {
            throw new ArgumentNullException();
        }
    }
    else
    {
        throw new ArgumentNullException();
    }
}

According to Steve McConnell in Code Complete 2, putting the most probable clause first like this is just fine. But it is also an anti-pattern: arrow code. With even more nesting it gets worse, and it becomes nightmarish to follow the possible execution paths.

Oh, and see how I blatantly disregard the Law of Demeter!

Introducing Guard Clauses

A guard clause is a conditional that constitutes our first line of defence in our methods. It can break control flow in an elegant manner, so the only thing left to follow is the valid execution path.

There is also nothing wrong with more than one return path in our methods, as long as the method is as clean as it gets with guard clauses. Atwood seems to think so as well, so I’m really home safe here ;-p

So let’s see how the above code can be refactored with guard clauses:

public void FunctionWithGuardClauses(Person person)
{
    if (person == null || person.Phone == null)
        throw new ArgumentNullException();

    person.Phone.Call();
}
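As a side note, you might want to split the guard so the exception tells you which part was missing; a variant sketch:

public void FunctionWithGuardClauses(Person person)
{
    if (person == null)
        throw new ArgumentNullException("person");
    if (person.Phone == null)
        throw new ArgumentNullException("person", "Phone must be set");

    person.Phone.Call();
}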

So what are you waiting for? Get those guards up and tear those nested nightmares apart!


Music with capital M

Having grown up with the C64 and the SID chip, I get a lot of good feelings when those small tunes reach my ears. I also have fond memories of my first PC, an imported Gravis UltraSound, and the demoscene of the early ’90s.

So when a coworker introduced me to Slay Radio, I just had to spread the love. I have previously been an avid listener to Nectarine Demoscene Radio, so now I suddenly have a luxurious problem on my hands; to which do I lend my ears?

Well, enough of my hard choices. Get ready for endless hours of nostalgic musical pearls and good feelings all around, and tune in to either Slay Radio or Nectarine now!


My take on Software 2008, the agile methods in practice seminar

I’ve just returned from an event hosted by The Norwegian Computer Society, called Software 2008. I attended the “Agile methods in practice – What is it and how to do it?” full-day seminar. (NB: The last two links are only available in Norwegian!)

Here’s a short review of the day.

First quarter

The first talk was given by Aslak Hellesøy. He gave a general and a bit historical view of the agile methodologies. He spoke well, and had good anecdotes in his speech. I liked in particular the little story about how cargo cults came about, and how we see them all over the place today in the software industry.

The second talk was given by Trond Pedersen and Nils Christian Haugen. They had an original take on their speech; it wasn’t even a speech, it was a role play. They guided us through some typical scenarios in a software development project and showed the agile aspects of it. I think it was a bit contrived, but I think they got across some good points.

Second quarter

The “long” sessions now behind us, we were given a series of lightning talks. The topics ranged from “GUI prototypes are evil” to “how to delete production code”. The two talks I enjoyed the most were the one from Kaare Nilsen, and the one from Trond Wingård; the first because of his charismatic appearance, and the second because of its great topic on present-value effects of incremental delivery in agile projects. This could be a real eye-opener to just about anyone.

Third quarter

The seminar crowd paired up and interviewed each other to form mind maps. The topic was how we could introduce agile methods in our own organizations.

After a slow start with some standard awkwardness we managed to get some drawings and text down on paper. It was an ok exercise, but I’m not really sure I have any real use for it.

Fourth quarter

Here most of us were introduced to a concept called “Open Space”. I had heard of it before, but had not participated in such an endeavour. A range of topics from the day’s agenda was thrown on a flip chart, and we sat in on whichever topics we found most interesting. You could have an “engaged foot”, an “interested foot”, or both in any given topic, and if you found yourself twiddling your thumbs, you just wandered off to another space to see if you would get any “feet” there.

I sat in on a topic about which agile practices do and don’t work, and I think we had some interesting discussions and got to poke a bit at the material. The session was a bit on the short side, but I certainly grew fond of the concept.

Summary

All in all, it was a good day, and I got some good input which I will take with me to my own organisation.

I liked the discussions and mingling the most, and I’m definitely going to engage more in such events.


Remember to check out PartCover

Bil Simser just wrote about the lack of coverage tools for .NET in this article, and I agree with his points.

In our current project we’re using NCover and NCoverExplorer (older editions), which are working just fine as we’re still in the .NET 2.0 world. It seems there are some problems between the older versions (they have gone commercial on us) and the new .NET 3.5 framework.

This isn’t particularly good news, as we’re going to adopt VS 2008 and .NET 3.5 shortly. The choice is whether to fork out 150 USD or find an alternative. One of the comments on Bil’s post linked to another coverage utility: PartCover.

So, note to self: Check out PartCover when we’re going .NET 3.5.


SQL Server CE up and running with NHibernate & NAnt

I’ve been working to get SQL CE to work with our application lately. This is a great step towards a more painless desktop deployment model for us.

It turned out to be a multi-step process, and the first step was discovering how to create a SQL CE file.

Creating the SQL Server CE .sdf file

There are two alternatives as I see it:

  1. Create an empty database with, e.g., SQL Enterprise Manager, and copy it for each new deployment
  2. Create it programmatically at runtime.

We already have a solution folder and a setup for sql-script templates, so we went with #1.

Using NHibernate hbm2ddl NAnt task for the schema

NHibernate has built-in support for SQL Server CE, and what you have to do is the following (a sketch follows the list):

  1. Set the connection.connection_string property.
  2. Set the connection.driver_class property.
  3. Set the dialect property.
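In code, those three settings would look something like this (a sketch; the .sdf file name is hypothetical, and the driver and dialect values are the ones we end up using further down):

using NHibernate.Cfg;

Configuration cfg = new Configuration();
// 1. Point the connection string at the .sdf file
cfg.SetProperty("connection.connection_string", "Data Source=MyDatabase.sdf");
// 2. Use the SQL CE driver
cfg.SetProperty("connection.driver_class", "NHibernate.Driver.SqlServerCeDriver");
// 3. Use the SQL CE dialect
cfg.SetProperty("dialect", "NHibernate.Dialect.MsSqlCeDialect");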

Our current NAnt build file has supported SQL Express 2005 and SQL Developer Edition 2005, and our schema target has been doing a great job at creating the proper schema. The hbm2ddl task has been called with the following attributes:

<hbm2ddl
   connectionstring="${sql.nhibernate.connection}"
   droponly="false"
   exportonly="true">
   <assemblies>
      <include name="${directory.build}\ISY.Domain.dll"></include>
      <include name="${directory.build}\ISY.Infrastructure.dll"></include>
      <include name="${directory.build}\ISY.Repository.dll"></include>
   </assemblies>
</hbm2ddl>

So I go ahead and set the connectionstring attribute with the correct settings. And it fails miserably:

NHibernate.HibernateException: An error has occurred while establishing a connection to the server.  When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) ---> System.Data.SqlClient.SqlException: An error has occurred while establishing a connection to the server.  When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

OK, that’s not good; it seems it cannot connect to the server. Checking the ins and outs of my sql.nhibernate.connection property, I find that it points to the right file. After some investigation I find out that I missed #2 and #3: setting the correct dialect and driver_class. The hbm2ddl task has default values for dialect and driver_class, which work great with our two previously supported databases. So I set them both, like this:

<hbm2ddl
  connectionstring="${sql.nhibernate.connection}"
  dialect="${sql.dialect}"
  connectiondriverclass="${sql.driver}"
  droponly="false"
  exportonly="true">
  <assemblies>
    <include name="${directory.build}\ISY.Domain.dll"></include>
    <include name="${directory.build}\ISY.Infrastructure.dll"></include>
    <include name="${directory.build}\ISY.Repository.dll"></include>
  </assemblies>
</hbm2ddl>

Where:

<property name="sql.driver" value="NHibernate.Driver.SqlServerCeDriver" />
<property name="sql.dialect" value="NHibernate.Dialect.MsSqlCeDialect"/>

And we’re there with the hbm2ddl.

PS! Don’t forget to copy the SQL CE DLL into your NAnt folder, or else it can’t find the right driver with the above settings (an “easy to understand” error message will tell you).

Populate the database with default data

After the schema is in place, we populate some tables with default system data from sql script files. For this we use the NAntContrib sql task like this:

<sql
  connstring="${sql.connectionstring}"
  delimiter="GO"
  delimstyle="Line"
  source="${target}"
  transaction="${sql.usetransactions}">
</sql>

First, the sql.connectionstring needs to reflect that I am talking to a SQL CE database, and ConnectionStrings.com comes to the rescue once more.

The first obstacle when running the first sql script file is this error:

Error while executing SQL statement.
    There was an error parsing the query. [Token line number,Token line offset,,Token in error,,]

Yes, very enlightening indeed! (Note to self: remember to write good error messages!) In my search for answers, this seems to be the “be-all, know-it-all” error message.

Finally, it dawned on me: SQL CE doesn’t support stored procedures, so why would it understand a script that could practically be a stored proc? Luckily, all our default data is inserted with “ordinary” statements, and the sql task has an attribute called batch which defaults to true. With SQL CE you need to set it to false, and you’re good to go with this setup (where the sql.batch property depends on which database is being used):

<sql
  connstring="${sql.connectionstring}"
  delimiter="GO"
  delimstyle="Line"
  source="${target}"
  batch="${sql.batch}"
  transaction="${sql.usetransactions}">
</sql>
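One way the sql.batch property could be set per database in the NAnt build file (a sketch; the property names and the if-condition are our own convention):

<property name="sql.batch" value="true" />
<!-- SQL CE cannot parse multi-statement batches, so turn batching off -->
<property name="sql.batch" value="false"
          if="${sql.dialect == 'NHibernate.Dialect.MsSqlCeDialect'}" />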


Conclusion and further work

All in all, the effort to investigate and add SQL CE to our array of supported databases is well worth it. It will make one-off deployments a lot easier with no extra installation, and it is a champ when it comes to our database unit testing and integration testing.

What’s still missing is a script which populates our database with test data; the existing script is not easily split into single statements. We’ll dig into it, and I’ll explain it all in a later post!


Automatic versioning with CruiseControl.Net and NAnt

I’ve created a setup similar to the one Donald Belcham (a.k.a. igloocoder) mentions in this post for our current project at NOIS.

When I say similar, we’re using the exact same tools for the job, NAnt (the asminfo task) and CC.NET, but there are a couple of differences.

New labeller for CC.NET

The first, and minor, difference is the labeller. I wanted the versions to work almost like the NAntContrib version task, with the build and revision numbers based on day of year and time of day respectively. Each part of a version number is a 16-bit value (max 65535), so the traditional ymmdd format for the build number effectively broke at the beginning of ’07 (70101 is too large). I therefore settled for a two-digit-year plus day-of-year algorithm, which won’t break until the year 2066. The major and minor numbers are set manually.

I couldn’t find such a labeller out there, and as I wanted the same number in the CC.NET build number, in the assemblies, and in the source control system as well (I’ll get back to that), CC.NET needed to be the master. So I created my own labeller for CC.NET, which gives me the build-numbering scheme I wanted.

With the extensibility points in CC.NET it was great fun, and really easy to deploy my own. Here’s the whole thing:

using System;
using Exortech.NetReflector;
using ThoughtWorks.CruiseControl.Core;

namespace Haugern.Util.CCNet
{
    [ReflectorType("versionLabeller")]
    public class VersionLabeller : ILabeller
    {
        [ReflectorProperty("major", Required = true)]
        public int Major;

        [ReflectorProperty("minor", Required = true)]
        public int Minor;

        public string Generate(IIntegrationResult integrationResult)
        {
            if (integrationResult == null)
                throw new ArgumentNullException("integrationResult");

            DateTime now = DateTime.Now;
            return Major + "." + Minor + "." + ComputeBuild(now) + "." + ComputeRelease(now);
        }

        // Revision: seconds elapsed today, divided by ten.
        private static string ComputeRelease(DateTime now)
        {
            return Math.Round(now.TimeOfDay.TotalSeconds / 10).ToString();
        }

        // Build: two-digit year followed by three-digit day of year,
        // e.g. "08164" for 12 June 2008. Fits in 16 bits until 2066.
        private static string ComputeBuild(DateTime now)
        {
            return now.ToString("yy") + now.DayOfYear.ToString("000");
        }

        public void Run(IIntegrationResult result)
        {
            result.Label = Generate(result);
        }
    }
}
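Wiring the labeller into ccnet.config is then a matter of referencing the reflector type (a sketch; the major and minor values are made up):

<labeller type="versionLabeller">
  <major>2</major>
  <minor>1</minor>
</labeller>

With that in place, a build on 12 June 2008 (day 164 of the year) at 14:30:00 (52,200 seconds into the day) would be labelled 2.1.08164.5220.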

Consistent Build Number

The second difference (well, in his defence, it’s not even mentioned) is that the exact same label is written back into the source control system. In our case we’re using TFS Source Control, and with the standard plugin it was easy to apply the label there as well.

Now we’ve got this great setup where we can track everything that happens back to the build number, as it is the same across the whole environment:

Application <-> Build server <-> Source Control

Consistent build number across the production line also means every build is a candidate for release.

Should some bug find its way through our thorough unit testing and QA scheme into the hands of our customers, finding the right version to search for it will not be a problem!
