Showing posts with label C#.

2009-02-06

Silent errors

I'm working on an app which uses a 3rd party library for producing SWF and FLV files. For some reason the trial worked perfectly but when I switched my app to the full version there was no audio output.

We'd been looking at this problem for a while, emailing support etc, but just couldn't see what was wrong. It wasn't until I went back to my proof of concept app and ran it that we realised the full version did produce audio, it was just my main app that wouldn't work properly. Then I spotted the error...

var compressor = new TVE4();
compressor.LoadSettings(SettingsPath);
compressor.SetOutputFile(outputFileName);
compressor.EncodeSequenceAudio(Composition.EffectiveProductionAudioFileName);
compressor.Key1 = 12345;
compressor.Key2 = 54321;
(loop to encode frames)


Do you see the error? It was only as I switched between the proof of concept code and my app code in the IDE that I noticed the two following lines moving up and down...

compressor.Key1 = 12345;
compressor.Key2 = 54321;
Once I spotted it the problem was obvious! If I don't set my license key before encoding anything (including audio) it is not going to work. Moving the key assignments up a couple of lines fixed the problem. It was a simple absent-minded mistake to have made, but why did it take over a day to solve?

There were so many factors involved. We aren't using the "full edition" of the tool but a feature-restricted version, so for a while we thought it might be that. Then we thought it might be our settings files. Then their support department kept talking about missing codecs (which, to be honest, made no sense). Then there was the fact that in the app it runs in a thread. All sorts of variables seemed much more likely than a simple two-line coding error.

The thing is, the EncodeSequenceAudio method returns a bool to indicate success or failure. I hadn't even spotted this. I am so accustomed to errors being raised as exceptions that I didn't think to expect a boolean return value. On top of that, the examples that ship with the product don't check the result either.
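
For what it's worth, this is how the corrected ordering might look with the return value actually checked; the TVE4 calls are the ones from the snippet above, while the exception on failure is my own addition rather than anything the vendor's examples do.

var compressor = new TVE4();
compressor.LoadSettings(SettingsPath);
compressor.SetOutputFile(outputFileName);
compressor.Key1 = 12345; //license key set before anything is encoded
compressor.Key2 = 54321;
if (!compressor.EncodeSequenceAudio(Composition.EffectiveProductionAudioFileName))
  throw new InvalidOperationException("Audio encoding failed"); //surface the silent failure
(loop to encode frames)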

2009-01-28

Perceived speed

I'm writing an app which basically performs the following steps in a wizard-like interface:
  1. Select an audio file of someone speaking and load in a text script.
  2. Process the audio file and the script; this can take about 5 seconds for a minute of audio.
  3. Select a character to use in the animation.
  4. Select some additional graphics and scene settings.
  5. Use the data generated in step #2.
Sitting there for 5 seconds wasn't really a problem, not to process a 1 minute audio file. A longer audio file would take a little longer, but 10-15 seconds is okay, isn't it? Well, what if I could make it take no time at all?

Obviously I can't, but I can make it look like it takes no time at all. The fact is that I don't need the processed data until Step 5. The user will probably spend at least 30 seconds twiddling settings in each Step 3 and 4. How much processing power does a bit of GUI interaction take? Hardly any at all!

So I made Step 2 run in a separate thread. As soon as the user clicks "Next" on Step 1 they instantly see Step 3. They spend some time there and move on to Step 4. By this point the processing is probably already complete; if it isn't, I simply disable Step 4's "Next" button and show a flashing label at the top of the form: "* Processing Lip Sync". A 0.5 second timer keeps checking for the completed flag to be set; when it is, the label disappears and the button is enabled.
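
A minimal sketch of the approach, assuming a ProcessLipSync method that does the step 2 work, a completed flag, and a WinForms timer on the step 4 form; all of the names here are placeholders rather than the real app's code.

private volatile bool lipSyncComplete;

private void StartLipSyncProcessing()
{
  lipSyncComplete = false;
  ThreadPool.QueueUserWorkItem(delegate
  {
    ProcessLipSync();       //the slow step 2 work
    lipSyncComplete = true; //flag polled by the timer below
  });
}

//0.5 second timer on the step 4 form
private void checkLipSyncTimer_Tick(object sender, EventArgs e)
{
  if (!lipSyncComplete)
    return;
  processingLabel.Visible = false; //the "* Processing Lip Sync" label
  nextButton.Enabled = true;
  checkLipSyncTimer.Enabled = false;
}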

Unless the user is rushing through the steps they are unlikely to get held up at all. The processing takes just as long as before (maybe a tad longer), but from the user's perspective it takes no time at all.

2009-01-27

Data Transfer Objects

My observations on data transfer objects

  1. They should not be a one to one representation of your domain classes.
    Sometimes you want a subset of the information of a single instance, sometimes your DTO will collect data from various instances (starting with an aggregate root). You might create different types of DTO from the same domain classes based on what the user needs to do. Sometimes the user only wants OrderNumber + DateRaised + TotalValue (to show as a list of orders), sometimes they want the entire order details (including lines) for editing.

  2. The domain classes should have no knowledge of your DTO classes.
    So you shouldn't have any methods such as

    public PersonDto CreateDto();
    public void UpdateFromDto(PersonDto personDto);

    DTOs are not part of the business domain, so you should never do this!

  3. The DTO you send out might not be the same type as you get back.
    For example, you don't want the user to edit the order number or the date raised. If there are only a couple of properties like this you might opt to use the same DTO class and just ignore those properties when updating the domain instances, or you might decide on Request/Response DTOs.

  4. The DTOs should have no knowledge of the domain classes.
    This is because the DTO classes will be shared between the client and the server, and the client will have no domain class knowledge. If you have domain classes on the client then you probably don't need DTOs.

  5. There is only one way to update domain instances from a specific DTO.
    A DTO doesn't always update domain classes, but when you receive a specific kind of DTO and need it to update domain instances it will always update in the same fashion.
So, how do you create DTOs from domain instances and how do you update domain instances from DTOs? The first thing to be aware of is that this code belongs in a layer above the domain layer, in a services layer or the app layer itself. Then we need some way of saying

  "I want to convert this Order to an OrderSummaryDto"

or

  "I want to convert this Order to an OrderDto, which includes its order lines"

and then finally

  "I have received a DTO, I need to update the relevant domain instances"

We need to do this in a reusable way, because the same DTOs may be reused in various parts of the application layer. To achieve this I used the Dependency Injection Container from Microsoft named "Unity".

Naming conventions I used are
  • xxxDtoFactory - Creates a DTO of a specific type from a specific domain instance.
  • xxxDtoTranslater - Takes a DTO of a specific type and translates its values back into the domain instances (updating existing instances, creating new instances, or whatever is required.)
The example model has a class named "Template" which has only a "string Name" property. It is an aggregate root so it owns multiple TemplateProperty instances. TemplateProperty is an abstract class with two concrete descendants; TemplateStringProperty and TemplateBooleanProperty.

When I create a DTO I want a TemplateDto and added to its Properties I want TemplateStringPropertyDto and TemplateBooleanPropertyDto. When I translate TemplateDto I want to update the existing object.
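
Before the registrations, here is roughly what the factory side might look like. Only IDtoFactory&lt;Template, TemplateDto&gt; and its CreateDto method are implied by the code below; the DTO members, the Properties collections and the property mapping are my own assumptions for illustration.

public class TemplateDtoFactory : IDtoFactory<Template, TemplateDto>
{
  public TemplateDto CreateDto(Template template)
  {
    var dto = new TemplateDto { Name = template.Name };
    foreach (TemplateProperty property in template.Properties)
    {
      //Map each concrete property type to its corresponding DTO type
      if (property is TemplateStringProperty)
        dto.Properties.Add(new TemplateStringPropertyDto { Name = property.Name, Value = ((TemplateStringProperty)property).Value });
      else if (property is TemplateBooleanProperty)
        dto.Properties.Add(new TemplateBooleanPropertyDto { Name = property.Name, Value = ((TemplateBooleanProperty)property).Value });
    }
    return dto;
  }
}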

To register a factory
container.RegisterType<IDtoFactory<Template, TemplateDto>, TemplateDtoFactory>(
    new ContainerControlledLifetimeManager()
    );


The structure is IDtoFactory<TDomainClass, TDataTransferObject> to identify the domain class and desired DTO class, followed by the type that implements the interface to perform this DTO creation. This code registers a factory that takes a Template domain class and creates a TemplateDto data transfer object.

To register a translater
container.RegisterType<IDtoTranslater<TemplateDto>, TemplateDtoTranslater>(
    new ContainerControlledLifetimeManager()
    );


The structure is simpler. A DTO can only be translated in one way, so the generic IDtoTranslater interface only requires a single type, TDataTransferObject. This code registers a translater that takes a data transfer object and maps its values back onto the domain.
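
The interfaces themselves aren't shown in this post, but based on the registrations above and the usage below they would look roughly like this; the parameter types on UpdateBusinessObjects are my guesses, since the example call further down only passes null for the first two arguments.

public interface IDtoFactory<TDomainClass, TDataTransferObject>
{
  TDataTransferObject CreateDto(TDomainClass source);
}

public interface IDtoTranslater<TDataTransferObject>
{
  //First parameter: the object space / unit of work the domain instances live in.
  //Second parameter: an additional context object, for example the aggregate root.
  void UpdateBusinessObjects(object objectSpace, object context, TDataTransferObject dto);
}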

Once registered the services are used like so:
//Create a template
Template template1 = new Template();

//Create a DTO from the template
TemplateDto template1Dto = container.Resolve<IDtoFactory<Template, TemplateDto>>().CreateDto(template1);

//Update the template from the DTO
container.Resolve<IDtoTranslater<TemplateDto>>().UpdateBusinessObjects(null, null, template1Dto);


Note that the "container.Resolve" code wouldn't really be there, you would use dependency injection. I have used it in this example to avoid turning it into a dependency-injection-container blog :-)

The "null, null" here are
  • The object space that the app layer has created and is using for your domain instances to work inside. Also known as your unit of work, transaction, or other.
  • An addition context. For example when updating a Template the TemplateDtoTranslater will resolve services to translate each PropertyDto it encounters, and pass the Template as the context so that we know which aggregate root we are working with.
For now I will link to the full source code for the example. If I get time I will blog some more explaining how the code works.

2009-01-20

How often should I test?

I am lazy.  I don't mean that I don't work, I mean that I like to get my work done with as little effort as possible.  Writing tests before my code used to look like too much extra work, but I've realised just how much time they actually save me.

When you make a small change to something it's very easy to think to yourself "That's such a small change, I can't see how it can possibly fail", what I have also realised is this really means "Despite this being a small change, it will fail and I can't possibly see how".

I recently observed a change to some code that introduced a simple if statement, and all existing tests passed.  The problem is that the existing tests only checked the expected behaviour worked (which it still did), but by introducing the "if" statement (and an additional parameter on the method) the developer had changed the expected behaviour under certain circumstances.  Thinking it was so simple it couldn't possibly fail he checked in his changes.  I happened to be changing something in the same code and spotted the changes, and realised immediately that his changes would actually result in files being deleted that are still required.

So, how often should you write tests?  I think this site sums it up very well
http://howoftenshoulditest.com

2008-10-24

How I would like to write code

Aspect Oriented Programming looks great. It's something I have always wanted to use, but I avoid it because it doesn't work exactly how I would like it to. The first thing I want to avoid is lots of reflection at runtime (compile time is fine), it is for this reason I have mainly been interested in PostSharp.

PostSharp looks great! You decorate your class with attributes that you create yourself. After compiling your assembly PostSharp inspects the result for attributes descended from one of its own special PostSharp AOP classes. When it sees these attributes it modifies your code in a specific way.

To use the same example as everyone else in the world (yawn), you could create an attribute descended from the method-boundary aspect. Override the methods declared on that attribute for entry/exit of the method, and write some code in there to write to the IDE's output window using System.Diagnostics.Debug.WriteLine(). Now when I add that attribute to a method on one of my classes I get that code injected into my method automatically.
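
Such an attribute might look something like this; I'm writing it against PostSharp's OnMethodBoundaryAspect base class from memory, so treat the exact base class, namespace and method signatures as assumptions rather than gospel.

[Serializable]
public class LoggingAttribute : OnMethodBoundaryAspect
{
  public override void OnEntry(MethodExecutionEventArgs eventArgs)
  {
    System.Diagnostics.Debug.WriteLine("Enter");
  }

  public override void OnExit(MethodExecutionEventArgs eventArgs)
  {
    System.Diagnostics.Debug.WriteLine("Exit");
  }
}

With that in place, this: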

[Logging]
public void DoSomething()
{
}

would be the same as

public void DoSomething()
{
System.Diagnostics.Debug.WriteLine("Enter");
try
{
}
finally
{
System.Diagnostics.Debug.WriteLine("Exit");
}
}

So, this looks great, but what's so good / bad about it?

Good


On the good side it lets me describe code functionality rather than writing it. Instead of looking at loads of lines of code to see what something does we can look at a .NET attribute which simply tells us what generic task the class/method does. Take Assets as an example. An Asset is worth money so its location and movement history must always be tracked. To achieve this I would have assets always linked to an AssetHolder class...

(Diagram: Asset holder)


Now, what qualifies as an asset holder? You could say that a person has an asset, a department could be using it, it could be in a stock room, or it could be out on hire to a customer.

Digressing slightly...Don't descend these classes from AssetHolder! Holding assets is something a person DOES, not something a person IS!

(Diagram: Person with asset holder)


Now my Person class can hold assets. I can obtain its AssetHolder instance via its IAssetHolder interface; from there I can create AssetTransfer instances and so on. Now I have to implement the same thing for Department, Room, and Customer! Wouldn't it be nice if I could just do this?

[AssetHolder]
public class Person { }

[AssetHolder]
public class Department { }

[AssetHolder]
public class Room { }

[AssetHolder]
public class Customer { }

That's the good side of PostSharp. I could create an AOP attribute called "AssetHolderAttribute" and then build my business classes with that attribute adorning them. After a successful compilation PostSharp would discover these attributes and do the following to each of my classes

01: Add a member AssetHolder to the class.
02: Ensure that member is created within the constructor.
03: Implement IAssetHolder, returning this new member from the GetAssetHolder method

What we have is a very small amount of code (two words stuck together, wrapped in a pair of square brackets) which not only clearly identifies a feature of these classes, but even implements it.

Bad


This is the only thing that is stopping me from using this approach to write applications. PostSharp cannot add the code until after the assembly has finished compiling. This means that an assembly consuming the business classes can easily write code such as

IAssetHolder holder = (IAssetHolder)person;

but code within the same assembly as the Person class cannot, because Person does not implement IAssetHolder until after the assembly has successfully compiled. A catch-22 situation!

How I would like to write code


I would love to see pre-compile time support for AOP, so that it feels like part of the development experience rather than like something that has been bolted onto my binary afterwards. Instead of a UML diagramming tool to help me to design my complex models, what I would like to do is to use the diagramming tool to design patterns that occur frequently in applications (archetypes), and then in simple code decorate my classes to identify what they are capable of...

[BusinessObject] //Adds persistence capability using your favourite framework
[AssetHolder] //Responsible for holding assets
[StockHolder] //Responsible for holding stock
[RoleHolder] //Can be assigned roles, such as Customer, Supplier, etc
public class Person
{
  [DataMember] //This property is persistent
  public string FirstName { get; set; }
}

//I could now write code like this
Person p = new Person(EcoSpace);
decimal heldValue = p.AssetHolder.GetHeldAssetValue();
heldValue += p.StockHolder.GetHeldStockValue();

Or for defining the model itself using simple code...

[BusinessObject]
public class PurchaseOrder
{
  //Persistent property that is an AutoIncrementing column in the DB
  [DataMember(SaveAction = SaveActionKind.DbAssigned)]
  public int OrderNumber { get; private set; }
}

[BusinessObject]
public class PurchaseOrderLine
{
  //Also adds a property to PurchaseOrder called "Lines"
  [Association(Aggregation = AggregationKind.Composite, OtherEndName="Lines")]
  public PurchaseOrder Order { get; set; }
}

//I could now write code like this
var order = new PurchaseOrder(EcoSpace);
var line = new PurchaseOrderLine(EcoSpace);
line.Order = order;

//AOP created property "PurchaseOrder.Lines"!
if (order.Lines.Count != 1 || order.Lines[0] != line)
throw new ........

The future


I really hope to see pre-compile-time support for AOP in C# in the future. I've been working on one of the guys at RemObjects recently who writes the Oxygene .NET language compiler (formerly known to me as "Chrome"). I have always thought their language has some really nice features in it, but never started using it, for two reasons:

01: No ECO support.
02: I wanted to stay in C# to keep my skills more "main-stream".

Oxygene support is being worked on in ECO 5, which only leaves the second point. I really like some of their features, as I said, but not quite enough to make me switch. However, if you are going to gain massively from working in a less popular language then switching to that language is the right thing to do, and this would be that feature for me! If in the future I need a C# job I believe my skills would remain current enough not to exclude me from that part of the job market.

In the meantime I will continue to use C# and hope that some relentless annoying nagging will get me what I want on the Oxygene front, and that my dream of writing solid reliable code with very little effort takes that big leap towards becoming a reality! :-)

2008-10-17

Sufficient architecture

I have a friend who once decided to teach himself VB. We had a mutual friend who was the manager of a video shop so he decided to write some new software for him. He started off by creating an MS Access database to hold the data for his application. Each week I'd pop around to his house and he'd have MS Access open, reworking his tables etc. After a few months I asked "Have you started to write the app yet?".

"No, not yet." He replied, "I am trying to get the DB right first, then I will get onto writing the application". I asked what he meant by getting the DB right. He explained that he wanted to make this the best video-hire software ever, and that I should hear about some of the features that are going into it as a result of the DB structure he has made.

  • Record the cast of the film.
  • Record the director, producer etc.
  • Record the genre; horror, comedy, etc.
  • Record film information for films due to be released in the future.

All sorts of stuff. His idea was that he could analyse rental history and match up upcoming releases with people who might be interested in them. Then the computer could do a mailshot to the relevant people, enticing them to come back and rent out a film or two.

"Great ideas!" I said, "but.....Can members rent out films?"

"No, " he replied, "I haven't done that bit yet!"

This is the message I always try to get across when I am helping to design a new system. I either meet people who want their app to be a Swiss army knife application (who inevitably deliver late, and deliver rubbish) or those who want to write any old rubbish to get the job done and get it out of the door (which inevitably falls apart whenever a change is required). Someone once actually said to me "The problem with doing things properly is that it takes too long!".

Anyway, I read someone else's post on this subject this morning and just wanted to say

HEAR HEAR!

Don't go designing apps with features nobody has asked for. If there is something the customer wants in the future they are quick enough to ask for it, and if they really want it they will pay for it too!

2008-10-12

Accommodation manager - runtime error

I've just spotted something I omitted before zipping up my AccommodationManager app and making it available.

When you run the app in Release mode you will experience SQL errors. These errors are intermittent, and a query that worked only seconds ago might not always work. The reason for this is that ECO executes all queries within a transaction; SQLite creates a journal file for every transaction and then deletes it when done; and my anti-virus decides it wants to take a look at this new journal file to see what's inside it; resulting in SQLite not being able to open its own journal file exclusively.

This was something I noticed a while ago in another ECO+SQLite app of mine and the guy who writes the library spent a couple of hours with me on MSN trying to work out what the problem was (he had a fix to me by the next morning!). Anyway, I have updated the project so that the connection string tells SQLite not to delete the journal file when it has finished with it, as a consequence there is no conflict with Avast! anti-virus.

In case you don't want to download the example project again, here is the change you need to make inside AccommodationManager\AccommodationManagerPMP.cs

  private void ConfigureConnectionString()
  {
#if !DEBUG
    sqLiteConnection1.ConnectionString = string.Format(
      "Data Source={0};Version=3;Fail If Missing=True;Journal Mode=Persist", Settings.DataBaseFileName);
#endif
  }

2008-10-08

ECO, Winforms, ASP.NET, and WCF

These are the technologies I used in an app I wrote for friends recently. The app manages properties at different locations, bookings, and tariffs. In addition, the application (which uses SQLite) connects to their website using WCF and updates their database so that people can check prices and availability.

I need to get them using it now so that there is data available by the time I put up their website.

Implementing complex unit testing with IoC

I have a method that looks something like this

public void DoSomething(SomeClass someInstance, User user)
{
  var persistence = someInstance.AsIObject().GetEcoService<IPersistenceService>();
  persistence.Unload(someInstance.AsIObject());
  
  if (someInstance.CurrentUser != null)
    throw new ..........;
  someInstance.CurrentUser = user;

  persistence.UpdateDatabase(someInstance);
}


This unloads the local cache before checking someInstance.CurrentUser == null; it then sets someInstance.CurrentUser and updates the DB. The unit test I wanted would check what happens when two users try to perform this at the same time. What I wanted was

User A: Unload
User B: Unload
User A: Check == null, it is
User B: Check == null, it is
User A: Change + update DB
User B: Change + update DB + experience a lock exception

What I didn't want was

User A: Unload
User B: Unload
User A: Check == null, it is
User A: Change + update DB
User B: Check == null, it isn't

To achieve two things running at once I need to either

A: Execute a line of code at a time for each user within the unit test instead of executing the method.
B: Execute the same method from two different threads.

Option A is easy to follow but is rubbish because it involves copying the method source out into the test method, no thanks! Option B is good but harder because I need to ensure that the two threads execute the lines of code in sync. What I really could do with is sync code inside the method, but there is no way I want to add additional sync logic because it is only needed for testing! However, there is already an opportunity to inject some code into the method; take a look here

  var persistence = someInstance.AsIObject().GetEcoService<IPersistenceService>();
  ...
  persistence.UpdateDatabase(someInstance);


Because we are using Inversion of Control it means that the method will call UpdateDatabase() on the reference we pass rather than a hard-coded reference. This means that we could quite easily replace the IPersistenceService with our own implementor and intercept the call to UpdateDatabase.

  public class PersistenceServiceWithEvents : IPersistenceService
  {
    //An event to call back
    public event EventHandler BeforeUpdateDatabase;

    //A reference to the original persistence service
    private readonly IPersistenceService PersistenceService;

    //Constructor
    public PersistenceServiceWithEvents(IEcoServiceProvider serviceProvider)
    {
      PersistenceService =
        serviceProvider.GetEcoService<IPersistenceService>();
    }

    //Interface implementation
    public void UpdateDatabase(IObjectProvider obj)
    {
      EventHandler handler = BeforeUpdateDatabase;
      if (handler != null)
        handler(this, EventArgs.Empty);

      //Pass the call on to the original service so the real update still happens
      PersistenceService.UpdateDatabase(obj);
    }

    //Other methods omitted
  }


Now I have the opportunity to call back some code just before the real UpdateDatabase is executed, which gives me the chance to insert some thread sync' code:

  [TestMethod]
  public void HandlesConcurrency()
  {
    //Create a shared object to work with
    var ecoSpace = TestHelper.EcoSpace.Create();
    var someInstance = new SomeClass(ecoSpace);
    ecoSpace.UpdateDatabase();
    //Get it's external ID
    string someInstanceID = ecoSpace.ExternalIds.IdForObject(someInstance);

    //Create a thread sync object which both threads will wait for before
    //passing on their UpdateDatabase call to the original persistence service
    var startUpdateDB = new ManualResetEvent(false);

    //Create a thread sync object which tells the test when this thread has called
    //PersistenceService.UpdateDatabase
    var user1ReadyToUpdateDB = new AutoResetEvent(false);

    bool user1Conflict = false;
    var user1ThreadStart = new ThreadStart(
        delegate()
        {
          HandlesConcurrency_UserActionEmulation(
            ecoSpace.PersistenceMapper,
            someInstanceID,
            user1ReadyToUpdateDB,
            startUpdateDB,
            out user1Conflict);
        }
      );


    //Create a thread sync object which tells the test when this thread has called
    //PersistenceService.UpdateDatabase
    var user2ReadyToUpdateDB = new AutoResetEvent(false);

    bool user2Conflict = false;
    var user2ThreadStart = new ThreadStart(
        delegate()
        {
          HandlesConcurrency_UserActionEmulation(
            ecoSpace.PersistenceMapper,
            someInstanceID,
            user2ReadyToUpdateDB,
            startUpdateDB,
            out user2Conflict);
        }
      );

    //Create and start both threads
    var user1Thread = new Thread(user1ThreadStart);
    var user2Thread = new Thread(user2ThreadStart);
    user1Thread.Start();
    user2Thread.Start();

    //Wait until they have both signalled that the call to UpdateDatabase has been reached
    user1ReadyToUpdateDB.WaitOne();
    user2ReadyToUpdateDB.WaitOne();

    //Both threads have now executed DoSomething() as far as the call to
    //persistenceService.UpdateDatabase and are waiting for me to tell them
    //to continue.
    startUpdateDB.Set();

    //Both threads will now wake up and call the original PersistenceService.UpdateDatabase.
    //Wait for both threads to finish so that the test does not end prematurely.
    user1Thread.Join();
    user2Thread.Join();

    //Check at least one experienced a conflict
    int lockCount = 0;
    if (user1Conflict)
      lockCount++;
    if (user2Conflict)
      lockCount++;
    Assert.AreEqual(1, lockCount, "One should experience a lock conflict");
  }

  private void HandlesConcurrency_UserActionEmulation(
    PersistenceMapper persistenceMapper,
    string someInstanceID,
    AutoResetEvent readyToUpdateDB,
    WaitHandle startUpdateDB,
    out bool lockConflict)
  {
    //SETUP

    //Create secondary ecospaces with the same persistence mapper as the
    //original. This is because my test EcoSpace uses a memory persistence mapper
    var ecoSpace = TestHelper.EcoSpace.CreateWithSharedMapper(persistenceMapper);

    //Create the persistence service with the BeforeUpdateDatabase event
    var callbackPersistenceService = new TestHelper.PersistenceServiceWithEvents(ecoSpace);
    //Register it
    ecoSpace.RegisterEcoService<IPersistenceService>(callbackPersistenceService);

    //Ensure update happens in sync, this occurs when we try to lock the object
    callbackPersistenceService.BeforeUpdateDatabase += (sender, args) =>
      {
        //Notify we are ready to update
        readyToUpdateDB.Set();
        //Now wait to be told to complete the update
        startUpdateDB.WaitOne();
      };

    //ACT

    //Now test our code
    var someServiceToTest = new SomeService(ecoSpace);
    var someInstance = ecoSpace.ExternalIds.ObjectForId(someInstanceID).GetValue<SomeClass>();
    try
    {
      lockConflict = false;
      someServiceToTest.DoSomething(someInstance, new User(ecoSpace));
    }
    catch (OptimisticLockException)
    {
      lockConflict = true;
    }
  }


So there you have it. An example of how Inversion of Control and the Service Provider pattern let you redirect calls in parts of your code to enable complex testing scenarios without having to change your implementation!

2008-09-30

Unit testing security

Following on from my previous post about using(Tricks) here is an example which makes writing test cases easier rather than just for making your code nicely formatted. Take a look at the following test which ensures Article.Publish sets the PublishedDate correctly:

[TestMethod]
public void PublishedDateIsSet()
{
  //Create the EcoSpace, set its PMapper to a memory mapper
  var ecoSpace = TestHelper.EcoSpace.Create();

  //Create an article
  var article = new Article(ecoSpace);

  //Create our Rhino Mocks repository
  var mocks = new MockRepository();

  //Mock the date/time to give us a predictable value
  var mockDateTimeService = mocks.StrictMock<IDateTimeService>();
  ecoSpace.RegisterEcoService(typeof(IDateTimeService), mockDateTimeService);

  //Get a date/time to return from the mock DateTimeService
  var now = DateTime.Now;
  using (mocks.Record())
  {
    //When asked, return the value we recorded earlier
    Expect.Call(mockDateTimeService.Now).Return(now);
  }

  //Check mockDateTimeService.Now is called from Article.Publish
  using (mocks.Playback())
  {
    article.Publish();
  }

  //Check the date/time from IDateTimeService is stored in PublishedDate
  Assert.AreEqual(now, article.PublishedDate);
}


The method it is testing

public void Publish()
{
  IEcoServiceProvider serviceProvider = AsIObject().ServiceProvider;
  var dateTimeService = serviceProvider.GetEcoService<IDateTimeService>();
  PublishedDate = dateTimeService.Now;
}


Now at some point in the future you decide to enforce some security within your business objects. You decide that only certain people can publish your article, such as the author or an administrator. This will now break every test you have which assumes article.Publish will just work.


public void Publish()
{
  IEcoServiceProvider serviceProvider = AsIObject().ServiceProvider;

  var currentUserService = serviceProvider.GetEcoService<ICurrentUserService>();
  var currentUser = currentUserService.CurrentUser;
  if (currentUser != this.Author && !currentUser.HasRole<SystemAdministratorRole>())
    throw new SecurityException("Cannot publish this article");

  var dateTimeService = serviceProvider.GetEcoService<IDateTimeService>();
  PublishedDate = dateTimeService.Now;
}


This would dramatically complicate any test which relied on having a published article. Each time you would additionally have to:
01: Create a user.
02: Mock ICurrentUserService to return that user.
03: Ensure the article.Author is set to that user, or the user owns a SystemAdministratorRole.

If you have 20 tests requiring a published article this is going to cause you a lot of work! The first mistake here is that the article is testing for permissions; permission granting should be a service. I would separate the service out like so...

public interface IPermissionService
{
  bool MayPublishArticle(Article article);
}


Your EcoSpace would have a class implementing this interface and returning the appropriate result; the EcoSpace would register this as the default service.

public class PermissionService : IPermissionService
{
  IEcoServiceProvider ServiceProvider;
  
  public PermissionService(IEcoServiceProvider serviceProvider)
  {
    ServiceProvider = serviceProvider;
  }

  public bool MayPublishArticle(Article article)
  {
    var currentUserService = ServiceProvider.GetEcoService<ICurrentUserService>();
    var currentUser = currentUserService.CurrentUser;
    return currentUser == article.Author
      || currentUser.HasRole<SystemAdministratorRole>();
  }
}


In the EcoSpace:
public override bool Active
{
  get { return base.Active; }
  set
  {
    if (value && !Active)
      RegisterDefaultServices();
    base.Active = value;
  }
}

private void RegisterDefaultServices()
{
  RegisterEcoService(typeof(IPermissionService), new PermissionService(this));
  RegisterEcoService(typeof(ICurrentUserService), new CurrentUserService());
  RegisterEcoService(typeof(IDateTimeService), new DateTimeService());
}


Now Article.Publish looks like this

public void Publish()
{
  IEcoServiceProvider serviceProvider = AsIObject().ServiceProvider;
  var permissionService = serviceProvider.GetEcoService<IPermissionService>();

  if (!permissionService.MayPublishArticle(this))
    throw new SecurityException("Cannot publish this article");

  var dateTimeService = serviceProvider.GetEcoService<IDateTimeService>();
  PublishedDate = dateTimeService.Now;
}


But how does this help? The first advantage is that we have separated the PermissionService so that we can test it in isolation, but we would still need to mock the IPermissionService, wouldn't we? Yes we would! But how does this look?

using (TestHelper.Permissions.PermitAll(ecoSpace))
{
  article.Publish();
}


Much simpler, eh? To achieve this I have a static class named Permissions in my test project:

public static class Permissions
{
  private class TestPermissionService : IPermissionService
  {
    //Simple code omitted: accepts a boolean in the constructor
    //and returns it from every method call.
  }

  public static IDisposable PermitAll(MyEcoSpaceType ecoSpace)
  {
    var originalService = ecoSpace.GetEcoService<IPermissionService>();
    var newService = new TestPermissionService(true);
    ecoSpace.RegisterEcoService(typeof(IPermissionService), newService);
    return new DisposableAction(
      () => ecoSpace.RegisterEcoService(typeof(IPermissionService), originalService)
    );
  }
}


All this does is to record the current IPermissionService, register a new one which always returns True/False (depending on what we pass to its constructor), and then return an instance of DisposableAction. To this instance we pass an Action which re-registers the original service. The action is called when IDisposable.Dispose is called:

public class DisposableAction : IDisposable
{
  Action Action;
  public DisposableAction(Action action)
  {
    Action = action;
  }

  void IDisposable.Dispose()
  {
    Action();
  }
}



So the following line

using (TestHelper.Permissions.PermitAll(ecoSpace))
{
  article.Publish();
}


will
01: Record the original IPermissionService.
02: Register a new one which always returns True.
03: Execute the code within the Using { } block.
04: IDisposable.Dispose will be called on my DisposableAction.
05: The previous service will be restored.

Single instance application - revisited

Not so long ago I posted a solution to having a single-instance application. Rather than just preventing secondary instances from running the requirement was to have the 2nd instance pass its runtime parameters onto the 1st instance so that it can process them. My solution used remoting on the local machine. This appeared to work very well until recently when I needed an OpenFileDialog. Attempting to show the dialog resulted in an error about COM thread apartments. So, it wasn't THE solution.

After a bit of research I decided to use named pipes instead. This meant I had to upgrade my app from .NET 2 to 3.5, but I think it is worth it. To implement the feature in an app you need to do 2 things. First you need to implement the interface ISingleInstanceApplicationMainForm on your app's main form in order to accept command line arguments from any subsequently started instances. Next you need to change your Program.Main method like so:

[STAThread]
static void Main(string[] args)
{
  Application.EnableVisualStyles();
  Application.SetCompatibleTextRenderingDefault(false);
  new SingleInstanceApplication<MainForm>("CompanyName.ApplicationName", args);
}
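
For example, the main form might implement the interface like this; ImportFile is a placeholder for whatever your application actually does with each argument.

public partial class MainForm : Form, ISingleInstanceApplicationMainForm
{
  public void AcceptCommandLineArguments(string[] args)
  {
    //Called with the first instance's own arguments at startup, and again
    //(on the UI thread) with the arguments of any subsequently started instance.
    foreach (string fileName in args)
      ImportFile(fileName);
  }
}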



The source code for SingleInstanceApplication is an adaptation of some code I read here, and here it is:


public interface ISingleInstanceApplicationMainForm
{
  void AcceptCommandLineArguments(string[] args);
}

public class SingleInstanceApplication<TForm> : IDisposable
  where TForm : ISingleInstanceApplicationMainForm
{
  Mutex Mutex;
  bool IsFirstInstance;
  string ApplicationUniqueID;
  protected TForm MainForm { get; private set; }

  public SingleInstanceApplication(string applicationUniqueID, string[] args)
  {
    ApplicationUniqueID = applicationUniqueID;
    Mutex = new Mutex(true, ApplicationUniqueID, out IsFirstInstance);
    if (IsFirstInstance)
    {
      MainForm = (TForm)Activator.CreateInstance(typeof(TForm));
      (MainForm as ISingleInstanceApplicationMainForm).AcceptCommandLineArguments(args);
      ThreadPool.QueueUserWorkItem(new WaitCallback(ListenForArguments));
      Application.Run((Form)(object)MainForm);
    }
    else
      PassArgumentsToFirstInstance(args);
  }

  private void ListenForArguments(Object state)
  {
    try
    {
      using (var server = new NamedPipeServerStream(ApplicationUniqueID))
      {
        using (var reader = new StreamReader(server))
        {
          server.WaitForConnection();

          string[] args = reader.ReadLine().Split(new char[] {'\t'}, StringSplitOptions.RemoveEmptyEntries);
          ThreadPool.QueueUserWorkItem(new WaitCallback(ReceiveArgumentsFromNewInstance), args);
        }//using reader
      }//using server
    }
    catch (IOException) { }
    finally
    {
      ListenForArguments(null);
    }
  }

  private void ReceiveArgumentsFromNewInstance(Object state)
  {
    string[] args = (string[])state;
    (MainForm as Form).Invoke(
      new ThreadStart(
        delegate()
        {
          MainForm.AcceptCommandLineArguments(args);
        }
        )
      );
  }

  private void PassArgumentsToFirstInstance(string[] args)
  {
    var builder = new StringBuilder();
    foreach (var arg in args)
      builder.AppendFormat("{0}\t", arg);

    try
    {
      using (var client = new NamedPipeClientStream(ApplicationUniqueID.ToString()))
      {
        using (var writer = new StreamWriter(client))
        {
          client.Connect(500); // 0.5 seconds timeout
          writer.WriteLine(builder.ToString());
        }//using writer
      }//using client
    }
    catch (TimeoutException) { }
    catch (IOException) { }
  }




  void IDisposable.Dispose()
  {
    if (IsFirstInstance && Mutex != null)
      Mutex.ReleaseMutex();
  }

}

2008-09-24

using(TricksToFormatYourCodeNicely)

I've been writing a data importer which takes a specific data input format and outputs XML; this XML is then imported within my application. What annoyed me was the way in which the source code was formatted....


writer.WriteStartElement("data");
writer.WriteAttributeString("1", "1");
writer.WriteAttributeString("2", "2");
writer.WriteAttributeString("3", "3");

writer.WriteStartElement("systemData");
writer.WriteAttributeString("a", "a");
writer.WriteAttributeString("b", "b");
writer.WriteEndElement();//systemData

writer.WriteEndElement();//data


It just didn't look nice. I thought about splitting it into separate methods, but most of the time this would have been overkill as the methods would have been very short. Instead I wrote an extension method on XmlWriter:


public static class XmlWriterHelper
{
  public static IDisposable StartElement(this XmlWriter writer, string elementName)
  {
    return new DisposableElementWriter(writer, elementName);
  }

  private class DisposableElementWriter : IDisposable
  {
    private XmlWriter Writer;

    public DisposableElementWriter(XmlWriter writer, string elementName)
    {
      Writer = writer;
      Writer.WriteStartElement(elementName);
    }

    public void Dispose()
    {
      Writer.WriteEndElement();
    }
  }
}



Now I can write code like this instead:

using (writer.StartElement("data"))
{
  writer.WriteAttributeString("1", "1");
  writer.WriteAttributeString("2", "2");
  writer.WriteAttributeString("3", "3");
  using (writer.StartElement("systemData"))
  {
    writer.WriteAttributeString("a", "a");
    writer.WriteAttributeString("b", "b");
  }//systemData
}//data



Less code AND easier to read. What a bonus!

2008-07-19

Validating NumericUpDown on compact framework

A customer requested that instead of my NumericUpDown controls silently capping the input value within the Minimum..Maximum range it instead showed an error message telling the user their input is incorrect and that they need to alter it.

I was a bit annoyed to see that NumericUpDown.Validating is never called on the compact framework; in addition, there was no way to get the input value and either accept or reject it before it is applied to its data bindings.

There's an article here which shows how to implement auto-select text when the NumericUpDown receives focus and I have been using it since Feb 2006. I decided to extend upon the techniques within it to implement the Validating event. My goal was to fire the Validating event before the value is applied to all data-bindings, but also to allow the programmer to read NumericUpDown.Value in order to determine the new value. To do this I had to replace the WndProc of the control so that I could handle the WM_UPDOWN_NOTIFYVALUECHANGED message, parse the value, validate it, and then either accept it (call the original WndProc) or restore the value to the current value.

Rather than teach how this is done I thought I would just include the source code here. One point to note, though, is that I had to wrap a "bool IsInternalCall" guard around my handler, otherwise I would have re-entrancy problems and experience a stack overflow. Here is the source; it includes the auto-select code by Mark Arteaga.

using System;
using System.Collections.Generic;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Threading;
using System.ComponentModel;

namespace Mycompany.Windows.Forms
{
  public class NumericUpDownWithSelect : NumericUpDown, ISupportInitialize
  {
    #region API
    private const int GWL_WNDPROC = -4;
    private const int WM_UPDOWN_NOTIFYVALUECHANGED = 13;
    public const int WM_GETTEXTLENGTH = 0x000E;
    public const int WM_GETTEXT = 0x000D;
    private const int WM_GETSELECTION = 0x00B0;
    private const int WM_SETSELECTION = 0x00B1;

    private WndProcHandler NewWndProc = null;
    private IntPtr OldWndProc = IntPtr.Zero;

    public delegate IntPtr WndProcHandler(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);
    [DllImport("coredll.dll", SetLastError = true)]
    private static extern IntPtr SendMessage(IntPtr hWnd, int msg, int wParam, int lParam);
    [DllImport("coredll.dll", SetLastError = true)]
    private static extern IntPtr SendMessage(IntPtr hWnd, int msg, int wParam, StringBuilder buffer);
    [DllImport("coredll.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr SetWindowLong(IntPtr hWnd, int nIndex, WndProcHandler wndproc);
    [DllImport("coredll.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr SetWindowLong(IntPtr hWnd, int nIndex, IntPtr dwNewLong);
    [DllImport("coredll.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr GetWindowLong(IntPtr hWnd, int nIndex);
    [DllImport("coredll.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr CallWindowProc(IntPtr wndProc, IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);
    #endregion

    private bool ControlDisposed = false;
    private bool IsValidating = false;
    private decimal ValueToValidate;

    public NumericUpDownWithSelect()
    {
    }

    public new event CancelEventHandler Validating;
    protected virtual void OnValidating(out bool cancel, decimal newValue)
    {
      cancel = false;
      CancelEventHandler validating = Validating;
      if (validating == null)
        return;

      cancel = false;
      CancelEventArgs args = new CancelEventArgs(false);
      IsValidating = true;
      try
      {
        ValueToValidate = newValue;
        Validating(this, args);
      }
      finally
      {
        IsValidating = false;
      }
      cancel = args.Cancel;
    }

    private decimal currentValue = 0;
    public new decimal Value
    {
      get
      {
        if (IsValidating)
          return ValueToValidate;
        return base.Value;
      }
      set
      {
        bool cancel;
        OnValidating(out cancel, value);
        if (!cancel)
        {
          base.Value = value;
          currentValue = value;
        }
      }
    }

    #region Validation
    protected override void OnHandleCreated(EventArgs e)
    {
      base.OnHandleCreated(e);
      if (this.Site == null)
      {
        NewWndProc = new WndProcHandler(ReplacementWndProcImpl);
        OldWndProc = SetWindowLong(this.Handle, GWL_WNDPROC, NewWndProc);
      }
    }

    private static bool IsInternalCall = false;
    private IntPtr ReplacementWndProcImpl(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam)
    {
      bool cancelled = false;
      if (msg == WM_UPDOWN_NOTIFYVALUECHANGED && !IsInternalCall)
      {
        IsInternalCall = true;
        try
        {
          int length = CallWindowProc(OldWndProc, this.Handle, WM_GETTEXTLENGTH, IntPtr.Zero, IntPtr.Zero).ToInt32();
          StringBuilder buffer = new StringBuilder(length + 1);
          SendMessage(this.Handle, WM_GETTEXT, length + 1, buffer);
          try
          {
            decimal newValue = decimal.Parse(buffer.ToString());
            OnValidating(out cancelled, newValue);
            if (cancelled)
            {
              Value = currentValue;
            }
          }
          catch (FormatException)
          {
            cancelled = true;
          }
        }
        finally
        {
          IsInternalCall = false;
          if (cancelled)
          {
            Focus();
            SelectAll();
          }
        }
      }
      return CallWindowProc(OldWndProc, hWnd, msg, wParam, lParam);
    }
    #endregion

    #region AutoSelect

    private delegate void SelectAllInvoke();

    private bool suppressOnGotFocus = false;

    protected override void OnGotFocus(EventArgs e)
    {
      base.OnGotFocus(e);
      if (!this.suppressOnGotFocus)
        SelectAll();
    }

    public void SelectAll()
    {
      this.SelectInternal(0, this.Value.ToString().Length);
    }

    public void Select(int start, int length)
    {
      this.SelectInternal(start, length);
    }

    private void SelectInternal(int start, int length)
    {
      if (!ControlDisposed)
      {
        this.suppressOnGotFocus = true;
        if (!this.Focused)
          this.Focus();
        IntPtr ret = SendMessage(this.Handle, WM_SETSELECTION, start, length);
        this.suppressOnGotFocus = false;
      }
    }
    #endregion

    protected override void Dispose(bool disposing)
    {
      ControlDisposed = true;
      base.Dispose(disposing);
    }

    #region ISupportInitialize Members
    //This region is here simply because the WinForm designer insists on casting this control
    //to ISupportInitialize
    void ISupportInitialize.BeginInit()
    {
    }

    void ISupportInitialize.EndInit()
    {
    }

    #endregion
  }

}


And how might you use it?

private void numericUpDownWithSelect1_Validating_1(object sender, CancelEventArgs e)
{
  if (numericUpDownWithSelect1.Value < numericUpDownWithSelect1.Minimum
    || numericUpDownWithSelect1.Value > numericUpDownWithSelect1.Maximum)
  {
    //No need to cancel, the new value will be rejected
    MessageBox.Show("Warning, value is about to be capped");
  }
  if (numericUpDownWithSelect1.Value > 5)
  {
    MessageBoxIcon icon = MessageBoxIcon.None;
    e.Cancel =
      MessageBox.Show(
        "Is it really greater than 5?",
        "Are you sure?",
        MessageBoxButtons.YesNo,
        icon,
        MessageBoxDefaultButton.Button1) != DialogResult.Yes;
  }
}

2008-07-08

Single instance application

An app I am working on needs to be a single instance. It is associated with certain file extensions so that when I select a character or license file it will be imported automatically. When the user buys a character or license (etc) from the website it will be downloaded and opened, and then imported.

Obviously it is a pretty poor user experience if they have to close the app, download, close the app, download... So what I really needed was a way to have the 2nd instance of the application to invoke the first instance and pass the command line parameters. Here is a simple solution I implemented using remoting.

01: An interface

public interface ISingleInstance
{
  void Execute(string[] args);
}


02: A class that implements the interface

public class SingleInstance : MarshalByRefObject, ISingleInstance
{
  private static object SyncRoot = new object();
  private static Form1 MainForm;

  static SingleInstance()
  {
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
  }

  public void Execute(string[] args)
  {
    bool isNew = false;
    lock (SyncRoot)
    {
      isNew = (MainForm == null);
      if (isNew)
        MainForm = new Form1();
      MainForm.AcceptCommandLineArguments(args);
    }
    if (isNew)
      Application.Run(MainForm);
  }
}


03: And finally the remoting code in the Program.cs file itself:

static class Program
{
  private static IpcChannel IpcChannel;

  [STAThread]
  static void Main(string[] args)
  {
    bool isNew;
    using (Mutex mutex = new Mutex(true, "TheCatSatOnTheMat", out isNew))
    {
      if (isNew)
        RegisterServer();
      else
      {
        IpcChannel = new IpcChannel("Client");
        ChannelServices.RegisterChannel(IpcChannel, false);
      }
      ISingleInstance app = (ISingleInstance)Activator.GetObject(typeof(ISingleInstance), "ipc://Server/RemotingServer");
      app.Execute(args);
    }
  }

  private static void RegisterServer()
  {
    IpcChannel = new IpcChannel("Server");
    ChannelServices.RegisterChannel(IpcChannel, false);
    RemotingConfiguration.RegisterWellKnownServiceType(typeof(SingleInstance), "RemotingServer", WellKnownObjectMode.Singleton);
  }
}


This is more of a note to myself in case I lose my small test app before I come around to implementing it into my main application :-)

More leak fixes

I have changed the DirtyObjectCatcher so that it initially only hooks

Form.Disposed - Automatically disposes the DirtyObjectCatcher
Form.Closed - Unhooks all additional form events (below)
Form.Shown - To hook additional form events (below)

==Additional form events==
Form.Activated
Form.MdiParent.Activated
Form.MdiChildActivate

The additional events are to ensure that the DirtyObjectCatcher's undo block is moved to the top. The reason these events are now unhooked is so that there are no strong event references from the application's main form (Form.MdiParent) to this component keeping it alive. Now we only have long-term event references from the owning form itself. Really though this is just an added precaution against something I may not have thought of :-)

The true memory saver comes from only holding a WeakReference to the owning form. Otherwise in an MDI application we have the following

MainForm.MdiChildActivate->DirtyObjectCatcher->Form

In such a case the MDI child form will not be collected after it is closed, because it is referenced by the DirtyObjectCatcher, which in turn cannot be collected because it is referenced by the application's main form. Unhooking these events and holding only a WeakReference prevents the leakage.
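
A minimal sketch of the weak-reference side of this, with member names of my own choosing rather than the component's real ones:

private readonly WeakReference ownerReference; //instead of a strong Form field

public DirtyObjectCatcher(Form owner)
{
  ownerReference = new WeakReference(owner);
  //The strong reference now runs from the form to this component, not the other way around
  owner.Disposed += delegate { Dispose(); };
}

private Form Owner
{
  get { return (Form)ownerReference.Target; } //null once the form has been collected
}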

In addition I have hooked Form.Disposed so that this component is disposed along with its owner. I have also hooked DirtyObjectCatcher.Disposed from ObjectValidator so that it may also auto-dispose itself and knows not to place any new subscriptions.

Again the files are available here.

In addition I was able to track down and reproduce a couple of leaks in ECO 4 which are now fixed internally.

  1. Deactivating / Reactivating an EcoSpace would leak an object reference each time. This is not the case if you allow the EcoSpace to be collected.
  2. Using OclVariables with OclPsHandle and calling EcoSpace.Persistence.Refresh would leak a single object reference, also if OclPsHandle.Execute was executed.

I monitored my app with over 4,000 tasks today (normally there are about 200) for a few hours and the memory usage didn't budge!

2008-06-27

Memory leaks

As a follow up to yesterday's post here is a list of problems I found...

01: The following code for some reason causes the form holding the DirtyObjectCatcher to remain referenced.


if (Owner.MdiParent != null)
{
  Owner.MdiParent.Activated += new EventHandler(MdiParent_Activated);
  Owner.MdiParent.MdiChildActivate += new EventHandler(MdiParent_MdiChildActivate);
}

The odd thing about this is that it really is the events that matter! I subscribe to Shown, Disposed, and Activated on the Owner (which is a form) and that doesn't cause the same behaviour. The reason is that the Owner normally gets disposed and takes out the DirtyObjectCatcher with it; however, if I have Owner.MdiParent events referencing DirtyObjectCatcher then the form will never get disposed, because DirtyObjectCatcher has a strong reference to Owner. Maybe I should change it, but for now the Shown and Activated events seem to be doing the trick.

02: This one was a real pain! DirtyObjectCatcher creates its own UndoBlock and then subscribes to it, so that whenever the user modifies an object it goes into the UndoBlock and the catcher is notified. At this point other components (such as ObjectValidator) can get a list of modified objects and do something (such as evaluate their OCL constraints).

Unfortunately this event fires at some point between the user making the change and the ECO cache being ready for the new value to be read. For this reason in the past I had to set

Application.Idle += CheckForModifiedObjects;

CheckForModifiedObjects would then trigger the relevant events in Application's Idle event so that we know the changes to the ECO cache are ready to be read. Unfortunately I made a silly mistake: I forgot to do this as the first line of CheckForModifiedObjects

Application.Idle -= CheckForModifiedObjects;

As a result the static Application class had a reference to my DirtyObjectCatcher via an event. DirtyObjectCatcher holds a reference to the form (Owner) and the EcoSpace (ServiceProvider). So the form and the EcoSpace would remain in memory.
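
The fix is trivial; a sketch of the corrected handler, with the body left as a placeholder for the real dirty-object checking:

private void CheckForModifiedObjects(object sender, EventArgs e)
{
  //Unsubscribe first so the static Application class no longer references this component
  Application.Idle -= CheckForModifiedObjects;
  //...now trigger the relevant events / check the modified objects...
}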

03: In the past you could not subscribe to UndoBlock so I had to subscribe to DirtyListService. Unfortunately this would not get triggered if you activated a DirtyObjectCatcher on an object that was already dirty, or transient. To overcome this I added a CheckForUpdates method which would get every instance of DirtyObjectCatcher to check its UndoBlock for changes.

The idea was that you plug into the ECO cache and call the static DirtyObjectCatcher.CheckForUpdates whenever something changed. A pain in the ass. I requested UndoBlock.Subscribe and it was implemented within days (thanks Jonas!) but I left that code in just in case anyone was using it. Unfortunately this meant I had a static List sitting around holding references to my DirtyObjectCatchers. Again this referenced the EcoSpace and the Form, so the Form was never disposed in order to dispose of my DirtyObjectCatcher.

This wasn't noticed for some time because I use the AutoFormService to handle the lifetime of my forms, but my current app has quite a few forms where I manage the lifetime manually so the form wasn't automatically being disposed as I had expected. In addition the Form class does not call Dispose on any of its components when you call its Dispose method, which I find very strange!

Anyway, that's the lot. What a hellish week! I have uploaded the latest version of EcoExtensions here:

www.peterlesliemorris.com/blogfiles/ecoextensions.zip

DirtyObjectCatcher

Oh boy, what a nightmare! After days of messing around I finally found where the memory leak is in my app, it was in DirtyObjectCatcher!

The DirtyObjectCatcher used to subscribe to the DirtyListService, so that it was notified whenever an object was made dirty. I experienced this problem...

01: User creates a "Call" to a customer site.
02: User edits a purchase order.
03: Save purchase order (merges the undo block to the Call undo block and closes the form)
04: Edit the purchase order again from the Call form

The PurchaseOrder is already dirty so it won't get triggered again; this used to result in no constraints being checked and the possibility of entering dodgy data. The solution at the time was to have a static list in DirtyObjectCatcher

private static List<DirtyObjectCatcher> Instances;

whenever a new instance was created it would be added; whenever Dispose was called it would be removed. I then hooked into the cache chain and whenever a value changed I would call a static method on DirtyObjectCatcher which iterated over Instances and told each to check if anything had changed. It worked well enough, and I put in a request to add a Subscribe method to IUndoBlock.

My request for IUndoBlock.Subscribe was soon added so I changed the code. Now it worked lovely! Unfortunately there was a problem! I had left the static code in the component just in case anyone was using it; I should really have just removed it as it is now obsolete. The really big problem however was that I had never added a Finalizer to the DirtyObjectCatcher class to call Dispose(). I had assumed the following in the Windows Forms default code template would call Dispose.



private System.ComponentModel.IContainer components = null;

protected override void Dispose(bool disposing)
{
  if (disposing && (components != null))
  {
    components.Dispose();
  }
  base.Dispose(disposing);
}



However I was wrong. The components on the form are never added to this "components" member. As a result the Instances list grew and grew, and was never emptied. DirtyObjectCatcher holds a reference to the EcoSpace, so this is why the memory usage got bigger and bigger! I was going to add the Finalizer and remove the instance from Instances, but I have decided to remove Instances completely!

Update: There were some other odd issues too which I will blog about tomorrow after I have released an update!

2008-06-11

Making a generic type from a Type

List<Person> p = new List<Person>();

This works fine, but what about this?

Type someType = typeof(Person);
List<someType> p = new List<someType>();

Nope!

But you can do this...

Type genericType = typeof(List<>).MakeGenericType(someType);
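
Once you have the constructed type you can instantiate it with Activator.CreateInstance and work with it through the non-generic System.Collections.IList interface; a quick sketch:

object instance = Activator.CreateInstance(genericType);
var list = (System.Collections.IList)instance; //really a List<Person> under the hood
list.Add(new Person());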

Why would I want to? Because I am creating a tool that generates plain old .NET objects from an EcoSpace's model, and I wanted to implement multi-role association ends as

public List<Building> RoleName;

rather than just

public Building[] RoleName;

Returning a binary response in ASP MVC RC3

In a previous post I showed how to return a binary file from a controller action; well, this no longer works in release candidate 3 of the framework. Instead you have to create a new ActionResult descendant to do the job for you. This is how I did it....

return new BinaryResult(data, Path.GetFileName(productFileName));


and the class is implemented like so:

public class BinaryResult : ActionResult
{
  private string ClientFileName;
  private byte[] Data;
  private string VirtualFileName;

  public BinaryResult(string virtualFileName, string clientFileName)
  {
    if (string.IsNullOrEmpty(virtualFileName))
      throw new ArgumentNullException("virtualFileName");
    if (string.IsNullOrEmpty(clientFileName))
      throw new ArgumentNullException("clientFileName");

    ClientFileName = clientFileName;
    VirtualFileName = virtualFileName;
  }

  public BinaryResult(byte[] data, string clientFileName)
  {
    if (data == null)
      throw new ArgumentNullException("data");
    if (string.IsNullOrEmpty(clientFileName))
      throw new ArgumentNullException("clientFileName");

    ClientFileName = clientFileName;
    Data = data;
  }

  public override void ExecuteResult(ControllerContext context)
  {
    if (!string.IsNullOrEmpty(VirtualFileName))
    {
      string localFileName = context.HttpContext.Server.MapPath(VirtualFileName);
      FileStream fileStream = new FileStream(localFileName, FileMode.Open, FileAccess.Read, FileShare.Read);
      using (fileStream)
      {
        Data = new byte[fileStream.Length];
        fileStream.Read(Data, 0, (int)fileStream.Length);
      }//using fileStream
    }

    context.HttpContext.Response.AddHeader("content-disposition", "attachment; filename=" + ClientFileName);
    context.HttpContext.Response.BinaryWrite(Data);
  }
}

2008-05-01

Creating a drop shadow

Requirement: Take a PNG image that has an alpha mask and from it generate a new bitmap which has the same alpha mask but every pixel is black.

Solution:
private Bitmap GenerateDropShadowImage(Image image)
{
  Bitmap bitmap = new Bitmap(image);
  BitmapData bits = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

  IntPtr bottomScanLine = bits.Scan0;
  int bytesPerRow = bits.Width * 4;

  unsafe
  {
    byte* pixelValue = (byte*)bottomScanLine.ToPointer();
    for (int count = 0; count < bits.Width * bits.Height; count++)
    {
      pixelValue[0] = 0;
      pixelValue[1] = 0;
      pixelValue[2] = 0;
      pixelValue = pixelValue + 4;
    }
  }
  bitmap.UnlockBits(bits);
  return bitmap;
}