
2008-03-30

Postal codes within a radius

My hobby MVC website allows people to place adverts. When searching for adverts I would like the user to be able to specify a UK postal code and radius to filter the adverts down to ones within travelling distance. The trick to this was to record a list of UK postal codes and their latitude/longitude.

The first step is to write a routine which will give a straight-line distance between two coordinates:

public static class MathExtender
{
  //Spherical law of cosines: returns the distance between two lat/long points in statute miles
  public static double GetDistanceBetweenPoints(double sourceLatitude, double sourceLongitude, double destLatitude, double destLongitude)
  {
    double theta = sourceLongitude - destLongitude;
    double distance =
      Math.Sin(DegToRad(sourceLatitude))
      * Math.Sin(DegToRad(destLatitude))
      + Math.Cos(DegToRad(sourceLatitude))
      * Math.Cos(DegToRad(destLatitude))
      * Math.Cos(DegToRad(theta));
    distance = Math.Acos(distance);
    distance = RadToDeg(distance);
    //Degrees -> nautical miles (x60) -> statute miles (x1.1515)
    distance = distance * 60 * 1.1515;
    return (distance);
  }

  public static double DegToRad(double degrees)
  {
    return (degrees * Math.PI / 180.0);
  }

  public static double RadToDeg(double radians)
  {
    return (radians / Math.PI * 180.0);
  }
}
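
As a quick sanity check, the routine can be called directly; the coordinates below are illustrative only (roughly London and Manchester):

double miles = MathExtender.GetDistanceBetweenPoints(51.51, -0.13, 53.48, -2.24);
//miles comes out at roughly 160, which matches the straight-line distance between the two cities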


Additionally, on my PostalCode class I have a helper method like so:

public double GetDistanceTo(PostalCode destination)
{
  return MathExtender.GetDistanceBetweenPoints(Latitude, Longitude, destination.Latitude, destination.Longitude);
}


Now the next problem is that these routines are not selectable via SQL and therefore to find all postal codes within a radius of another I would have to load all postal codes into memory and then evaluate them, which I don’t want to do! The DB might be able to use SIN/COS/ACOS etc but if it does then it would be DB specific and I don’t want that either. My decision was to first look for postal codes within a square area, the square being just big enough to encompass the circle, and then to use the in-memory PostalCode.GetDistanceTo() method to whittle the postcodes down to the exact list of matching objects.
The problem remained that I would still need to establish whether Longitude/Latitude coordinates will be within this square. To overcome this I decided that if the world were divided up into a grid of squares each 1 mile in size I could then easily select postal codes within any size square by checking which 1 mile square it is within. So I decided to establish Longitude=0 Latitude=0 as 0,0 on my grid and work out every postal code’s distance from that point. I added two persistent "Double" properties to my postal code class named GridReferenceX and GridReferenceY which I set whenever the Longitude or Latitude is set:

public double GridReferenceX { get; private set; }
public double GridReferenceY { get; private set; }

public double Longitude
{
  get { ..... };
  set
  {
    ......;
    CalculateGridReference();
  }
}

public double Latitude
{
  get { ..... };
  set
  {
    ......;
    CalculateGridReference();
  }
}

private void CalculateGridReference()
{
  //Latitude distance only (X)
  GridReferenceX = MathExtender.GetDistanceBetweenPoints(0, 0, Latitude, 0);
  //Longitude distance only (Y)
  GridReferenceY = MathExtender.GetDistanceBetweenPoints(0, 0, 0, Longitude);
}


Now I have a grid reference for each postal code which means nothing in isolation, but the advantage is that grid references can be compared to each other like so...

public IList<PostalCode> GetAllWithinRadius(PostalCode postalCode, double radiusInMiles)
{
  List<PostalCode> result = new List<PostalCode>();
  string criteria = string.Format("->select(gridReferenceX >= {0} and gridReferenceX <= {1} and gridReferenceY >= {2} and gridReferenceY <= {3})",
    postalCode.GridReferenceX - radiusInMiles,
    postalCode.GridReferenceX + radiusInMiles,
    postalCode.GridReferenceY - radiusInMiles,
    postalCode.GridReferenceY + radiusInMiles);

  IList<PostalCode> allPostalCodes = BusinessClassesHelper.SelectObjects<PostalCode>(ServiceProvider, "PostalCode", criteria);
  return
    (
      from selectedPostalCode in allPostalCodes
      where postalCode.GetDistanceTo(selectedPostalCode) <= radiusInMiles
      select selectedPostalCode
    ).ToList<PostalCode>();
}



This first uses SQL to exclude all postal codes which cannot possibly be within the radius, then a simple LINQ query returns only the postal codes that are actually within the radius of the specified postal code. Another benefit is that this will also work for postal codes anywhere in the world; I could quite easily check how far (in a straight line) it is from my UK postal code to a US zip code!

2008-03-10

Test Driven MVC and ECO

I have decided that mocking IEcoServiceProvider is not the way to go. Your controller will use the mocked provider during testing but


     
  1. You don’t want to have to mock every service the provider may return, it’s a lot of work!
  2. You don’t want your controller using a mocked service, and then the EcoSpace using the real one!



At first I was mocking every possible service request: IUndoService, IPersistenceService, IOclService, etc. I get bored typing them out in this blog, so doing it in tests was really annoying me. I decided I would instead only mock the service in question. So if I were ensuring that an action won’t save an object with broken constraints I would mock GetEcoService<IConstraintProvider> and ensure that I always got a broken constraint.

The problem was that the test to ensure I can save a valid object would then invoke the IPersistenceService.UpdateDatabaseWithList method. In my EcoSpace I have decorated my persistence service so that it checks every object in the update list to ensure it has no invalid constraints. At the point it asks for IConstraintProvider it is using the real IEcoServiceProvider and as a result it gets the real IConstraintProvider. In short the object would only save if it were really valid, and not if my mocked constraint provider pretended it was.

Ultimately I found it much easier just to register the mock service on the EcoSpace. To do this all I had to do was to expose a public method on the EcoSpace like so

#if DEBUG
 public void RegisterMockService(Type serviceType, object serviceInstance)
 {
  RegisterEcoService(serviceType, serviceInstance);
 }
#endif


Now I can replace any service on the EcoSpace, so even the EcoSpace itself will get mocked services during testing. This is a lot less work!
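
For example, a test might swap in a mocked IConstraintProvider like this (a rough sketch; the expectations still need recording with Rhino Mocks exactly as in the examples further down, and MemoryEcoSpace is the test EcoSpace described below):

MockRepository mocks = new MockRepository();
IConstraintProvider mockConstraintProvider = mocks.CreateMock<IConstraintProvider>();
//...record whatever constraint behaviour the test needs here...

MemoryEcoSpace ecoSpace = new MemoryEcoSpace();
ecoSpace.RegisterMockService(typeof(IConstraintProvider), mockConstraintProvider);
ecoSpace.Active = true;
//Anything that asks this EcoSpace for IConstraintProvider now gets the mock,
//including the decorated persistence service.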


Talking of a lot less work: although I was initially annoyed that the new field test for the ASP .NET web extensions had switched from using interfaces back over to using objects, it turns out that testing is actually much easier than it was previously. Scott Hanselman posted some testing classes on his blog recently, but they came out all screwy, so I will repost the corrected version below (I hope he doesn’t mind). Testing is now as easy as this...

[TestClass]
public class AccountTests
{
 //This is a descendant of my real EcoSpace,
 //but I replace the persistence mapper with
 //PersistenceMapperMemory after construction so that
 //no DB access is needed.
 MemoryEcoSpace EcoSpace;
 AccountController Controller;
 MockRepository Mocks;
 FakeViewEngine FakeViewEngine;

 [TestInitialize]
 public void SetUp()
 {
  EcoSpace = new MemoryEcoSpace();
  EcoSpace.Active = true;

  Mocks = new MockRepository();
  FakeViewEngine = new FakeViewEngine();
  Controller = new AccountController();
  Controller.ViewEngine = FakeViewEngine;
  using (Mocks.Record())
  {
   Mocks.SetFakeControllerContext(Controller);
  }
 }

 [TestMethod]
 public void Create()
 {
  using (Mocks.Playback())
  {
   Controller.Create();
   Assert.AreEqual("Create", FakeViewEngine.ViewContext.ViewName);
  }
 }

 [TestMethod]
 public void CreateUpdate_ConfirmationEmailDoesNotMatch()
 {
  using (Mocks.Playback())
  {
   Controller.CreateUpdate("Mr", "Peter", "Morris", "me@home.com", "");
   Assert.IsTrue(ValidationHelper.ErrorExists(Controller.ViewData, NotASausageWebsite.Constants.ErrorMessages.ConfirmationEmailAddressDoesNotMatchEmailAddress));
  }
 }
}


public static class ValidationHelper
{
 public static bool ErrorExists(IDictionary<string, object> viewData, string errorMessage)
 {
  if (!viewData.ContainsKey(NotASausageWebsite.Constants.ViewDataKeys.Global.ErrorMessages))
   return false;
  List<string> errors = (List<string>)viewData[NotASausageWebsite.Constants.ViewDataKeys.Global.ErrorMessages];
  return errors.IndexOf(errorMessage) >= 0;
 }
}



Nice and easy! If you want the testing code from Scott here it is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Web;
using Rhino.Mocks;
using System.Web.Mvc;
using System.Web.Routing;
using System.Collections.Specialized;

namespace Tests.Helpers
{
public static class MvcMockHelpers
{
  public static HttpContextBase FakeHttpContext(this MockRepository mocks)
  {
    HttpContextBase context = mocks.PartialMock<HttpContextBase>();
    HttpRequestBase request = mocks.PartialMock<HttpRequestBase>();
    HttpResponseBase response = mocks.PartialMock<HttpResponseBase>();
    HttpSessionStateBase session = mocks.PartialMock<HttpSessionStateBase>();
    HttpServerUtilityBase server = mocks.PartialMock<HttpServerUtilityBase>();

    SetupResult.For(context.Request).Return(request);
    SetupResult.For(context.Response).Return(response);
    SetupResult.For(context.Session).Return(session);
    SetupResult.For(context.Server).Return(server);

    mocks.Replay(context);
    return context;
  }

  public static HttpContextBase FakeHttpContext(this MockRepository mocks, string url)
  {
    HttpContextBase context = FakeHttpContext(mocks);
    context.Request.SetupRequestUrl(url);
    return context;
  }

  public static void SetFakeControllerContext(this MockRepository mocks, Controller controller)
  {
    var httpContext = mocks.FakeHttpContext();
    ControllerContext context = new ControllerContext(new RequestContext(httpContext, new RouteData()), controller);
    controller.ControllerContext = context;
  }

  static string GetUrlFileName(string url)
  {
    if (url.Contains("?"))
      return url.Substring(0, url.IndexOf("?"));
    else
      return url;
  }

  static NameValueCollection GetQueryStringParameters(string url)
  {
    if (url.Contains("?"))
    {
      NameValueCollection parameters = new NameValueCollection();

      string[] parts = url.Split("?".ToCharArray());
      string[] keys = parts[1].Split("&".ToCharArray());

      foreach (string key in keys)
      {
        string[] part = key.Split("=".ToCharArray());
        parameters.Add(part[0], part[1]);
      }

      return parameters;
    }
    else
    {
      return null;
    }
  }

  public static void SetHttpMethodResult(this HttpRequestBase request, string httpMethod)
  {
    SetupResult.For(request.HttpMethod).Return(httpMethod);
  }

  public static void SetupRequestUrl(this HttpRequestBase request, string url)
  {
    if (url == null)
      throw new ArgumentNullException("url");

    if (!url.StartsWith("~/"))
      throw new ArgumentException("Sorry, we expect a virtual url starting with \"~/\".");

    SetupResult.For(request.QueryString).Return(GetQueryStringParameters(url));
    SetupResult.For(request.AppRelativeCurrentExecutionFilePath).Return(GetUrlFileName(url));
    SetupResult.For(request.PathInfo).Return(string.Empty);
  }
}
}



 public class FakeViewEngine : IViewEngine
 {
  public void RenderView(ViewContext viewContext)
  {
   ViewContext = viewContext;
  }

  public ViewContext ViewContext { get; private set; }

 }

2008-03-07

ECO, LINQ, Anonymous types, and Web Extensions

I’ve been finding LINQ + Anonymous types really complement ECO and the new ASP web extensions approach to writing websites. I may have mentioned recently that I don’t like the idea of passing instances of my business objects to the presentation layer. The reason is that someone else will be writing the views for this site and I want to be able to control what they are capable of displaying. It’s not just that though; your view might need to look completely different to how your business classes are structured, and one layer should not dictate the structure of another.

The example I am about to show does in fact have similar structures for the view and model. Having said that there is a slight difference in that the MinorVersion class has its own "int VersionNumber" property, and gets the major part of the version number from self.MajorVersion.VersionNumber. Anyway, now to get on with it.

My requirement was to show all major versions, within each major version show each minor version, and within each minor version show a list of what’s new. In addition, a minor version should only be displayed if its status is #Released, and a major version should not be displayed if it has no minor versions which meet this criterion.


The following code generates a structure like so

MajorVersion (VersionNumber)
 1..* MinorVersion (MajorVersionNumber, MinorVersionNumber)
  1..* WhatsNew (ID, Headline)



and stores the resulting anonymous type into the ViewData for my view to render.

ViewData[GlobalViewDataKeys.WhatsNewKeys.WhatsNewList] = 
 from majorVersion in software.MajorVersions
 where
  (from minorVersionCheck in majorVersion.MinorVersions
   where minorVersionCheck.Status == MinorVersionStatus.Released select minorVersionCheck ).Count() > 0
 select
  new
  {
   VersionNumber = majorVersion.VersionNumber,
   MinorVersions =
     from minorVersion in majorVersion.MinorVersions
     where minorVersion.Status == MinorVersionStatus.Released
    select
     new
     {
      MajorVersionNumber = majorVersion.VersionNumber,
      VersionNumber = minorVersion.VersionNumber,
      WhatsNew =
       from whatsNew in minorVersion.WhatsNew
       select
        new
        {
         ID = whatsNew.ID,
         Headline = whatsNew.Headline
        }
     }
  };
RenderView("AllHistory");




The code behind of my view reads like this:

protected void Page_Load(object sender, EventArgs e)
{
 MajorVersionRepeater.DataSource = ViewData[GlobalViewDataKeys.WhatsNewKeys.WhatsNewList];
 MajorVersionRepeater.DataBind();
}



And finally I use nested ASP:Repeater tags to render the nested HTML.


<ul class="AllHistoryMajorVersionList">
 <asp:Repeater id="MajorVersionRepeater" runat="server">
  <ItemTemplate>
   <li>
    Major version
     <%# DataBinder.Eval(Container.DataItem, "VersionNumber") %>
    <ul class="AllHistoryMinorVersionList">
     <asp:Repeater
      id="MinorVersionRepeater"
       DataSource='<%# DataBinder.Eval(Container.DataItem, "MinorVersions") %>'
      runat="server">
      <ItemTemplate>
       <li>
        Minor version
        <%# DataBinder.Eval(Container.DataItem, "MajorVersionNumber") %>.
        <%# DataBinder.Eval(Container.DataItem, "VersionNumber") %>
        <ul class="AllWhatsNewList">
         <asp:Repeater
          id="WhatsNewRepeater"
           DataSource='<%# DataBinder.Eval(Container.DataItem, "WhatsNew") %>'
          runat="server">
          <ItemTemplate>
           <li>
            <a href="/WhatsNew/View/<%# DataBinder.Eval(Container.DataItem, "ID") %>">
             <%# DataBinder.Eval(Container.DataItem, "Headline") %>
            </a>
           </li>
          </ItemTemplate>
         </asp:Repeater>
        </ul>
       </li>
      </ItemTemplate>
     </asp:Repeater>
    </ul>
   </li>
  </ItemTemplate>
 </asp:Repeater>
</ul>



I think the point is that there is just no need to pass your business class instances through to the UI layer. In fact, if you later changed the structure of your business classes this LINQ would no longer compile, whereas markup in the view is only evaluated at runtime, so you wouldn’t spot an error there until you tried to view the page.

2008-02-22

Custom config sections

The website I am writing will sell some software I have already written. In addition it will sell "Collateral", which is basically support files for the software. The software itself will only run if it finds a license, which is an RSA signed binary file containing information such as the email address of the licensee. In addition, some kinds of collateral will also be RSA signed with the licensee’s email address so that they will only work for that user, but not all collateral types are signed; for example, a Character is a custom file format and is signed, but a WAV file is not.

So this website needs to sell software + provide a license. It also needs to sell collateral, some of which will require signing and some of which will not.

Software and Collateral are both types of Product, and you can buy a Product. The problem is how should I deal with the 3 different types of licensing (license file, signed binary, no license)? In addition to this should I really create a concrete descendant of the "Software" class just to implement a way of providing a license? Erm, no!

If I were to add a concrete class for every type of product it would mean I have to change the model for the website every time a new product was released. What a pain! From the bottom up, this is what I did.

01: A separate assembly MyWebSite.Licensing with the following two interfaces in it (Edition is linked to Software, so we can buy a license for different editions, Std, Pro, etc):

public interface ISoftwareLicenseGenerator
{
  byte[] Generate(Edition edition, CustomerRole customer);
  string FileExtension { get; }
}

public interface ICollateralLicenseGenerator
{
  byte[] Generate(Collateral collateral, CustomerRole customer);
  string FileExtension { get; }
}


02: Another separate assembly with implementations of the interfaces in it. This assembly is then named something like "MyWebSite.Licensing.SoftwareName".

03: In the web.config file I can then add the following:

<licenseGenerators>
  <software>
    <add uniqueID="SoftwareName" type="MyNameSpace.SoftwareLicenseGeneratorClassName, NameOfAssembly"/>
  </software>
  <collateral>
    <add uniqueID="CollateralTypeName" type="MyNameSpace.CollateralLicenseGeneratorClassName, NameOfAssembly"/>
  </collateral>
</licenseGenerators>


This is parsed when the application is first run, giving me access to a list of generators available which I keep in a Dictionary<string, ......> so that I can look them up by name. If you directly add this kind of section to your web.config your app will not start, so I thought I’d show how to add custom sections + access their values from code....


01: Create a descendant of System.Configuration.ConfigurationSection

public class LicenseGeneratorsSection : ConfigurationSection
{
  public LicenseGeneratorsSection()
  {
  }

  [ConfigurationProperty("software")]
  public SoftwareLicenseGeneratorElementCollection SoftwareLicenseGenerators
  {
    get
    {
      return (SoftwareLicenseGeneratorElementCollection)this["software"];
    }
  }

  [ConfigurationProperty("collateral")]
  public CollateralLicenseGeneratorElementCollection CollateralLicenseGenerators
  {
    get
    {
      return (CollateralLicenseGeneratorElementCollection)this["collateral"];
    }
  }
}

This defines a section with no attributes + 2 element collections, one named "software" and one named "collateral". Next a class is needed for each which descends from System.Configuration.ConfigurationElementCollection so that System.Configuration knows what kind of object to add to these collections when we do <add ......./> within <software> or <collateral>.


02: Element collection, I will list the source only for one...

public class SoftwareLicenseGeneratorElementCollection : ConfigurationElementCollection
{
  //Creates a new instance for the xml <add uniqueID="X" type="Y"/>
  protected override ConfigurationElement CreateNewElement()
  {
    return new SoftwareLicenseGeneratorElement();
  }

  //Gets the key value, in this case the UniqueID property, this avoids duplicates
  protected override object GetElementKey(ConfigurationElement element)
  {
    return ((SoftwareLicenseGeneratorElement)element).UniqueID;
  }
}


03: Finally (for the config settings classes) a class is needed to define the attributes UniqueID and Type.

public class SoftwareLicenseGeneratorElement : ConfigurationElement
{
  public SoftwareLicenseGeneratorElement()
  {
  }

  [ConfigurationProperty("uniqueID", IsRequired = true, IsKey = true)]
  public string UniqueID
  {
    get { return (string)this["uniqueID"]; }
    set { this["uniqueID"] = value; }
  }

  [ConfigurationProperty("type", IsRequired = true)]
  public string Type
  {
    get { return (string)this["type"]; }
    set { this["type"] = value; }
  }
}

04: But before just putting the custom XML into web.config you need to declare this section and associate it with the new ConfigurationSection class

<configSections>
  <section name="licenseGenerators" type="MyWebsite.Configuration.LicenseGeneratorsSection, MyWebsite"/>
</configSections>

This tells ASP .NET to expect a custom section named "licenseGenerators" and to use the LicenseGeneratorsSection class in MyWebsite.dll to determine the structure of it. Here it is again:


<licenseGenerators>
  <software>
    <add uniqueID="SoftwareName" type="MyNameSpace.SoftwareLicenseGeneratorClassName, NameOfAssembly"/>
  </software>
  <collateral>
    <add uniqueID="CollateralTypeName" type="MyNameSpace.CollateralLicenseGeneratorClassName, NameOfAssembly"/>
  </collateral>
</licenseGenerators>



To read the config section at runtime is really simple

LicenseGeneratorsSection licenseGeneratorsSection;
licenseGeneratorsSection =
  (LicenseGeneratorsSection)ConfigurationManager.GetSection("licenseGenerators");


Here is the entire class source for the static LicenseGenerator class

public static class LicenseGenerator
{
  static Dictionary<string, ISoftwareLicenseGenerator> SoftwareLicenseGenerators = new Dictionary<string, ISoftwareLicenseGenerator>();
  static Dictionary<string, ICollateralLicenseGenerator> CollateralLicenseGenerators = new Dictionary<string, ICollateralLicenseGenerator>();

  static LicenseGenerator()
  {
    //Read the web.config
    LicenseGeneratorsSection licenseGeneratorsSection;
    licenseGeneratorsSection =
      (LicenseGeneratorsSection)ConfigurationManager.GetSection("licenseGenerators");

    foreach (SoftwareLicenseGeneratorElement currentElement in licenseGeneratorsSection.SoftwareLicenseGenerators)
      SoftwareLicenseGenerators[currentElement.UniqueID] = (ISoftwareLicenseGenerator)GetInstance(currentElement.Type);

    foreach (CollateralLicenseGeneratorElement currentElement in licenseGeneratorsSection.CollateralLicenseGenerators)
      CollateralLicenseGenerators[currentElement.UniqueID] = (ICollateralLicenseGenerator)GetInstance(currentElement.Type);
  }

  private static object GetInstance(string typeName)
  {
    Type objectType = Type.GetType(typeName);
    return Activator.CreateInstance(objectType);
  }

  public static ICollateralLicenseGenerator GetCollateralLicenseGenerator(string uniqueID)
  {
    return CollateralLicenseGenerators[uniqueID];
  }

  public static string[] GetCollateralLicenseGeneratorIDs()
  {
    List<string> result = new List<string>();
    foreach (KeyValuePair<string, ICollateralLicenseGenerator> kvp in CollateralLicenseGenerators)
      result.Add(kvp.Key);
    result.Sort();
    return result.ToArray();
  }

  public static ISoftwareLicenseGenerator GetSoftwareLicenseGenerator(string uniqueID)
  {
    return SoftwareLicenseGenerators[uniqueID];
  }

  public static string[] GetSoftwareLicenseGeneratorIDs()
  {
    List<string> result = new List<string>();
    foreach (KeyValuePair<string, ISoftwareLicenseGenerator> kvp in SoftwareLicenseGenerators)
      result.Add(kvp.Key);
    result.Sort();
    return result.ToArray();
  }
}



This solution allows me to add new products at runtime without any problems. Any time a new type of license generator is required I just create a new assembly with a class that implements the correct interface and then register it in the web.config file!
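
For anyone wondering what such a class looks like, here is a hypothetical skeleton (namespace, class name and licensing logic are placeholders) matching the <add type="..."/> entries in web.config above:

namespace MyWebSite.Licensing.SoftwareName
{
  public class SoftwareLicenseGenerator : ISoftwareLicenseGenerator
  {
    public byte[] Generate(Edition edition, CustomerRole customer)
    {
      //Build the license data for this edition/customer and RSA sign it here
      return new byte[0]; //placeholder
    }

    public string FileExtension
    {
      get { return ".lic"; } //placeholder extension
    }
  }
}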

All users

Yesterday I needed my app to read and write data from a folder to which all users have access. Having the data in the current user's data folder was unacceptable as this would have resulted in duplicate copies of the data; the MSI installer even generates a compiler warning telling me I shouldn’t use this folder! So I went for Environment.GetFolderPath(SpecialFolder.CommonApplicationData);

This seemed to work fine until I tested on Vista, at which point my app would "stop responding" and quit. With a bit of investigation I discovered that CommonApplicationData maps to c:\ProgramData on Vista, which to me looked good until I tried creating a read/write FileStream in that path and received an access denied exception. So, where was I supposed to store my data? Checking each of the values in the SpecialFolder enum I was surprised to see that there didn’t seem to be a suitable value.

So, I reflected over Environment.GetFolderPath and copied the code. I then started at 0 and worked up until I hit a path with the word "public" in it (running on Vista). The value I required was 24! I looked through the SpecialFolder enum and there was no entry for 24; what’s worse is that GetFolderPath won’t allow integer values. So, I had to reimplement it myself.

public static class SpecialFolders
{
  public static string GetFolderPath(SpecialFolderEx folder)
  {
    if (!Enum.IsDefined(typeof(SpecialFolderEx), folder))
      throw new ArgumentException("Unknown folder: " + folder.ToString());

    StringBuilder lpszPath = new StringBuilder(260);
    SHGetFolderPath(IntPtr.Zero, (int)folder, IntPtr.Zero, 0, lpszPath);
    string path = lpszPath.ToString();
    new FileIOPermission(FileIOPermissionAccess.PathDiscovery, path).Demand();
    return path;
  }

  [DllImport("shfolder.dll", CharSet = CharSet.Auto)]
  private static extern int SHGetFolderPath(IntPtr hwndOwner, int nFolder, IntPtr hToken, int dwFlags, StringBuilder lpszPath);
}

public enum SpecialFolderEx
{
  ApplicationData = 0x1a,
  CommonApplicationData = 0x23,
  CommonDocuments = 0x2e,
  CommonProgramFiles = 0x2b,
  Cookies = 0x21,
  Desktop = 0,
  DesktopDirectory = 0x10,
  Favorites = 6,
  History = 0x22,
  InternetCache = 0x20,
  LocalApplicationData = 0x1c,
  MyComputer = 0x11,
  MyDocuments = 5,
  MyMusic = 13,
  MyPictures = 0x27,
  Personal = 5,
  ProgramFiles = 0x26,
  Programs = 2,
  Recent = 8,
  SendTo = 9,
  StartMenu = 11,
  Startup = 7,
  System = 0x25,
  Templates = 0x15,
  Windows = 0x24
}


If, like me, you ever need to store data for all users to access, you should use SpecialFolderEx.CommonDocuments, because now it works a treat!
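
For example (a trivial sketch using the classes above; the file name is just illustrative):

string sharedDataFolder = SpecialFolders.GetFolderPath(SpecialFolderEx.CommonDocuments);
string settingsPath = Path.Combine(sharedDataFolder, "MyApp\\Settings.xml");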

2008-02-17

Test driven ECO

Here are my latest revelations :-)

01
Instead of having to mock IEcoServiceProvider and IOclPsService in order to avoid DB access, simply use PersistenceMapperMemory. This way I can create the objects I want, UpdateDatabase, and then run my tests. It’s much easier to read and, more importantly, less typing (see the sketch after point 02).

02
My page controllers no longer use an EcoSpace. Instead the code always uses a ServiceProvider property of type IEcoServiceProvider. When I want to test my controller I create an instance and set its ServiceProvider property. Now whenever the controller needs to do anything it will go through the ServiceProvider I specified.

This is beneficial for a number of reasons. Firstly, it means that I can create an EcoSpace in my test and set its PersistenceMapper to PersistenceMapperMemory before activating it. Secondly, I can also opt to pass a mocked IEcoServiceProvider which either returns the real service requested or returns a mocked one. An example of this is that I validate my page by using a registered IConstraintProvider interface (defined in DroopyEyes.Eco.Validation). I can check that a controller action won’t save a modified object if it is invalid. Instead of having to know how to make the object invalid I just mock the IConstraintProvider and have it always return a single constraint with the expression "false" so that it always fails. In addition, because I know the name of the constraint that is broken, I can then check ViewData["Errors"] and ensure that the controller action has displayed the error messages.
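
Putting points 01 and 02 together, a minimal sketch of the arrangement looks something like this (the controller and its ServiceProvider property are the ones described above, so treat the exact names as illustrative):

MyWebsiteEcoSpace ecoSpace = new MyWebsiteEcoSpace();
ecoSpace.PersistenceMapper = new PersistenceMapperMemory(); //no database access
ecoSpace.Active = true;

User user = new User(ecoSpace);
user.EmailAddress = "me@home.com";
ecoSpace.UpdateDatabase(); //"saves" into memory only

AccountController controller = new AccountController();
controller.ServiceProvider = ecoSpace; //the EcoSpace is itself an IEcoServiceProvider
//controller actions now resolve every ECO service through this provider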

Sure I can just write the action in a minute and know it works, but having these test cases ensures that if someone else modifies my project’s source code without fully understanding what they are doing I will know what they broke. Or, they will know what they broke and can fix it themself!

So there you are. Same end result, less code.

2008-02-14

ECO, should we mock it?

I watched a video on Rhino Mocks yesterday. What a great framework! Obviously I wanted to know if I could use this with ECO so I thought I'd give it a try.

In my website's AccountController there is a method like so

public void AttemptLogin(string emailAddress, string password, string redirectUrl)
{
}


Now I could just go ahead and write some OCL to find the user, but instead of doing this I really want to separate the code a bit. So I created a class

public class UserRepository
{
  private readonly IEcoServiceProvider ServiceProvider;

  public UserRepository(IEcoServiceProvider serviceProvider)
  {
    ServiceProvider = serviceProvider;
  }

  public User GetByEmailAddressAndPassword(string emailAddress, string password)
  {
    string searchEmail = BusinessClassesHelper.EscapeOcl(emailAddress);
    string criteria = string.Format("->select(emailAddress.sqlLikeCaseInsensitive('{0}'))", searchEmail);
    return BusinessClassesHelper.SelectFirstObject<User>(ServiceProvider, "User", criteria);
  }
}


Now I can get my user like so....

public void AttemptLogin(string emailAddress, string password, string redirectUrl)
{
  MyWebsiteEcoSpace ecoSpace = new MyWebsiteEcoSpace();
  ecoSpace.Active = true;
  try
  {
    UserRepository repository = new UserRepository(ecoSpace);
    MyWebsite.Model.User user = repository.GetByEmailAddressAndPassword(emailAddress, password);
  }
  finally
  {
    ecoSpace.Active = false;
  }
}


So what's the benefit? The important thing to note is that I pass in an instance of IEcoServiceProvider to the UserRepository object. So if I want to test the UserRepository class on its own I can pass a dummy object for the serviceProvider. This means that I don't have to access the DB which would slow things down (especially if I have to keep clearing the DB down), in fact I don't even need to connect to the DB at all!

If you remember UserRepository.GetByEmailAddressAndPassword() looks like this
public User GetByEmailAddressAndPassword(string emailAddress, string password)
{
  string searchEmail = BusinessClassesHelper.EscapeOcl(emailAddress);
  string criteria = string.Format("->select(emailAddress.sqlLikeCaseInsensitive('{0}'))", searchEmail);
  return BusinessClassesHelper.SelectFirstObject<User>(ServiceProvider, "User", criteria);
}


and BusinessClassesHelper uses the IOclPsService and IOclService in combination to get to the result. Surely this is all too complicated to mock? Not with Rhino, no!

[TestFixture]
public class UserRepositoryTests
{
  MockRepository Mocks;
  IEcoServiceProvider MockServiceProvider;
  IOclPsService MockOclPsService;
  MyWebsiteEcoSpace EcoSpace;

  [SetUp]
  public void SetUp()
  {
    //Create a mock repository
    Mocks = new MockRepository();

    //Create the mock IEcoServiceProvider
    MockServiceProvider = Mocks.CreateMock<IEcoServiceProvider>();

    //I also need a mock IOclPsService to avoid DB access
    MockOclPsService = Mocks.CreateMock<IOclPsService>();

    //Create a transient version of my EcoSpace
    EcoSpace = new MyWebsiteEcoSpace ();
    EcoSpace.PersistenceMapper = null; //No persistence!
    EcoSpace.Active = true;
  }

  [TearDown]
  public void TearDown()
  {
    EcoSpace.Active = false;
    Mocks.ReplayAll(); //Just in case we forgot, calling twice has no effect!
    Mocks.VerifyAll(); //Ensure everything expected was called
  }

  [Test]
  public void GetUserByEmailAddressAndPassword()
  {
    //Create a list of users to return from the mock IOclPsService
    IObjectList userList = EcoSpace.VariableFactory.CreateTypedObjectList(typeof(User), false);
    
    //Add a single user to that list
    User expectedUser = new User(EcoSpace);
    expectedUser.EmailAddress = "me@home.com";
    expectedUser.SetPassword("1234567890");
    userList.Add(expectedUser.AsIObject());

    //Start specifying what we expect to be called, and what we should do as a result
    Mocks.Record();

    //When GetEcoService<IOclPsService> is called return our MockOclPsService
    Expect.Call(MockServiceProvider.GetEcoService<IOclPsService>()).Return(MockOclPsService);
  
    //Same for GetEcoService(typeof(IOclPsService))
    Expect.Call(MockServiceProvider.GetEcoService(typeof(IOclPsService))).Return(MockOclPsService);

    //When asked for the IOclService (not PS service) return the real one
    Expect.Call(MockServiceProvider.GetEcoService<IOclService>()).Return(EcoSpace.Ocl);
    Expect.Call(MockServiceProvider.GetEcoService(typeof(IOclService))).Return(EcoSpace.Ocl);

    //When MockOclPsService.Execute is executed return our userList
    Expect.Call(MockOclPsService.Execute(null)).Return(userList);
    //This means we don't care what the exact parameter is, any OCL will do
    LastCall.IgnoreArguments();

    //Now go into play back mode
    Mocks.ReplayAll();

    //Create the UserRepository using our mock services
    UserRepository repository = new UserRepository(MockServiceProvider);

    //Ask for the user
    User foundUser = repository.GetByEmailAddressAndPassword(expectedUser.EmailAddress, "1234567890");

    //Ensure that we got the same user back
    Assert.AreEqual(expectedUser, foundUser, "Found the wrong user");
  }
}



Nice eh :-)

2008-02-13

Unit testing MonoRail controllers

I spent yesterday finishing off (mostly) my business model, then the end of yesterday + today writing test cases for those classes. Everything was going great; I found at least 3 errors in my code that I hadn’t realised were there and also realised there were a few more things I needed.

Then it was time to start testing the controllers in my MonoRail site. What a disaster!

Attempt 1:
[Test]
public void AdminOnly_Home()
{
  AdminController controller = new AdminController();
  controller.Home();
  Assert.IsTrue(Controller.Response.WasRedirected, "Should have been redirected");
}


The problem with this was pretty obvious: the controller doesn’t have a Response etc. set up. So along came attempt 2:

[Test]
public void AdminOnly_Home()
{
  AdminController controller = new AdminController();
  PrepareController(controller);
  controller.Home();
  Assert.IsTrue(Controller.Response.WasRedirected, "Should have been redirected");
}


Now the controller is set up with mock objects and will run! Unfortunately the BeforeAction filter on my action was not being executed. Aha! Pretty obvious problem! If I call the method directly how can the framework possibly find all of the reflection attributes and process them etc? *slaps head*

Attempt 3
[Test]
public void AdminOnly_Home()
{
  AdminController controller = new AdminController();
  PrepareController(controller, "Admin", "Home");
  controller.Process(Controller.Context, Controller.ControllerContext);
  Assert.IsTrue(Controller.Response.WasRedirected, "Should have been redirected");
}


Still no joy! The filters just aren’t being executed. Someone on the user groups said that this is expected behaviour and that the filter should be tested in isolation. Whereas I agree for the most part, unfortunately it doesn’t apply in this case. My ECO extensions to MonoRail allow the developer to specify pooling, session, default EcoSpace types, and so on. If these reflection attributes aren’t processed then the action just isn’t going to act in the same way it will at runtime!

At the moment I am sorely disappointed! I was really looking forward to writing a test driven website but unless this guy was wrong it doesn’t look like it is going to be possible!

It’s at times like these I wonder how difficult it really is to write your own MVC framework? Maybe I will take another look at the MS offering. If I had enough free time I'd make my own :-)

2008-02-09

Validation

I have a model like so

Product 1----* ProductVersion
ProductVersion 1----* ProductEdition

ProductVersion can be in one of two states: UnderDevelopment / Released

ProductEdition has a DownloadUrl:string attribute which is only required if self.version.status = #Released


The validation for ProductEdition works perfectly; I cannot leave the DownloadUrl blank if the ProductVersion has already been released. Unfortunately, when I already have a number of ProductEdition instances with no DownloadUrl and then make my ProductVersion live, the editions are not validated because they are not dirty. So I needed some way to ensure that when a ProductVersion is validated all related ProductEdition instances are also validated.

Step 01: Add a way to allow ProductVersion to identify other objects to be validated.

In the business classes project I added the following interface.

public interface IValidationExtender
{
  IEnumerable GetConstraintedObjects();
}


My ProductVersion can do this

IEnumerable IValidationExtender.GetConstraintedObjects()
{
  List<IObject> result = new List<IObject>();
  foreach (ProductEdition currentEdition in Editions)
    result.Add(currentEdition.AsIObject());
  return result;
}



Step 02: Create a validation service which validates all objects (only the implemented methods are shown)

public class ExtendedConstraintProvider : IConstraintProvider
{
  private IConstraintProvider ModeledConstraintProvider;

  public void GetConstraintsForObject(IObject instance, List<IConstraint> constraints)
  {
    if (instance == null)
      throw new ArgumentNullException("instance");

    //Delegate to GetConstraintsForObjects
    GetConstraintsForObjects((IObjectList)instance.GetAsCollection(), constraints);
  }

  public void GetConstraintsForObjects(IObjectList objectList, List<IConstraint> constraints)
  {
    if (objectList == null)
      throw new ArgumentNullException("objectList");
    if (objectList.Count == 0)
      return;

    //Get all constrained objects
    Dictionary<IObject, object> includedObjects = new Dictionary<IObject, object>();
    foreach (IObject currentObject in objectList)
      RecursiveGetExtendedObjects(currentObject, includedObjects);

    //Add the objects to a list
    IObjectList newInstances = EcoServiceHelper.GetVariableFactoryService(objectList[0]).CreateUntypedObjectList(true);
    foreach (KeyValuePair<IObject, object> kvp in includedObjects)
      newInstances.Add(kvp.Key);

    //Return the constraints from ModeledConstraintProvider
    ModeledConstraintProvider.GetConstraintsForObjects(newInstances, constraints);
  }

  private void RecursiveGetExtendedObjects(IObject currentObject, Dictionary<IObject, object> includedObjects)
  {
    //Don't process the same object twice
    if (includedObjects.ContainsKey(currentObject))
      return;

    includedObjects.Add(currentObject, null);

    //If the class implements IValidationExtender then add its constrained objects
    IValidationExtender extender = currentObject.AsObject as IValidationExtender;
    if (extender != null)
    {
      foreach (IObject dependentObject in extender.GetConstraintedObjects())
        RecursiveGetExtendedObjects(dependentObject, includedObjects);
    }
  }
}




Step 03: Register the service in the EcoSpace

public InteevoWebsiteEcoSpace(): base()
{
  InitializeComponent();
  RegisterEcoService(typeof(IConstraintProvider), new ExtendedConstraintProvider());
}



Now I can validate a list of dirty objects using EcoSpace.GetEcoService<IConstraintProvider>().GetConstraintsForObjects.


Step 04: Last point of defence, ensure that no invalid objects may be saved. Only relevant methods are shown.

internal class ValidatingPersistenceService : IPersistenceService
{
  private IPersistenceService Inner;
  private IEcoServiceProvider ServiceProvider;

  internal ValidatingPersistenceService(IEcoServiceProvider serviceProvider)
  {
    if (serviceProvider == null)
      throw new ArgumentNullException("serviceProvider");

    ServiceProvider = serviceProvider;
    Inner = ServiceProvider.GetEcoService<IPersistenceService>();
    if (Inner == null)
      throw new ArgumentException("ServiceProvider did not provide an instance for IPersistenceService");
  }

  private IConstraintProvider constraintProvider;
  private IConstraintProvider ConstraintProvider
  {
    get
    {
      if (constraintProvider == null)
      {
        constraintProvider = ServiceProvider.GetEcoService<IConstraintProvider>();
        if (constraintProvider == null)
          throw new InvalidOperationException("IConstraintProvider not registered as an ECO service");
      }
      return constraintProvider;
    }
  }



  void IPersistenceService.UpdateDatabaseWithList(IObjectList list)
  {
    ValidateObjects(list);
    Inner.UpdateDatabaseWithList(list);
  }

  private void ValidateObjects(IObjectList objects)
  {
    List<IConstraint> constraints = new List<IConstraint>();
    ConstraintProvider.GetConstraintsForObjects(objects, constraints);
    foreach (DroopyEyes.EcoExtensions.Validation.IConstraint currentConstraint in constraints)
    {
      if (!currentConstraint.IsValid)
      {
        throw new InvalidOperationException(
          string.Format("Cannot update database with invalid objects:\r\n{0} : {1}",
            currentConstraint.Instance.UmlClass.Name, currentConstraint.Name)
        );
      }
    }//foreach constraint
  }
}



Step 05: Replace the standard IPersistenceService in the EcoSpace

public InteevoWebsiteEcoSpace(): base()
{
  InitializeComponent();
  RegisterEcoService(typeof(IPersistenceService), new ValidatingPersistenceService(this));
  RegisterEcoService(typeof(IConstraintProvider), new ExtendedConstraintProvider());
}



Finally I have an implementation which does the following

A: Allows me to get constraints for dirty objects + all relevant objects
B: Prevents the app from saving objects with broken constraints.

2008-02-07

EcoRail

The whole idea of having a controller and a view is so that the view renders only exactly what it is given, and the controller is able to give it whatever data it likes from wherever it needs to obtain it.

After working with ECO and Monorail for a while it has been a real pleasure, but I am starting to think that maybe exposing ECO objects directly to the view is not the right approach.

If for example I put an Employee into the PropertyBag the view can easily display $Employee.Salary. This might not be a problem when you develop both the controllers and the view but in my case someone else will ultimately create the views. Do I really want them to be able to have access to this information? In addition, what if the view engine they use has a scripting language that is able to set values? Setting $Employee will merely set the PropertyBag["Employee"] value, but setting $Employee.Salary could see a certain view developer buying a new car next month.

I am very tempted to change the site whilst it is in its early stages of development. It does seem more logical to have small chunks of data or small classes to pass back and forth between the controller and the view. This is more in line with the design I have in my PocketPC application.

If that is the case it will probably mean that EcoRails is redundant! Actually, only the EcoDataBind part would really be redundant I think, the rest would still be quite useful!

Your bug is my bug

I recently released an update to some software and a bug slipped through the net. It introduced some odd behaviour with a control named SmartGrid. After some testing I was able to determine that it wasn't my fault and that I could reproduce a bug in SmartGrid. I hate bugs in other people's source code, I can't fix it, I am at their complete mercy.

Thankfully the Resco support was amazing! I posted on their forums and immediately someone sent me instructions on where to send my project. The next morning I was disappointed to see an email saying that the project worked fine. I posted again and almost immediately someone had offered to chat on skype.

We did that for a while, both confused by the problem. We then went on to use Remote Assistance so that he could observe my bug which he wasn't experiencing.

In the end the problem was very confusing. I had Version A of the DLL in which the error occurred. I upgraded to the latest version (B) and it still occurred. The guy at Resco sent me a DLL with debug strings being sent to the IDE (C.DEBUG) and everything worked. I reverted to Version B and now it worked whereas before it didn't. Don't you just hate phantom bugs that "fix" themselves?

In the end the Resco guy sent me a full build of the very latest code base (Version C) and all seems fine.

Both of us were at a complete loss, but thanks to their excellent support the problem with my application was kept to as short a time as possible!

2008-02-05

MaxLength

Implementing HTML maxlength was a bit of a pain. Not to write the helpers though, that was easy....

$EcoModelHelper.AttributeLength($Product, "ID")


But when it came to specifying that in the <input> it was too much work! This is how it is done statically...

$FormHelper.TextFieldValue("Product.ID", $Product.ID, "%{maxlength='32'}")


Now I had to replace the static 32 with the EcoModelHelper code.

#set ($ProductIDLength = $EcoModelHelper.AttributeLength($Product, "ID"))
$FormHelper.TextFieldValue("Product.ID", $Product.ID, "%{maxlength='$ProductIDLength'}")


This was starting to look like too much typing!

So instead I have decided to add new methods to the EcoFormHelper. Here is the first:

$EcoFormHelper.ObjectTextField("Product.ID", $Product, "ID")


This will output something like this

<input type="text" id="Product_ID" name="Product.ID" value="AlterEgo" maxlength="32" />

It just uses the normal MonoRail $FormHelper.TextFieldValue helper but passes it the current value of the object and the maximum length as defined in the model.

More work up front, less in the long run :-)

EcoRail validation

Here is yesterday's update.

I wanted a way to validate the user input. Seeing as there are already constraints in the model, this seemed the obvious approach to take. The HTML in my main layout (MasterPage) was changed like so

<body>
  #if ($Errors && $Errors.Count > 0)
    <ul class="errors">
      #foreach ($currentError in $Errors)
        <li>$currentError</li>
      #end
    </ul>
  #end

  $childContent

</body>


This outputs all errors passed in PropertyBag["Errors"], or in my case Flash["Errors"].


To validate my product input I changed my controller like so:

[AllowEcoSpaceDeactivateDirty(true)]
public void Modify([EcoDataBind("Product", Allow = "ID,Name", NoObjectIdAction = ObjectIdAction.CreateNewInstance)]Product product)
{
  PropertyBag["Product"] = product;
  IList<string> errors = GetErrorsForAllDirtyObjects();
  if (errors.Count > 0)
    Flash["Errors"] = errors;
  else
  {
    EcoSpace.UpdateDatabase();
    RedirectToAction("List");
  }
}


GetErrorsForAllDirtyObjects uses the DefaultEcoSpaceType to find the EcoSpace instance and then checks all constraints of all dirty objects in order to return a list of strings. Available validation routines are


protected IList<string> GetErrorsForObject(IObjectProvider instance)

Gets error messages for broken constraints on a single object


protected IList<string> GetErrorsForAllDirtyObjects(Type ecoSpaceType)

Gets the EcoSpace instance of the type specified and then returns error messages for broken constraints on all modified objects


protected IList<string> GetErrorsForAllDirtyObjects()

Calls GetErrorsForAllDirtyObjects(Type ecoSpaceType) using the DefaultEcoSpaceType specified



Now I have to take into account that not everyone wants to have their error messages returned from OCL constraints defined in the model. To cater for this my validation routines do not directly read the model; instead they use a virtual property

private IConstraintProvider m_ConstraintProvider;
protected virtual IConstraintProvider ConstraintProvider
{
  get
  {
    if (m_ConstraintProvider == null)
      m_ConstraintProvider = new ModeledConstraintProvider();
    return m_ConstraintProvider;
  }
}


The default implementation returns an instance of ModeledConstraintProvider which is a class in the DroopyEyes.Eco.Extensions project, but you can now override this property on your controller and return any implementation you like.
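
For example (MyCustomConstraintProvider being a placeholder for your own implementation):

protected override IConstraintProvider ConstraintProvider
{
  get { return new MyCustomConstraintProvider(); }
}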

So now I have OCL validation from the model. Next I think I will add an EcoModelHelper so that you can obtain information from the model; to start with I think all I will implement is something like the following

$EcoModelHelper.Length("Person", "FirstName")

2008-01-30

MonoRail

I'm working on a new website for work. I've decided to use ECO for the business model due to how much time it saves me. I took a look at the new MVC ASP approach provided by Microsoft recently and was a bit disappointed. There were bugs in some pretty basic areas that would have been an annoyance to code around, and it just didn't feel "ready".

So, I've decided to take another look at MonoRail. I'd already written an ECO implementation for MR in the past but I decided to start the implementation from scratch. This was mainly inspired by the new EcoSpaceManager in ECOIV for ASP .NET. Using an EcoSpaceManager you can easily utilise many instances of different types of EcoSpace in the same page. I decided I would do the same.

Unlike the EcoSpaceManager I haven't gone for unique string values for identifying the EcoSpace instance I want. That approach is good in ASP .NET where you want to bind different components together to generate your HTML but it doesn't really make as much sense when everything you produce is written in code. If you want two instances of the same EcoSpace you can just use EcoSpaceStrategyHandler.GetEcoSpace().

Anyway, on to the detail:

The first thing I have done is to specify EcoSpace and EcoSpaceProvider settings as reflection attributes on the class (controller) and method (action). Like so

[UseEcoSpacePool(true)]
public class AccountController: EcoSmartDispatcherController
{
  public void Index()
  {
  }

  [UseEcoSpacePool(false)]
  public void SomethingElse()
  {
  }
}
In this example the EcoSpace pool will be used for all actions (methods) except SomethingElse which explicitly says not to use it.

If you want to specify EcoSpace type specific settings in your web app then this is possible too:

[UseEcoSpacePool(true)]
[UseEcoSpacePool(typeof(MyEcoSpace), false)]
public class AccountController: EcoSmartDispatcherController
{
  public void Index()
  {
  }

  [UseEcoSpacePool(typeof(MyEcoSpace), true)]
  public void SomethingElse()
  {
  }
}
In this example the default is to use the EcoSpace pool unless you are retrieving an instance of MyEcoSpace, in which case it won't be used. However, if the action being invoked is SomethingElse() then retrieving an instance of MyEcoSpace will use the pool. The order of priority is as follows:
  1. Apply settings on the class that are not specific to an EcoSpace type.
  2. Apply settings on the method that are not specific to an EcoSpace type.
  3. Apply settings on the class that are specific to an EcoSpace type.
  4. Apply settings on the method that are specific to an EcoSpace type.
So the order of priority is that Method settings override Class settings, and then EcoSpace type specific settings override non specific settings.

Pretty versatile eh? But how do you get an instance of an EcoSpace? Well, that's pretty simple too!
public void Index()
{
  MyEcoSpace myEcoSpace = GetEcoSpace<MyEcoSpace>();
  //This will return the same instance
  MyEcoSpace myEcoSpace2 = GetEcoSpace<MyEcoSpace>();
}
Any EcoSpaces requested in this manner will automatically be "released" immediately after the method finishes executing so you don't need to worry about it at all. I say "released" because it will either be Disposed, stuffed into the session, or returned to the pool based on your EcoSpaceProvider settings for this class/method/EcoSpace type.

That's not the end of the story though. You can bind instances of your objects directly to HTML and back. To do this you need to identify your business class with a reflection attribute "EcoDataBind", like so
[AllowDeactivateDirty(true)]
public void Create(
  [EcoDataBind(typeof(MyEcoSpace), "Product", CreateIfNoObjectId=true)]Product product)
{
  PropertyBag["Product"] = product;
}
Here I have stated that it is okay to Dispose the EcoSpace instance if it contains dirty objects at the end of this method. The method itself takes a single parameter named "product" of type "Product". The EcoDataBind might look a bit overwhelming but it says this
  1. The EcoSpace type needed to house the Product object is MyEcoSpace.
  2. The prefix in the form to look for is "Product", so when you see <input name="Product.Name"> MonoRail knows that the value should go into the Name property of this product.
  3. If there is no ExternalId in the form identifying the object to fetch from the data storage before updating then a new instance should be created.
As a result when you run your app and go to localhost/account/create you will see that the product parameter has a new instance in it. When the user posts the form back you will again see an instance, but containing the updated values. What does the HTML look like? I have used the Brail view engine and HTML helpers to output the HTML I need. This allows you to use whatever HTML you like but then easily add the <input> etc based on your current object.

${HtmlHelper.Form('create.rails')}

  ${EcoFormHelper.PersistedExternalId('Product.ExternalId', Product)}

  Name : ${HtmlHelper.InputText('Product.Name', Product.Name)}

  Current version number : ${HtmlHelper.InputText('Product.CurrentVersionNumber', Product.CurrentVersionNumber)}

  ${HtmlHelper.SubmitButton('Save')}

${HtmlHelper.EndForm()}
I have split the lines up a bit to make them easier to visually separate.

First an EcoFormHelper is told to output a hidden input named Product.ExternalId for the product we set in the C# method (see PropertyBag["Product"] = product). EcoFormHelper.ExternalId will always output the ExternalId for the object, whereas PersistedExternalId will only output it if the object is not new; this is useful in situations like this, where the object was disposed of along with the EcoSpace it belonged to and we can just create a new instance.

Next the HtmlHelper gives us an <input> named "Product.Name" and its value is set to whatever is in the Product's Name property. The same is done for CurrentVersionNumber.

A Submit button is then generated so that the user may post their changes.

Summary
Well, this little example shows that I can implement a nice clean MVC style approach to writing web apps with ECO and not have to worry about constructing EcoSpace instances in code, fetching objects from the data storage and so on manually; everything is done for me.

2008-01-24

Single application instance

I needed my app to be a single-instance app. It was easy to implement, like so:

bool mutexIsNew;
Mutex mutex = new Mutex(true, "SomeUniqueID", out mutexIsNew);
if (mutexIsNew)
  Application.Run(new MainForm());


A problem arose though when my app started to need to receive command line arguments. How do you also pass those on to the original instance? There is a class for this purpose named WindowsFormsApplicationBase; it's in Microsoft.VisualBasic.dll.

01: Add a class to your project like so:
internal class SingleInstanceApplication : WindowsFormsApplicationBase
{
  private MainForm MainFormInstance;

  internal SingleInstanceApplication()
  {
    IsSingleInstance = true;
    EnableVisualStyles = true;
    MainFormInstance = new MainForm();
    MainForm = MainFormInstance;
  }

  protected override bool OnStartup(StartupEventArgs eventArgs)
  {
    //Pass the command line on to the main form before handing control to the base class
    MainFormInstance.AcceptCommandArguments(eventArgs.CommandLine);
    return base.OnStartup(eventArgs);
  }

  protected override void OnStartupNextInstance(StartupNextInstanceEventArgs eventArgs)
  {
    base.OnStartupNextInstance(eventArgs);
    MainFormInstance.AcceptCommandArguments(eventArgs.CommandLine);
  }
}


02: Add a method to your MainForm class like so:
internal void AcceptCommandArguments(IList args)
{
}


03: Finally change your project source code like so:

[STAThread]
static void Main(string[] args)
{
  Application.EnableVisualStyles();
  Application.SetCompatibleTextRenderingDefault(false);
  SingleInstanceApplication singleInstanceApplication = new SingleInstanceApplication();
  singleInstanceApplication.Run(args);
}

2008-01-17

RSA signed streams

I'm writing an app that needs me to be able to write to a file and make sure that nobody changes it. The idea that I had was to create 2 classes:

  1. WritableSignedStream:

    Accepts a targetStream parameter + a privateKeyXml string. I add some space at the beginning of the target stream to reserve it for header info. You then write to the stream as normal (it reports a smaller Length than that of the target stream so your app is unaware of the header). When you close the stream it writes the public key to the stream + an RSA encrypted hash of the data part of the file.
  2. ReadableSignedStream:

    Accepts a sourceStream parameter in the constructor. Just before you read for the first time it will compute an MD5 hash of the data part of the file, and then compare it with the signed hash in the file (which I first decrypt using the stored public key).

These classes therefore provide two functions:

  • You can sign a stream; the app can check that the stream was signed by you by inspecting the public key within it.
  • You can additionally pass a publicKeyXml to the ReadableSignedStream constructor and it will use that instead of the public key within the outer stream; this allows you to ensure nobody has altered your file in any way.
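
To give a feel for the first class, here is a minimal sketch of what its Close override might do. The targetStream and privateKeyXml fields, the HeaderSize constant, and the WriteHeader method are assumptions for illustration rather than the actual implementation:

public override void Close()
{
  // Hash only the data portion (everything after the reserved header)
  targetStream.Position = HeaderSize;
  byte[] hash = new MD5CryptoServiceProvider().ComputeHash(targetStream);

  // Sign the hash with the private key
  RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
  rsa.FromXmlString(privateKeyXml);
  RSAPKCS1SignatureFormatter formatter = new RSAPKCS1SignatureFormatter(rsa);
  formatter.SetHashAlgorithm("MD5");
  byte[] signature = formatter.CreateSignature(hash);

  // Store the public key and the signature in the reserved header
  WriteHeader(rsa.ToXmlString(false), signature);
  base.Close();
}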

Anyway, to the point. I could always use RSACryptoServiceProvider.Encrypt() to encrypt the MD5 hash, but whenever I tried to Decrypt() I would get an exception telling me that my public key was a "Bad key".

It annoyed me for a while, but then I realised something pretty obvious really. The way RSA works, you are supposed to encrypt using the public key and decrypt using the private key; I was using it the wrong way around. For signing a hash with the private key, the RSAPKCS1SignatureFormatter/Deformatter classes are the right tool. The solution was simple....

(Signing)
RSACryptoServiceProvider RsaProvider;
RsaProvider = new RSACryptoServiceProvider();
RsaProvider.FromXmlString(privateKeyXml);

MD5 md5 = new MD5CryptoServiceProvider();
byte[] md5HashBuffer = md5.ComputeHash(dataStream);
byte[] signedMD5HashBuffer = GetSignedMd5Hash(md5HashBuffer);


private byte[] GetSignedMd5Hash(byte[] hashBuffer)
{
  RSAPKCS1SignatureFormatter rsaFormatter = new RSAPKCS1SignatureFormatter();
  rsaFormatter.SetKey(RsaProvider);
  rsaFormatter.SetHashAlgorithm("MD5");
  return rsaFormatter.CreateSignature(hashBuffer);
}


(Verifying)
MD5 md5 = new MD5CryptoServiceProvider();
byte[] currentMD5HashBufferForFile = md5.ComputeHash(dataStream);
RSACryptoServiceProvider rsaProvider = new RSACryptoServiceProvider();
rsaProvider.FromXmlString(PublicKeyXml);


RSAPKCS1SignatureDeformatter rsaDeFormatter = new RSAPKCS1SignatureDeformatter();
rsaDeFormatter.SetKey(rsaProvider);
rsaDeFormatter.SetHashAlgorithm("MD5");
if (!rsaDeFormatter.VerifySignature(currentMD5HashBufferForFile, FileEncryptedMd5Hash))
  throw new Exception("Someone has messed with the file");

I hope someone finds this useful :-)

No symbols loaded

This has been driving me mad for hours now! Whenever I run my Pocket PC Compact Framework app I cannot debug it! None of my breakpoints are hit; each one just shows as an empty circle instead of a solid one.

So, what was the solution? I tried deleting all PDB files on my hard disk but that didn't do it. In the end manually deleting all of the files previously deployed to my PPC did the trick. Maybe VS couldn't overwrite them or something? No idea, but at least it works now :-)

2007-09-29

Derived properties that you don't want cached

I'm rewriting a Win32 application using ECO. One of the things this application allows the user to do is to specify their preferred unit of measurement for height, distance, depth, weight, and speed.

To achieve this I have an ECO class named ApplicationSettings, a singleton which identifies the units to use. I then have two properties for each value....

InternalLandingDistance
DisplayLandingDistance

The idea is that I should never expose Internal* properties in the GUI but display their corresponding Display* property instead. The Display* property will read/write the Internal* property and perform some kind of conversion on it.

If I always store my values using the unit with the highest accuracy (Distance = feet, Depth = millimetres, Weight = Pounds, Speed = KPH) then I just need to convert them by the correct factor when reading/writing from the Display* properties.
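
As a rough sketch of what those conversions might look like (the DistanceUnit enum, the ApplicationSettings.Instance singleton accessor, and the specific factor are my assumptions for illustration):

private float ConvertDistanceToDisplay(float internalFeet)
{
  // Values are stored internally in feet; convert to the user's preferred unit
  switch (ApplicationSettings.Instance.DistanceMeasurement)
  {
    case DistanceUnit.Metres:
      return internalFeet * 0.3048f;
    default:
      return internalFeet;
  }
}

private float ConvertDistanceToInternal(float displayValue)
{
  // Convert back from the user's preferred unit to feet for storage
  switch (ApplicationSettings.Instance.DistanceMeasurement)
  {
    case DistanceUnit.Metres:
      return displayValue / 0.3048f;
    default:
      return displayValue;
  }
}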

This isn't rocket science; obviously I can just implement these as reverse derived attributes, right? The problem is that I am using a new EcoSpace per form, so if someone changes ApplicationSettings.DistanceMeasurement in one EcoSpace my derived member in another has no way of knowing, unless I want to implement synchronisation.

Well, my app can work stand-alone or on a network. I don't want to over complicate things, so what I decided I needed was a reverse derived property with no subscriptions and no caching, but that just isn't possible.

Well, actually, it is. It's just so obvious! Instead of marking my members Derived I marked them as Transient and HasUserCode. I implement the code like so.....


public float DisplayLandingDistance
{
  get
  {
    return ConvertDistanceToDisplay(InternalLandingDistance);
#if NeverDoThis
    {EcoModeler generated code here}
#endif
  }
  set
  {
    InternalLandingDistance = ConvertDistanceToInternal(value);
#if NeverDoThis
    {EcoModeler generated code here}
#endif
  }
}



Now my values will be calculated every time they are read instead of being cached. The "#if NeverDoThis" bit is there so that EcoModeler will always put its auto-generated accessor code within a region that never executes. I could just put a "return" above it, but I don't like those "Unreachable code detected" warnings!

2007-08-31

RetrieveChanges(out ignoredChanges)

Here's another quick blog about a new ECO IV feature. I asked for this during development because I had a particular problem I had to solve.

My new application has a class named "PlannedCall". When the user logs in they see a list of planned calls that are
A: Active
B: Not already actioned
C: Assigned to the current user, or not assigned to anyone
D: EarliestCallTime <= Today + 1

As time goes on there will be a lot of instances of this class in my DB, so obviously I want to use an OclPSHandle to select my objects; otherwise I will end up with a whole load of objects in memory that I do not need. However, when another user actions a planned call its Actioned property becomes true; if I were using an ExpressionHandle the planned call would disappear from the presented list, but with an OclPSHandle it will not. This is where the new feature comes in!

When you call PersistenceService.RetrieveChanges a collection of DBChange will be sent to the client from the server. The EcoSpace will inspect this list and create an IChange each time the DBChange refers to an object instance loaded into the EcoSpace cache.

In the past the DBChanges that do not apply were discarded, but now you can get hold of them. Take a look at this:


DBChangeCollection ignoredChanges;
EcoSpace.Persistence.RetrieveChanges(out ignoredChanges);

//Do what you normally do with the IChange list here!

//Now check if there are relevant changes to objects we have not loaded
int plannedCallClassId =
  EcoSpace.TypeSystem.GetClassByType(typeof(PlannedCall)).InternalIndex;

foreach (DBChange currentChange in ignoredChanges)
  if (currentChange.ObjectId.ClassId == plannedCallClassId)
  {
    SelectPlannedCalls();
    break;
  }


SelectPlannedCalls will just execute an OclPsHandle...

sqlPlannedCalls.Expression = "PlannedCall.allInstances->select(...whatever...)";
sqlPlannedCalls.Execute();


This will only load objects that meet the correct criteria.

Finally, my expression handle can work on PlannedCall.allLoadedObjects instead of PlannedCall.allInstances - something like this


PlannedCall.allLoadedObjects
->select(active)
->select(not actioned)
->select( (assignedTo->isEmpty) or (assignedTo = var_CurrentUser) )
->select(earliestCallTime <= DateTime.Now.AddDays(1))


So now I have the benefit of being able to limit which objects I load by using the OclPsHandle, but also the benefit of the list being subscribed to, so that changes to my local cache will add/remove rows in the grid displaying the planned calls, and the list is updated if someone else updates a planned call.

I've just tested it in my application. I'm so pleased that I felt compelled to tell the world :-)

2007-08-21

More secure passwords

Storing a plain-text password in a DB leaves your system open to abuse from anybody with access to the tables (a sys admin, for example). Here is the technique I recently used: instead of storing the password itself I store a hash of the password, plus a random "salt" that was used to create the hash to make it less predictable.


[UmlTaggedValue("Eco.Length", "255")]
private string PasswordHash
{
  get
  {
    ...
  }
  set
  {
    ...
  }
}

[UmlTaggedValue("Eco.AllowNULL", "True")]
[UmlTaggedValue("Eco.Length", "40")]
private string PasswordSalt
{
  get
  {
    ...
  }
  set
  {
    ...
  }
}



As you can see I do not store the password, just a hash + salt, both of which are private. Here are the methods used to set the values of these properties.

public void SetPassword(string newPassword)
{
  if (newPassword == null)
    throw new ArgumentNullException("newPassword");
  if (newPassword.Length < 6)
    throw new ArgumentException("Password must be at least 6 characters");

  PasswordSalt = Guid.NewGuid().ToString();
  PasswordHash = EncryptPassword(newPassword);
}

private string EncryptPassword(string password)
{
  TripleDESCryptoServiceProvider des;
  MD5CryptoServiceProvider hashmd5;
  byte[] pwdhash, buff;
  hashmd5 = new MD5CryptoServiceProvider();
  pwdhash = hashmd5.ComputeHash(ASCIIEncoding.ASCII.GetBytes(PasswordSalt));
  hashmd5 = null;
  des = new TripleDESCryptoServiceProvider();
  des.Key = pwdhash;
  des.Mode = CipherMode.ECB; //CBC, CFB
  buff = ASCIIEncoding.ASCII.GetBytes(password);
  string result =
    Convert.ToBase64String(
      des.CreateEncryptor().TransformFinalBlock(buff, 0, buff.Length));
  return result;
}


Now the code required to get a user by ID/Password

public static User FindUserByUserIDAndPassword(IEcoServiceProvider serviceProvider,
  string userID, string password)
{
  string query =
    string.Format("->select(userID.sqlLikeCaseInsensitive('{0}'))",
      BusinessClassesHelper.EscapeOcl(userID));

  User user =
    BusinessClassesHelper.SelectFirstObject<User>(serviceProvider, "User", query);

  if (user != null && user.EncryptPassword(password) == user.PasswordHash)
    return user;
  return null;
}
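
Hypothetical usage at login time might look something like this (assuming the method lives on the User class, the EcoSpace is passed as the service provider, and ShowLoginFailed/CurrentUser are whatever your UI uses):

User user = User.FindUserByUserIDAndPassword(EcoSpace, userID, enteredPassword);
if (user == null)
  ShowLoginFailed(); // hypothetical UI feedback
else
  CurrentUser = user; // hypothetical place to remember the logged-in user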


The BusinessClassesHelper is a class with static methods to help with OCL operations, such as escaping user input values to avoid OCL injection.

using System;
using System.Collections.Generic;
using System.Text;
using Eco.ObjectRepresentation;
using Eco.Services;
using Eco.Handles;

namespace xxxxxxxxxxxxxxx
{
  public static class BusinessClassesHelper
  {
    public static string EscapeOcl(string ocl)
    {
      if (string.IsNullOrEmpty(ocl))
        throw new ArgumentNullException("ocl");

      ocl = ocl.Replace("\\", "\\\\");
      ocl = ocl.Replace("'", "\\'");
      return ocl;
    }

    public static T SelectFirstObject<T>(IEcoServiceProvider serviceProvider,
      string className, string criteria)
    {
      if (serviceProvider == null)
        throw new ArgumentNullException("serviceProvider");
      if (string.IsNullOrEmpty(className))
        throw new ArgumentNullException("className");
      if (criteria == null)
        throw new ArgumentNullException("criteria");

      IOclPsService psService =
        EcoServiceHelper.GetOclPsService(serviceProvider);
      IOclService oclService =
        EcoServiceHelper.GetOclService(serviceProvider);

      IObjectList result =
        psService.Execute(className + ".allInstances" + criteria);
      result =
        oclService.Evaluate(className + ".allLoadedObjects" + criteria) as IObjectList;
      if (result.Count == 0)
        return default(T);
      else
        return (T)result[0].AsObject;
    }
  }
}