2006-12-27

Delegates

I just remembered another technique for calling methods discovered via reflection! It is possible to convert a MethodInfo to a delegate and call it, and this is just as fast as calling the method directly. So if you have a method that you call often which was discovered via reflection, you should try this....

private delegate bool DoSomethingDelegate(object a, object b);

MethodInfo methodInfo = type.GetMethod(.........);
DoSomethingDelegate method =
    (DoSomethingDelegate)Delegate.CreateDelegate(typeof(DoSomethingDelegate), methodInfo);

for (int i = 0; i < 1000000; i++)
    method(this, this);

2006-12-26

Onion, more on signals

I've been trying to think through every type of feature I can that people might want in an application, and then seeing if I can think up a way of implementing it using signals. A couple of days ago I thought up something I couldn't implement using the exact approach I had.

I once wrote an app that had a treeview on the left-hand side; the user could drill down to a customer of their choice and then, when they clicked on that customer, the display would update and show their details. This made me realise that a simple "SelectCustomer" signal would not be sufficient; what I actually needed was multiple SelectCustomer signals, each indicating which customer to select.

The changes I have decided to make are as follows:
  1. I will introduce a struct called SignalCreateParameters. This will hold a string property "Parameters" which may be used to automatically populate some of the properties of the created signal, e.g. the customer identity.
  2. The signal factory will first be asked to populate a list of SignalCreateParameters, and then the target will be given the opportunity to modify that list, so that it may add additional items, remove items, etc.
  3. Neither the signal factory nor the target will have GetSignalPermission. Instead the SignalCreateParameters will have a SignalPermission property so that individual customers may be disabled, removed, etc within the list.
I've also decided that I really don't like having an ISignalFactory. Having to write two classes each time I want to add a single new "thing" is prone to error in my opinion, so I've been looking at using reflection again.

I originally used reflection but decided against it because it required the developer to
  1. Identify that the class is a signal (interface or attribute)
  2. Implement certain methods as static methods with a specific name + parameter list.
I went for the SignalFactory class approach so that I could create a base class and allow the developer to inherit from it; this would make it very clear which methods needed to be implemented. However, I now think that the penalty of developing two classes each time outweighs the time it takes to learn the required method signatures (one constructor plus one optional method).

So I think I will now remove the ISignalFactory part of the framework. The only thing I really need to consider is performance. On my 2GHz laptop, executing a static method directly 1 million times takes 31 milliseconds, whereas invoking it via reflection takes 2,109 milliseconds; a rough sketch of how I measured this is below. This would mean that if there were 1,000 signals and 1,000 simultaneous requests the process would take 2.1 seconds per user. I don't suppose that is too bad really; if anyone has anything further they think I should consider then I'd like to hear from you. Just write to my droopyeyes.com email address (support@)
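For reference, this is a minimal sketch of the kind of comparison I mean (the DoSomething method and the iteration count are illustrative, not the exact code I timed):

using System;
using System.Diagnostics;
using System.Reflection;

public static class ReflectionTiming
{
    public static bool DoSomething(object a, object b) { return a == b; }

    public static void Main()
    {
        MethodInfo methodInfo = typeof(ReflectionTiming).GetMethod("DoSomething");
        object[] args = new object[] { "a", "b" };

        Stopwatch direct = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
            DoSomething("a", "b");          // direct call
        direct.Stop();

        Stopwatch reflected = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
            methodInfo.Invoke(null, args);  // call via reflection
        reflected.Stop();

        Console.WriteLine("Direct: {0}ms, Reflection: {1}ms",
            direct.ElapsedMilliseconds, reflected.ElapsedMilliseconds);
    }
}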

2006-12-20

Onion, part 3

Women can multitask

No matter how many times you might be told "women can multi-task!" it's just not true; humans can only do one thing at a time. I don't doubt for a second that my wife's brain can keep track of multiple subjects much better than my single-task brain, but at any one point her brain is only concentrating on a single task!

It's exactly the same for software. People may wish to deviate from their current task in order to fulfil some ad hoc requirement, but that task is an interruption; it does not occur in parallel with what they were doing before. Once that interruption is over the user of your software wants to pick up where they left off. This is what a process driven (or "task oriented") approach to writing software is about.

Process driven work flow

A process in this context is a single task performed within an application in order to achieve a specific goal. The goal may be just about anything such as "Delete customer", "Print invoice" etc. The process may involve only a single step
  1. Are you sure you wish to delete this customer?
or multiple steps
  1. Select invoice
  2. Select action
  3. Are you sure you want re-print this invoice?
and at various steps, but not necessarily every step, the user may be required to provide some kind of input in order for the process to continue.

During its operation the process may call upon the functionality of one or more other processes and act according to whether that process completed successfully or was cancelled; after the spawned process has terminated, control should be handed back to the original process.

To achieve this we will require some kind of first-in, last-out (FILO) stack of processes. When a process is executed it is added to the stack and becomes the "active" process; when a process is completed or cancelled it is removed from the stack. At this point the re-activated process is informed why it has been re-activated, just in case it needs to act according to how the child process it executed came to be removed from the stack, whether it completed or was cancelled.
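As a rough illustration of that stack behaviour (the class and member names here are my own shorthand, not the final framework names, and the IProcess shown is a cut-down version of the interface described further down):

using System.Collections.Generic;

public enum ProcessActivationReason { ProcessExecuted, ActiveProcessCancelled, ActiveProcessCompleted }

// Cut-down IProcess; the real interface is described later in this entry
public interface IProcess
{
    void Activate(ProcessActivationReason reason);
}

public class ProcessStackSketch
{
    private readonly Stack<IProcess> processes = new Stack<IProcess>();

    public void ExecuteProcess(IProcess process)
    {
        processes.Push(process);
        process.Activate(ProcessActivationReason.ProcessExecuted);
    }

    // Called when the active process finishes; the process beneath it is told why it woke up
    public void RemoveActiveProcess(bool completed)
    {
        processes.Pop();
        if (processes.Count > 0)
            processes.Peek().Activate(completed
                ? ProcessActivationReason.ActiveProcessCompleted
                : ProcessActivationReason.ActiveProcessCancelled);
    }
}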


Reusable UI

My new (cheap) DVD recorder with hard-disk is pretty good, but you might be surprised with the bit that really impressed me! If I click the Browse button I get a list of contents on the hard-disk and pressing Enter will play the file. If I click the Copy (to DVD) button I see exactly the same UI, however, when I press the Enter button it will copy the file to a DVD instead of playing it.

You might not think this is very impressive, and maybe I'm just weird, but it is the first real-life example of reusable UI I have seen outside of computer software, in fact it is the first example I can recall seeing at all!

We've all heard of patterns in source code. The thing is, there are also patterns in GUI too. Using the example above the pattern is "Select a recording". This GUI asks the user for a specific type of information (which recording they wish to work with), and this input is used by two separate processes, let's say the "PlayRecordingProcess" and the "CopyRecordingProcess". What happens to the recording selected depends completely on what the process is doing.


Interfaces

I decided that instead of writing an ECO framework I would write the framework as a set of interfaces and then implement the classes however I want; my first (and only?) implementation will be an ECO one because the state diagrams save me from having to write a lot of code. I also decided that I would avoid properties in these interfaces, using methods instead. This is so that frameworks such as ECO, which persist properties to a database, won't attempt to persist the values returned by these interfaces.

IProcessStack

The requirements of this interface are

  • GetActiveProcess : IProcess - Gets the IProcess at the top of the stack, the active process

  • GetProcessCount : Integer - Returns the number of processes on the stack

  • GetProcess(Index : Integer) : IProcess - Returns the IProcess at the given index

  • GetProcessParameters : IProcessParameters - Gets a reference to an object that has properties which should be set by the user (explained below)

  • ActiveProcessChanged (event) - Triggered whenever the currently active process changes

  • ProcessParametersChanged (event) - Triggered whenever the ProcessParameters property changes, so that the GUI may react by displaying the correct controls to edit its properties
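Expressed as C#, a sketch of the interface based on those descriptions would be the following (the event delegate type is my assumption):

using System;

public interface IProcessStack
{
    IProcess GetActiveProcess();               // the IProcess at the top of the stack
    int GetProcessCount();                     // number of processes on the stack
    IProcess GetProcess(int index);            // the IProcess at the given index
    IProcessParameters GetProcessParameters(); // what the user should currently be editing

    event EventHandler ActiveProcessChanged;
    event EventHandler ProcessParametersChanged;
}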


Initially I struggled with this interface. The class that implements it has an ExecuteProcess method. Originally this method accepted an IProcess parameter, but I didn't like that because an ECO implementation would require the Process to be an ECO object in order to have an association to it, and a remoting implementation would require this Process to be something else. If I asserted this restriction at runtime I would not get strong compile time checking.

Then I had a bit of a eureka moment! I was trying to make the interface a generic representation of the class, and this is not what an interface is for! An interface defines "how do I talk to this class" rather than "this is what this class is". We will never need to execute a process in a generic way; the interface is only there so that we can develop a generic GUI or multiple GUIs. Only the application layer itself will actually need to execute processes; this will be done by signals, by processes, and ultimately once at the very start of the application when the main process is executed. So, dilemma over, on I go...

IProcess

This interface is very simple

  • GetProcessStack : IProcessStack - Used to access the process stack

  • GetProcessParameters : IProcessParameters - The ProcessStack will use GetActiveProcess.GetProcessParameters to determine what to present to the user

  • GetInstanceId : String - Used as a unique identifier for this instance. This can be used in a GUI to identify which target a signal should be sent to

  • Activate(ProcessActivationReason) - This method will be called by the ProcessStack whenever the process becomes the active process. The reasons for activation are ProcessExecuted, ActiveProcessCancelled, ActiveProcessCompleted
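Or as a C# sketch (reusing the ProcessActivationReason enum sketched earlier):

public interface IProcess
{
    IProcessStack GetProcessStack();
    IProcessParameters GetProcessParameters();
    string GetInstanceId();
    void Activate(ProcessActivationReason reason);
}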




IProcessParameters

As promised earlier I will explain IProcessParameters. A process may pass through various internal states before reaching its final state and finishing. In some cases the process may be able to obtain all of the information it requires from automated sources (a database, config file, etc) and therefore require nothing from the user. It is more likely however that the user will be required to provide instructions or data at various points.

Whenever this situation occurs the Process will update its ProcessParameters reference and raise the ProcessStack.ProcessParametersChanged event. For example:

public class AuthenticateUserParameters : IProcessParameters
{
    private string userName;
    public string UserName
    {
        get { return userName; }
        set { userName = value; }
    }

    private string password;
    public string Password
    {
        get { return password; }
        set { password = value; }
    }

    Guid IProcessParameters.GetTypeUniqueIdentifier()
    {
        return new Guid("DEADBEEF-4F89-11D3-9A0C-0305E82C3301");
    }
}

When a process needs to authenticate a user it will need the user to enter a username and password. The process would update its GetProcessParameters reference and then trigger the ProcessParametersChanged event.

At this point the UI layer, which has subscribed to this event, will get the GUID that identifies the type of the parameters, this is just a unique ID that says "I need the user to fill out AuthenticateUserParameters". The UI can then find a suitable control to present to the user and databind it to this object.

Once the user has typed in their username and password they would click a UI element that has been created to represent a signal, "OK" or "Login" or something like that. This signal will then be sent to the process, and this will trigger either an internal state change within the process or an exception explaining that the username or password is incorrect.

Just like with my clever DVD HDD recorder, the UI has absolutely no idea why it is being used; all it knows is that it has been asked for certain information and should present this request in a format acceptable to the user. This UI may therefore be reused throughout the application. The application may ask the user to log in initially, and then at a later time the user may exceed some authority level, at which point the same ProcessParameters would be used to capture the username and password of the user's manager before being allowed to continue.
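To make that concrete, the UI layer's subscription might look something like the sketch below (ParameterEditorRegistry and the hosting panel are hypothetical; they just show the shape of it):

using System;
using System.Windows.Forms;

public class ProcessParametersPresenter
{
    private readonly IProcessStack processStack;
    private readonly Panel hostPanel; // where the parameter editor control is displayed

    public ProcessParametersPresenter(IProcessStack processStack, Panel hostPanel)
    {
        this.processStack = processStack;
        this.hostPanel = hostPanel;
        processStack.ProcessParametersChanged += HandleProcessParametersChanged;
    }

    private void HandleProcessParametersChanged(object sender, EventArgs e)
    {
        IProcessParameters parameters = processStack.GetProcessParameters();

        // The GUID only says "I need the user to fill out AuthenticateUserParameters";
        // a registry (hypothetical) maps it to a suitable user control, databound to the parameters.
        Control editor = ParameterEditorRegistry.CreateEditorFor(parameters.GetTypeUniqueIdentifier(), parameters);

        hostPanel.Controls.Clear();
        hostPanel.Controls.Add(editor);
        editor.Dock = DockStyle.Fill;
    }
}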

UI patterns appear all over the place. Here are some examples:

  • Prompt for a username and password

  • Ask the user to confirm or cancel an action. "Are you sure?" Yes/No

  • Select an item from a list

  • Enter a list of numbers in a grid, such as stock quantities during a stock check, or a purchase order




Summary

This one has been pretty conceptual really. What we have now is a clear separation between database, business objects, application layer, and user interface. This separation not only encourages clearly focused code, but also encourages reuse of both processes and composite UI controls.

Using this separation approach also allows us to have multiple UIs without having to implement any logic in the UI (except for code to enable/disable controls etc). In addition, this clear separation would allow us to place the layers on different computers. We might have a single database, a farm of application servers, and each client connecting either via the Internet or directly through smart(ish) UI applications, or even a combination of both!

2006-12-12

Onion, part 3 (teaser)

Seeing as I haven't had enough free time to write part 3 recently I thought I'd post this little teaser. Would it surprise you to know that this very diagram was generated into code and I was able to run it?

The signals on the ShowWelcomeMessage had to be hand coded, but hopefully I will be able to get some kind of custom code generation plugin to get the signals auto generated in future.

Things that annoy me about C#

Generally I really like C#, but there are a few things I don't like so here I am to have a bit of a moan about them.

01) I cannot specify constructor order!

When I add a constructor to a new class I cannot control when the base class's constructor is called; it is always called before any of the code in my new constructor. This means that if my base constructor calls a virtual method, and I override that method in my new class, I cannot initialise the members that method uses before it runs.

public abstract class BaseClass
{
    public BaseClass()
    {
        Initialize();
    }

    protected abstract void Initialize();
}

public class ChildClass : BaseClass
{
    private readonly object SomeReference;

    public ChildClass(object someReference)
        : base()
    {
        this.SomeReference = someReference;
    }

    protected override void Initialize()
    {
        SomeReference.ToString(); //Oops, it's null!
    }
}


Why must I always call the inherited constructor first? I was pretty annoyed about it so I wrote to Anders H. It was nice that he wrote back, but he "went off on one" about why he didn't want to implement named constructors (as per Delphi) and in the process completely neglected to answer the question.

The thing is, this is only a C# limitation. The .NET framework itself allows you to call a base constructor at any point you like!


02) No virtual class methods

I can't recall how many times I was temporarily banned from the C# channel on IRC (EFNET) because of arguing about this one. I think virtual class methods can be useful.

Now I must admit that since moving over to C# and not being able to use them I have found that most of the things I was using them for were probably wrong, for example using them to construct object instances when really I should have been using a factory pattern. However, sometimes they save so much coding that they are just useful!

I currently want to know whether an instance of a Signal subclass can be sent to an instance of ISignalTarget (via GetSignalPermission()), but I want to know this before creating an instance of the Signal subclass. I could do this by reflecting over the static methods of the class looking for a method with a certain signature, but let's face it, the code isn't going to be strongly checked at compile time (accidentally typing the "GetSignalPermission" name incorrectly) and it isn't going to be the fastest code in the world either when I have to repeatedly check for compatibility between a list of ISignalTarget and a list of all possible signals!

At the moment I am having to write a factory class for every signal class. This means I am probably doing 10 times more work than I would if I had virtual class methods.


03) No strong type declarations!

Sometimes I just want to be able to say that I want a Type passed as a parameter, but this Type must be a subclass of Button or Control or something. I end up having to check the parameter at runtime, but I want to ensure the parameter is correct at compile time!

In Delphi I can do this

ControlClass = class of Control;

and then everywhere I refer to a ControlClass in parameters / variables etc it is automatically constrained "where xxxx : Control".


04) Array accessors on properties.

Why oh why can't I implement code like this without having to write a custom list class...

if (!Supplier.CatalogueItems["123ABC"].Discontinued)

I just want to be able to add an array property to Supplier, something like this

public CatalogueItem CatalogueItems
{
    get[string catalogueNumber]
    {
        return SomeHashTable[catalogueNumber];
    }
}
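For what it's worth, the nearest I can get today is a tiny wrapper class exposing an indexer, something along these lines (CatalogueItem here is just illustrative):

using System.Collections.Generic;

public class CatalogueItem
{
    public bool Discontinued;
}

public class CatalogueItemIndexer
{
    private readonly Dictionary<string, CatalogueItem> items;

    public CatalogueItemIndexer(Dictionary<string, CatalogueItem> items)
    {
        this.items = items;
    }

    public CatalogueItem this[string catalogueNumber]
    {
        get { return items[catalogueNumber]; }
    }
}

Supplier would then expose a CatalogueItems property of type CatalogueItemIndexer, so the calling code still reads Supplier.CatalogueItems["123ABC"].Discontinued, but it's an extra class I'd rather not have to write.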



I think that's about it for now :-)

2006-12-11

Seven deadly sins of application development

Here is a list off the top of my head

1 - Ugly code!

Why do some people set local variables to null when they have finished with them? This is .NET! The garbage collector will collect "unreferenced" objects when it is ready, an object is unreferenced if you no longer use the variable that holds the reference to it!

Why do people name variables so poorly? Firstly I *hate* Hungarian notation. C# is a strongly typed language so I cannot multiply a boolean by a string, so why do people use variable names like "bIsMale" and "iAge"?


2 - Catching all exceptions

An exception is something you expect to happen, but shouldn't happen if all goes well. If such an exception occurs you might know how to solve the problem and enable your application to continue, but if the exception type was unexpected how can you possibly know the cause of the error or what state your application is now in? Therefore I hate code like this

try
{
DoSomething();
}
catch
{
}

3 - Accessing camelCase members

I really dislike it when people use camelCase names for private members with no property accessor. Sure, at the moment you don't need to execute any code whenever that member is read/written, but in future you might! If you name your private members with PascalCase instead then it becomes very easy to implement a property by changing it to camelCase instead, and then every reference to that member will automatically access your property instead.


4 - No testing

If something is so simple that it cannot possibly fail, it will fail! If you can't see why something could possibly fail, it is because you haven't tested it; once you have tested it you will see all sorts of ways to make your code fail that you had not previously thought of.


5 - Implementing anticipated features

I hate being asked to write features that "might be needed in the future". Inevitably they are rarely needed and merely take away time from the development of features that are.


6 - Being told how to do something rather than what is needed

Managers who used to be programmers are the worst for this one! They see a requirement and the only way they can think of describing it is in a technical way: 1) Do this, 2) Do that, 3) Check this. From this description you have to reverse engineer what they are trying to achieve in order to come up with the original requirement. It's a bit like Chinese whispers; you never end up with the original requirement. Worse still, you don't know why the customer requires the feature.


7 - Unreasonable deadlines

It is surprising how quickly an unmissable deadline is rescheduled once it has been missed; this is usually a sign of a deadline that was concocted for no reason other than to make your manager look good. Unreasonable deadlines just stress your developers, and humans don't work so well under stress.

Have time off work; you will be more productive than if you work lots of hours. Don't let the consequences of failure drive your coding; write the best possible solution given a reasonable amount of time to develop it. I hate the "just get it out of the door" types, but perfectionists must also note that an undelivered application is far from perfection!

2006-12-08

Onion, part 2

Going beyond wizard interfaces

If someone had said "Process oriented application" to me in the past, I would have thought to myself "Wizard-like interface". Whereas I find "wizards" very useful in certain situations, I also find that they are too time intensive for a user who knows what they want to do, knows how to do it, and just wants to get on and do it!

The thing is, I think an application layer should be process driven. The business objects layer is there in order to represent the logical business entities of the application, and the application layer should be there to drive the logical flow of the users' interaction with those objects. So, the question is "How do we implement a process oriented framework without enforcing a wizard-like interface?" The answer I think is to use Signals.

Signals

A process may imply that there is always a specific path through an application. Although the user may influence that path by selecting options along the way, it is also necessary to allow the user to perform a selection of ad hoc actions at any given time; these actions should in some way relate to the task the user is currently performing. For example, whilst selecting a customer from a list it might be reasonable for the user to decide to view the details of one of the customers within the list before deciding they have chosen the correct one, or to edit the customer's details if they should spot a mistake.

So what exactly is a Signal and how would it work? Think of a signal as a method. Instead of this method being assigned directly to a class and therefore available to each instance of that class, a signal is more of a "command" object which may be sent to a target object instance. The properties of this command object are akin to the parameters of a method.

You may ask yourself "If a Signal is so similar to a method, why wouldn't I just use a method instead?". Well, there are a number of benefits:

  • Either the SignalTarget is aware of the Signal type and reacts accordingly upon receiving it, or the Signal is aware of the SignalTarget and performs an operation upon it.
    • It is possible to add methods to specific instances rather than only at class level.
    • It is possible to extend the functionality of classes without having to alter their source code, which is excellent for allowing customers to have customisations written without having to resort to checking conditional compiler defines.
    • It allows you to keep the source of the target class "clean" rather than polluting it with methods for every possible feature.
  • A signal may be received by more than one type of target. Using this approach helps to introduce a kind of "business dictionary" for the application. If the same term is used throughout the application (e.g. "Stream to file") the user will find it much easier to understand the application.
  • Using a signal approach makes it very easy to ascertain which operations are available for a given target. This also allows us effectively to hide or disable a method based on the current state of the object. A method based approach would require the use of reflection, and also a way of determining which methods of the class are there for implementation purposes and which are there to serve the user.
  • Signals may be registered from dynamically loaded assemblies, providing the ability to customise object behaviour at runtime.
  • Signal permissions may be used to introduce role based behaviour. The available actions depend entirely on the roles of the person using the application.
So, sermon over, how do they work?

Firstly I wanted to avoid instances. Signals are a way of behaving, a "thing you do" rather than a "thing you are", therefore a Signal should be an interface.

Signal

Not surprisingly I have chosen the name ISignal for this interface, and it has only a single member:
  • Execute(ISignalTarget)
This method should be invoked by the ISignalTarget to which the signal has been sent. This way the ISignalTarget always has "first refusal" and may decide whether or not to allow the signal to continue.

Signal target

The ISignalTarget interface is used to identify an object as a target for signals. This interface has two methods:
  • AcceptSignal(ISignal)
  • GetSignalPermission(ISignalFactory) : SignalPermission
AcceptSignal is executed when a signal is sent to an instance of ISignalTarget. This is where the class itself is able to react to a received signal if it is explicitly aware of that signal type. By default this method should execute signal.Execute(this) so that the signal (which may be unknown to the target) may perform any actions.

GetSignalPermission is a way of determining compatibility between the ISignal and the ISignalTarget. It returns one of the following values:
  • Supported: The ISignalTarget is aware of the signal type and will perform an action in response to receiving it, therefore the target and signal are compatible.
  • Unknown: The ISignalTarget is unaware of the signal type. Therefore the target and signal are only compatible if the signal itself indicates that this is the case.
  • Prohibited: The ISignalTarget is aware of the signal type, and regardless of what the signal indicates it will not accept the signal, therefore the signal and target are incompatible.
  • Disabled: The ISignalTarget is aware of the signal type. Although this signal is normally accepted it will not be accepted at this point in time, this could be due to the current state of the ISignalTarget (for example, "Archived"), therefore the signal and target are compatible but the signal may not be sent.
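Putting the permission values and the two interfaces into C# (a sketch from the descriptions above; ISignalFactory is covered next):

public enum SignalPermission { Supported, Unknown, Prohibited, Disabled }

public interface ISignal
{
    void Execute(ISignalTarget target);
}

public interface ISignalTarget
{
    void AcceptSignal(ISignal signal); // by default should call signal.Execute(this)
    SignalPermission GetSignalPermission(ISignalFactory signalFactory);
}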

ISignalFactory

As you probably know, in .NET there is no (easy) way to implement virtual class methods. Class methods would have been very useful in this scenario because it is possible for a signal to indicate support for a target, yet it makes no sense to create an instance of every ISignal for the purpose of performing this query. In fact memory consumption is not the only reason not to take this approach:
  1. We have no way of knowing how to create an instance of the ISignal type. We could assume that we should always look for a parameterless constructor and invoke it using reflection, but this would not work for classes which require a parameter in their constructor. For example, if instances of the class belong to some kind of "object space" or a transaction which must be passed as a parameter.
  2. If we did create an instance of every type they would be used only for obtaining permissions. In a multi-user environment such as remoting or ASP .NET applications those same instances would be used by every thread, so setting their properties (aka method parameters) would be nonsense.
This is the reason I opted for an ISignalFactory interface. If .NET had the ability to handle virtual class methods then this would not have been necessary, but it's better to work around a limitation than to give up and complain about it...

The ISignalFactory interface has the following methods:
  • GetSignalPermission(ISignalTarget): SignalPermission - Identical to the behaviour of the ISignalTarget method.
  • CreateSignal(object applicationData): ISignal - Creates an instance of the ISignal to send to the target. The "applicationData" can be an object, struct, or whatever the developer decides is necessary to pass to the factory in order to construct an instance (Such as an "object space" or a transaction object of some kind).
  • Category : string - This can take any format the developer wishes and may be used to categorise signals within the UI. For example "\MainMenu\File\Edit" would indicate to the UI layer that the signal should appear in the File->Edit menu of the application's main menu.
  • DisplayName : string - This is the text to display in the UI, for example "Copy". This could either be the literal text to display or the identity of a string resource within the UI to retrieve and display.
  • UniqueId : string - This is a way of uniquely identifying the signal, it could be the namespace + classname, or better still a Guid as this will never change.
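As a C# sketch (whether the last three members are methods or read-only properties is a detail; I've shown them as methods here):

public interface ISignalFactory
{
    SignalPermission GetSignalPermission(ISignalTarget target);
    ISignal CreateSignal(object applicationData);

    string GetCategory();    // e.g. "\MainMenu\File\Edit"
    string GetDisplayName(); // e.g. "Copy", or a string resource identity
    string GetUniqueId();    // namespace + classname, or better still a Guid
}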

Signal catalogue

There is one final piece to the Signal jigsaw, the ISignalCatalogue. I have implemented a class for this item as it has very specific behaviour and I can't see any way in which it would be useful to modify it. Despite this I have created an ISignalCatalogue interface for the purpose of interacting with the class just in case somebody wants to write an alternative implementation.

The signal catalogue is a single point from which it is possible to get a list of compatible signal factories and signal targets. My implementation scans all loaded assemblies during its construction, looks for classes which implement ISignalFactory and creates an instance of each; these instances are then held in a list owned by the catalogue. It is also possible to manually register an instance or an assembly in case you need to dynamically load signals/factories from a DLL in order to provide customisation.

Once the signal catalogue is created it provides the following functionality:
  1. Given an ISignalTarget it will return a list of compatible ISignalFactory instances.
  2. Given an ISignalTarget and an ISignalFactory it will return a "SignalPermission".
  3. Given the UniqueId of an ISignalFactory it will return the ISignalFactory so that an ISignal may be created and sent to the target.
The signal catalogue is used as the single point of reference for obtaining compatibility because it performs the task of querying both target and factory before determining the result.

A SignalPermission is obtained from both the ISignalFactory and the ISignalTarget. The combinations listed below will produce the specified result:
  • Factory = Prohibited : Prohibited
  • Factory = Unknown : Returns the permission of the ISignalTarget.
  • Factory = Supported:
    • Target = Disabled : Disabled
    • Target = Prohibited : Prohibited
    • Target = Supported : Supported
    • Target = Unknown : Supported
  • Factory = Disabled:
    • Target = Supported : Disabled
    • Target = Unknown : Disabled
    • Target = Disabled : Disabled
    • Target = Prohibited : Prohibited
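Inside the catalogue that combination logic boils down to something like this sketch (the method name is mine):

private static SignalPermission CombinePermissions(SignalPermission factory, SignalPermission target)
{
    switch (factory)
    {
        case SignalPermission.Prohibited:
            return SignalPermission.Prohibited;
        case SignalPermission.Unknown:
            return target; // defer entirely to the target
        case SignalPermission.Supported:
            // the target may still disable or prohibit; Unknown counts as Supported
            return target == SignalPermission.Unknown ? SignalPermission.Supported : target;
        default: // Disabled
            // Prohibited wins, everything else is at best Disabled
            return target == SignalPermission.Prohibited ? SignalPermission.Prohibited : SignalPermission.Disabled;
    }
}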

Conclusion

It is now easy to find a list of "commands" that are compatible with any given ISignalTarget by retrieving a collection of ISignalFactory. The information from this list of factories may be presented in the UI for the user to choose from, and may be logically grouped by the UI through use of the factories' "Category". In fact it would also be possible to differentiate signals available to the UI from internal signals via this category, maybe anything starting with "\UI" should be displayed.

If a "Process" within the application layer implements ISignalTarget then we now have the ability to specify within our application what exactly the user can do with that process. Rather than having "Next" and "Back" methods on our process we can implement "Next", "Back", "Okay", "Finish", "Confirm" or whatever type signals are most appropriate for the given task. In addition it is possible to display to the user a menu of possible actions, a group of actions (similar to the XP control panel) to the side of the form showing related tasks they could perform.

Most importantly though these signals provide a way of telling a physically separate layer what commands are available without that layer having to have any knowledge of the signal or the target. This makes it possible not only to have separate layers on a single machine, but also very easy to stream the information as XML to a physically separate layer on a client machine.

Summary

In this entry I have discussed a technique that can be used for many purposes, but in this case it will be used to aid user interaction with the application. In the next in my mini series I will describe the process layer in more detail, how it operates, and how to allow the user to interact with the information presented to them. Keep an eye out for Onion part #3 :-)

2006-12-02

Onion

An onion has layers...
...Shrek

In the beginning

When I first started writing applications they would typically be a single program editing a single datasource. As time went on this changed because people wanted to share data, so client/server applications appeared.

It didn't stop there though, N-Tier applications became much more common. Applications were typically split up like this:

RDBMS--DAL--Business classes--UI

Due to the fact that I use ECO for my business layer , and that ECO has the DAL built in, the illustration for me would look something like:

RDBMS--Business classes (ECO)--UI

Thinking of business problems as classes instead of tables really helps to simplify your design, so I have been very happy writing applications this way for some time now.


The application layer

In December 2005 I was tasked with writing quite a complicated Compact Framework application. Although this application was going to be complicated it needed to be very simple to use, as the customer's employees were not all computer literate.

I decided to take a wizard-like approach to the application. This had the benefit of guiding the users through their daily tasks in a way that offered them options relevant to what they were currently doing, instead of overloading them with options in a menu. However, this approach was not only beneficial to my customer's users, it taught me something I think is very valuable. Applications should have an extra layer!

The problem is that many programmers will design their UI to reflect their database layout or their business classes. For example, a developer might create a "Vehicle" form, but when presented with this form what would a user do?
  1. View its service history?
  2. View a planned service schedule?
  3. View purchase + depreciation information?
  4. Hire it out to a customer?
If the developer does not know what the user wants to do with the vehicle in question then the only sensible thing to do is to show all of this information (maybe in tabs) and allow the user to do anything they want, isn't it? Can you imagine a person who is not computer literate being shown such a complicated form and being expected to just "get on with it"? I hope your training and support departments are well staffed! :-)

The solution is to present the user with a task instead of an instance of a business class. So instead of editing information within a complicated Vehicle form the user will enter a few basic details in order to complete a "TransferVehicleOwnershipActivity". The word "Activity" could easily be replaced with something like "Task", "Action", or "Process", but the general idea is that you present the user with something to do rather than presenting them with the object on which their actions will be performed.

Code in the UI

There are some things that simply don't belong in your business classes and this sort of code often gets written into the UI layer. Code in the UI layer should only concern itself with UI related issues, such as enabling/disabling controls. An example of this would be some kind of an email client.

When you open a message in your inbox it is automatically marked as read, but where should this code be written? I'd certainly agree that you should have a method on your IncomingMessage class
void MarkAsRead()
but this method still needs to be invoked! You may find yourself reaching for the Form.Load event, and I really wouldn't blame you if you did.
void IncomingMessageForm_Load(object sender, EventArgs e)
{
    this.IncomingMessage.MarkAsRead();
}
but what if in the future you needed to create a web based version of your application? Obviously you'd expect to have to write some code into your WebForms but again this should be code to handle things like enabling/disabling UI controls. To get the same functionality you'd have to copy the code from your WinForms application.

Once you have finished copying all of the logic from your WinForms application and you have a functionally identical WebForms application what next? Someone changes one of the UI applications and forgets to update the other! We should have known better, having code like this causes a bad smell...
void IncomingMessageForm_Load(object sender, EventArgs e)
{
    //If you update this code make sure you
    //update IncomingMessage.aspx!
    this.IncomingMessage.MarkAsRead();
}
This is exactly the sort of scenario that is easily solved by having an application layer!
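As a sketch of what I mean (the process class and its name are hypothetical, just to show where the MarkAsRead call would live so that both UIs share it):

// Lives in the application layer, so the WinForms and WebForms UIs both reuse it.
public class ViewIncomingMessageProcess
{
    private readonly IncomingMessage incomingMessage;

    public ViewIncomingMessageProcess(IncomingMessage incomingMessage)
    {
        this.incomingMessage = incomingMessage;
    }

    public void Execute()
    {
        // The rule lives here, not in Form.Load or Page_Load
        incomingMessage.MarkAsRead();
        // ...then ask whichever UI is attached to display the message
    }
}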


Onions have layers

Onions have layers. This is why I have decided to name my application framework "Onion". I'll be playing around with Onion in my spare time and blogging about my experiences and decisions, so keep an eye out for part #2!

OutOfMemory, or maybe not?

I've been writing an app for the compact framework for some months now. It's quite a complicated app that includes an object persistence framework, a task oriented application layer and a loosely coupled GUI which is generated through factories (the app only has 1 form, but lots of user controls + factories).

The app has been experiencing apparently random OutOfMemoryExceptions, and no matter how hard I tried I found it impossible to reproduce one of these errors on demand. I have spent quite some time really optimising the memory usage of my OPF so that it works on the bare minimum of memory yet still operates quickly enough (and I'm very pleased with its performance too). However, the OOM exceptions persisted!

I wrote a logging tool which records the last X actions the user performs, when an unexpected exception occurs this log is written to disk along with a stack trace of the exception. I noticed that the top of the stack trace always read...

at Microsoft.AGL.Common.MISC.HandleAr()
at System.Windows.Forms.Control._InitInstance()
at System.Windows.Forms.Control..ctor()
So it seems that my app failed each time it tried to create a user control. Considering my GUI gets updated by disposing of the current control and then replacing it with a control created by a factory, this happens quite a lot, but nowhere more frequently than when I am importing data (the display is updated every 500ms). So now I knew a good candidate for finding the bug, but had no way to actually reproduce it in my app. Strangely enough it wasn't what the user was doing in my application that mattered, but what they were doing in another one.

It would seem that if the PPC application is doing something that takes a few seconds (on a 350MHz CPU that isn't rare) the user decides to pop off to another part of Windows and check the battery level, check the memory usage or just generally "muck about". Finally I had the last piece of the jigsaw! I ran my data import and then started to select/deselect files in the Program Manager. OutOfMemoryException!

The strange thing was that whenever this exception occurred I would have at least 5MB of RAM free. It would seem that if a CF WinForms app tries to call Microsoft.AGL.Common.MISC.HandleAr() at the same time as some other app is changing its GUI, the call will fail with an OutOfMemoryException. So I decided to put the control creation in a loop like this:
int triesLeft = 20;
while (true)
{
    try
    {
        return {Something confidential that creates the control}
    }
    catch (OutOfMemoryException outOfMemoryException)
    {
        triesLeft--;
        if (triesLeft == 0)
            throw outOfMemoryException;
        Thread.Sleep(500);
    }
}
The hope was that if I tried up to 20 times, half a second apart, the chances of another app updating GUI at the same time would be very low. Unfortunately even if I closed all other apps during this loop the creation of the control would still fail. It seems that once it fails there is no hope, all is lost, it just packs up and refuses to work at all!

Instead of updating the GUI straight away I started a timer "GuiTimer" with an interval of 100ms. The Tick event of this timer first disables the timer and then creates the control needed. Maybe the Timer messages sent by WinCE are all dispatched from a single thread or something; I don't know because I haven't looked into it. This doesn't fix my problem, but it does make it occur less often.

Despite my attempts to create a small app to reproduce this problem I have not been successful, yet I can reproduce it easily in my main application.

The retry loop above, however, made no difference; .NET will perform a garbage collection before throwing an OutOfMemoryException, so it was really quite pointless. A much better solution to my problem has been to create my user controls only once and then keep hold of them; reusing an existing instance seems to almost completely eradicate my bug.

OCL aliases

This is just a copy/paste of a reply I made in a newsgroup, but I think it is quite informative so here it is....

KEY
Square brackets denote a class [Person]
Rounded brackets denote a role (Pets)

Let's say you have a Car class and a Garage class, an individual car is regularly serviced at a specific garage so you have the following association

[Garage] (Garage) 1----0..* (ServicableCars) [Car]

Car.allInstances->select(Garage.Code = '1234')
The above OCL will fail because "Garage" is a class so the parser is expecting stuff like "allInstances". So you might think this should work

Car.allInstances->select(self.Garage.Code = '1234')
but it doesn't because "self" refers to the root object and not the object at the parsed node where it is specified. This is what aliases are for:

{aliasname} + {pipe}

Car.allInstances->select(currentCar | currentCar.Garage.Code = '1234')
This was possible in Bold too (the native Windows predecessor to ECO), but in Bold you could differentiate between members and classes through case sensitivity. PascalCase always meant a class whereas camelCase always meant a member, so the following would work in Bold but doesn't in ECO as it is not case sensitive:

Car.allInstances->select(garage.code = '1234')
Although ECO is not case sensitive you will notice that I still use the case sensitive format for clarity :-)

Ye olde C64

I've been reliving my childhood and playing with WinVice, a Commodore 64 emulator.

A friend and I were working on a game at the point the C64 died and it never got finished. We recently dusted off those old 5 1/4" disks and finally worked out how to get it working again! If anyone fancies taking a look you can find it here

http://noname.c64.org/csdb/release/?id=33963

Be warned though, it really was unfinished. Some rooms lead to dead-ends and there is no way to quit the game, so the occasional reset (of the C64) + reload is required!

WeakReference woes!

Take a look at the following code

if (myWeakReference.IsAlive)
(myWeakReference.Target as SomeClass).DoSomething();

Do you see the mistake? The WeakReference may return "true" for IsAlive, but because the garbage collector runs within its own thread the target may actually be collected before the next line executes (reading IsAlive does not prevent the GC from collecting the value). I've been using WeakReferences quite a lot recently, so I was happy to see that the following change fixed the occasional NullReferenceException occurrences I had been seeing, which had been quite difficult to track down!

SomeClass myInstance = (SomeClass)myWeakReference.Target;
if (myInstance != null)
myInstance.DoSomething();

The IsAlive property in my opinion is utterly useless (it is effectively the same implementation as Target). I think MS should just remove it; you could say that would break existing code, but I say it would force people to fix it!

ModelMaker is great!

I have a few minutes to spare whilst I wait for an import routine to complete so I thought I'd quickly blog about something that recently got me excited.

I recently had a set of business classes that I had to move over from one persistence framework to another. I really wasn't looking forward to doing it, all of that development time only to end up with something I already have. The good news is that I had already designed my business classes using ModelMaker! I spent about one day writing a plugin that hooks into the MM code generation process and hey presto my business classes now conformed to the new persistence framework.

ModelMaker has so many great things in it. It seems that whenever I want to do something Gerrit has already implemented something to help. Just think, I only use Model Maker now because I won it on www.Delphi3000.com all those years ago. At the time I didn't even know what it was (I hadn't even heard of the UML) so I wasn't even going to claim my prize until a friend emailed me to tell me how lucky I was. I've been a happy customer ever since!

Validating XML against an XSD schema

I have a new job. Well, I say "new" but I actually started in December 2005. Anyway, this "new" job requires me to develop Compact Framework apps. Seeing as Delphi doesn't support the CF I am developing my application in VS2005; it's a really nice tool but I miss ECO so much!

One thing I had to do was to write an XML import routine. This would retrieve an XML file detailing tasks due for the next seven days and then import it into the PPC's database. Because I am not responsible for generating the XML, and because it is good practice anyway, I decided to create an XSD file to validate the XML against. PDAs have very little storage (the flash card is shared between disk and memory) so I decided to read the XML file a node at a time using an XmlReader so that the contents don't exist on the flash memory twice (disk + memory); another benefit of this approach is that I can read it directly from a ZIP file too!

In .NET 2.0 the XmlValidatingReader class is obsolete, not that it would have been much good anyway considering it is not implemented in V1 of the Compact Framework. After some playing around I was finally able to read the XSD information from a resource embedded in my app, and read the XML one node at a time whilst validating it against that XSD.

Here it is:

//1) Get a stream containing the XSD
Stream xsdStream = GetType().Assembly.GetManifestResourceStream("Eden.HandheldVendor.TaskFlow.DataImport.xsd");

//2) Create an XmlTextReader that uses this stream
XmlReader xsdReader = new XmlTextReader(xsdStream);

//3) Create an XmlSchema from the embedded XSD
System.Xml.Schema.XmlSchema xsdSchema = System.Xml.Schema.XmlSchema.Read(xsdReader, null);

//4) Create some XmlReaderSettings that use this XmlSchema
XmlReaderSettings readerSettings = new XmlReaderSettings();
readerSettings.ValidationType = ValidationType.Schema;
readerSettings.Schemas.Add(xsdSchema);

//5) Set an event for validation errors
readerSettings.ValidationEventHandler +=
    new System.Xml.Schema.ValidationEventHandler(readerSettings_ValidationEventHandler);

//6) Create the xmlReader that will read the XML file (xmlStream) using our reader settings
XmlReader xmlReader = XmlTextReader.Create(xmlStream, readerSettings);

//7) Now read the file; any validation errors raise the event registered in step 5
while (xmlReader.Read())
{
}
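The validation event handler itself isn't shown above; a minimal sketch of it would be something like this (whether you log the error or abort the import is up to you):

private void readerSettings_ValidationEventHandler(object sender, System.Xml.Schema.ValidationEventArgs e)
{
    // e.Severity is Warning or Error; e.Message describes which rule in the XSD was broken
    throw new InvalidOperationException("XML validation failed: " + e.Message);
}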

Roles

What is a Customer? If you are some kind of business service provider then it will be a Company, if you are a window cleaner then it will be a Person, but if you are a travel agent it could be either. What about a Supplier? You could argue the same as for a Customer, but you could also argue that a Company/Person could be both a Supplier and a Customer.

This is where Roles come in. A Supplier isn't something you are, it is something you do: you supply something, just as a Customer consumes something. I find that it is much better to enable "things" to perform more than one role; it is rare that life is simple enough for everyone to perform only a single role. The typical solution to this is the Party/Role pattern.

Party role pattern

A good point to make here is that this is a pattern, not a set of classes intended to be inherited from! This means that when you have a Company (alias "Party") you should create a CompanyRole class and associate the two, descending CustomerRole and SupplierRole from CompanyRole. The problem with approaching this simply as a pattern is that we still cannot make both a Person and a Company share roles. Another typical solution to this is to descend both Person and Company from a Party class. I personally only like to inherit classes for behaviour purposes, not in order to inherit properties/associations; inheritance is "I am" rather than "I have".

So, how do I implement this kind of behaviour in my own applications? I still use a Party/Role pattern, but I use a bit more aggregation:
Aggregated party role pattern

This model allows for a collection of roles, collected by being associated with a RoleHolder. Some roles may apply to any kind of object that performs roles, whereas some, such as EmployeeRole, may only be applied to a Person. To implement this functionality the RoleHolder should not be used directly; instead a concrete descendant should be used for each type of class: Person owns a PersonRoleHolder and Company owns a CompanyRoleHolder. A role may permit or deny being applied to a certain type of RoleHolder via its virtual MayApplyTo(roleHolder) method; likewise a concrete RoleHolder may reject a certain type of Role via its virtual MayAcceptRole(role) method.

This gives us a way to ensure that certain types of Roles cannot be applied to certain types of RoleHolders; there must be mutual agreement before an association may be made. Now we need a way of retrieving the true owner of the Role. This is achieved by overriding the abstract GetOwner() method of RoleHolder: employeeRole.RoleHolder.GetOwner() would return a Person, whereas customerRole.RoleHolder.GetOwner() could return either a Company or a Person.
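A bare-bones sketch of these classes, trimmed to just the members discussed (Person and Company themselves are assumed to exist elsewhere):

public abstract class RoleHolder
{
    public abstract object GetOwner(); // the Person or Company that ultimately owns the roles
    public virtual bool MayAcceptRole(Role role) { return true; }
}

public abstract class Role
{
    public RoleHolder RoleHolder; // set when the role is applied to a holder
    public virtual bool MayApplyTo(RoleHolder roleHolder) { return true; }
}

public class PersonRoleHolder : RoleHolder
{
    private readonly Person owner;
    public PersonRoleHolder(Person owner) { this.owner = owner; }
    public override object GetOwner() { return owner; }
}

public class EmployeeRole : Role
{
    // Employment only makes sense for a Person
    public override bool MayApplyTo(RoleHolder roleHolder) { return roleHolder is PersonRoleHolder; }
}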

Finally we need a uniform way to get to the RoleHolder of an object. Seeing as Person and Company do not have, nor need, a common ancestor class, the most appropriate way to check whether an object holds Roles and to retrieve its RoleHolder is to use an interface; IRoleHolder.GetRoleHolder is implemented on the business entities in order to provide this ability. A couple of nice methods to add to the RoleHolder class + IRoleHolder interface are:

bool HasRole(Type roleType);
Role GetRole(Type roleType);

This way we can write code like this:

IRoleHolder roleHolder = (Person as IRoleHolder);
if (roleHolder != null && roleHolder.HasRole(typeof(EmployeeRole)))
{
    EmployeeRole employeeRole = roleHolder.GetRole(typeof(EmployeeRole)) as EmployeeRole;
    employeeRole.TerminateEmployment(DateTime.Now);
}

Conclusion:

This approach allows a business entity to perform multiple roles at the same time. It also allows the same role to be applied to business entities that do not share a common ancestor class.

Free ECO book chapter

I've decided to release my book chapter "ECO III Services" free of charge. There are references in there to other chapters; please disregard these when reading. Any positive/negative feedback is welcome so that I may modify the chapter and improve it. There may be more chapters to come in the future, but that won't happen for some time judging by my current workload.

Take a look at http://MyEcoSpace.net

ECO is so fast!

I spent the day (March 2005) training someone how ECO works.

Between 9:30 and 12:30 we went through the basics, how to create a model in a package (and why). We then covered the different component types, whilst making a simple client/server app.

Between 13:45 and 17:00 we went on to
  1. Create a server application
  2. Convert the client to connect to the server instead of directly to the DB
  3. Make the client synchronise with the changes from the server
  4. Create a web service which connected to the server for persistence
  5. Create a website which connected to the server for persistence

That's a lot of work to get done in 3.5 hours at the best of times, but when you are also explaining why everything is done the way it is then this is surely a reflection of just how quick it is to develop applications using ECO.

Everything we did was ad hoc. I had no idea what the trainee wanted to cover before he arrived so everything was written from scratch. I'm quite frankly surprised that we managed to cover as much as we did!

What is ECO?

This question has been asked so many times. Jesper (one of the developers) recently explained it like this...


ECO, Enterprise Core Objects, is a model driven framework. In essence, it allows you to specify your application using a UML class model. This model is then transformed to source code, decorated with enough information to re-create the model information at run time.

The framework uses the model information (as contained in and re-created from the source code) to drive persistence, presentation, maintain the technical integrity of the business objects, manage bi-directional relations, derived associations and attributes, maintain constraints, offer services like undo/redo, object versioning and quite a bit more.

The value added proposition for you is that you can design your application on a 'higher level', without worrying about implementation details. Use the model not only for communication of ideas, but also as a part of your application. You can code your business logic on the same abstraction level as you design it. Typically this will allow you to produce more advanced applications faster, using less code. Less code means fewer bugs and simplified maintenance.

Adding runtime error messages to page validation

One of the requirements of a website I once created was the ability to model object constraints in OCL and then validate the current page against those constraints.

The problem with normal validators is that they are designed to validate individual controls rather than a list of constraints on an object. The approach I took was to create a validator which I can place on any web form and add as many errors to as I required.

The first step was to create a WebControl which supported IValidator

public class MultiValidator : WebControl, IValidator
{
}
I then added a list of strings to hold the error strings, and a method to add an error.
private List<string> Errors = new List<string>();

public void AddError(string message)
{
    Errors.Add(message);
}//AddError

When ASP.NET validates a page it enumerates all IValidators within its Validators property and calls IValidator.Validate(). To determine whether or not the page is valid it then checks IValidator.IsValid.

To add custom error messages at runtime I decided to create a StaticValidator class which always returns "false" from IValidator.IsValid. For each error message in my MultiValidator I could then simply create an instance of one of these validators.
[ToolboxItem(false)]
internal class StaticValidator : IValidator
{
private string errorMessage;

#region IValidator
void IValidator.Validate()
{
}//Validate

bool IValidator.IsValid
{
get { return false; }
set { }
}//IsValid
#endregion

public string ErrorMessage
{
get { return errorMessage; }
set { errorMessage = value; }
}//ErrorMessage
}
Now that the StaticValidator was written, all I needed to do was to add the required IValidator implementations to my MultiValidator class.
#region IValidator
void IValidator.Validate()
{
isValid = (Errors.Count == 0);
foreach(string error in Errors)
{
StaticValidator validator = new StaticValidator();
validator.ErrorMessage = error;
Page.Validators.Add(validator);
Validators.Add(validator);
}//foreach errors
}//Validate

bool IValidator.IsValid
{
get { return isValid; }
set { isValid = value; }
}//IsValid
#endregion

Within a webform, I would now
  1. Set "CausesValidation" to false on my submit button
  2. Validate my object
  3. Call MultiValidator1.AddError() for each error encountered
  4. Call Page.Validate()
  5. Check Page.IsValid as normal
Using a ValidationSummary I could then display the broken constraints to the user for rectification. The whole source code is listed below....

MultiValidator.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace DroopyEyes.Web.Controls
{
public class MultiValidator : WebControl, IValidator
{
#region Members
private bool isValid = true;
private string errorMessage = "";
private List<string> Errors = new List<string>();
private List<IValidator> Validators = new List<IValidator>();
#endregion

#region IValidator
void IValidator.Validate()
{
isValid = (Errors.Count == 0);
foreach(string error in Errors)
{
StaticValidator validator = new StaticValidator();
validator.ErrorMessage = error;
Page.Validators.Add(validator);
Validators.Add(validator);
}//foreach errors
}//Validate

bool IValidator.IsValid
{
get { return isValid; }
set { isValid = value; }
}//IsValid
#endregion

protected override void OnInit(EventArgs e)
{
base.OnInit(e);
Page.Validators.Add(this);
}

protected override void OnUnload(EventArgs e)
{
if (Page != null)
{
Page.Validators.Remove(this);
foreach(IValidator validator in Validators)
Page.Validators.Remove(validator);
}//Page != null

base.OnUnload(e);
}


public void AddError(string message)
{
Errors.Add(message);
}//AddError


#region Properties
[Bindable(true)]
[Category("Appearance")]
[DefaultValue("")]
public string ErrorMessage
{
get { return errorMessage; }
set { errorMessage = value; }
}//ErrorMessage
#endregion
}
}
StaticValidator.cs
using System;
using System.ComponentModel;
using System.Web.UI;

namespace DroopyEyes.Web.Controls
{
[ToolboxItem(false)]
internal class StaticValidator : IValidator
{
private string errorMessage;

#region IValidator
void IValidator.Validate()
{
}//Validate

bool IValidator.IsValid
{
get { return false; }
set { }
}//IsValid
#endregion

public string ErrorMessage
{
get { return errorMessage; }
set { errorMessage = value; }
}//ErrorMessage
}
}

Reverse derived columns

The focus of this post will be "Event derived columns". Jan Nordén (Borland) pointed me in the direction of these recently when I asked him how to solve a GUI problem I had. When I used Bold for Delphi there was a really nice GUI component called a BoldSelectionListBox. This component would show a list of items with a CheckBox next to each row; ticking / unticking a box would add / remove an association between the selected object and some other object of my choice.

Using this BoldSelectionListBox I would be able to specify a User (for example) as the context and then have a list of all Groups in a kind of CheckListBox. Ticks would appear in all CheckBoxes where the User belongs to the group listed, and no tick where they are not part of the group. The extra clever part, of course, is that by ticking a CheckBox Bold would create the link object required to tie the User to the Group, and add it to User.Groups (and of course Group.Users).

Seeing that ECO does not introduce any GUI controls (it instead provides .net DataBinding interfaces so that you can use standard controls), I suspected that I would not be able to achieve the same sort of effect. Jan kindly sent me a small demo showing how to achieve this using only DataGrids. I soon had this logic written into my own app, and it worked beautifully!

I added expression handles for my Groups (ehGroups) and Users (ehUsers), linked each of them to a grid and added Add / Delete buttons. I set each of these expression handles to retrieve all instances: "Group.allInstances" and "User.allInstances" respectively.

On the left side of my GUI I had all of my Users listed, and on the right I had all of my Groups. I now wanted to add a CheckBox next to each Group, so that I could specify whether the user belonged to the Group or not. The first problem to tackle was knowing which User was currently selected in the grid. To do this I added a CurrencyManagerHandle named "chCurrentUser", set its RootHandle to ehUsers, and its BindingContext to UsersDataGrid. Now chCurrentUser holds a reference to the current User, nice and easy.

Next I needed to get a CheckBox column into my GroupsDataGrid and set AllowNull to False. To do this I added a GroupSelected column to ehGroups and set its type to System.Boolean. Note: the "Add" button in the Columns editor has a DropDown icon next to it; click that and select EventDerivedColumn. I then added the additional column to my GroupsDataGrid; to make sure it was rendered as a CheckBox I chose the DropDown list on the Add button and selected DataGridBoolColumn. Finally I set its MappingName to GroupSelected.
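
If you prefer to set the grid column up in code rather than through the designer, the standard WinForms calls look roughly like this. This is only a sketch; the designer normally generates the equivalent, and the table's MappingName (shown here as "Group") and header text are illustrative and depend on the list the grid is actually bound to:

//Sketch only - the table/header names below are illustrative
DataGridTableStyle tableStyle = new DataGridTableStyle();
tableStyle.MappingName = "Group";

DataGridBoolColumn groupSelectedColumn = new DataGridBoolColumn();
groupSelectedColumn.MappingName = "GroupSelected";
groupSelectedColumn.HeaderText = "Member";
groupSelectedColumn.AllowNull = false; //two-state CheckBox, as described above

tableStyle.GridColumnStyles.Add(groupSelectedColumn);
GroupsDataGrid.TableStyles.Add(tableStyle);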

So far we have everything we need to see the CheckBoxes, but no way to tell the DataGrid whether each checkbox should be ticked or not. To do this we need to write some code in ehGroups' DeriveValue event, but first I want to add something to make that code a little easier to write. I added a new ExpressionHandle, ehUserGroups; its RootHandle was the CurrencyManagerHandle (chCurrentUser) and its expression was "self.groups". This would allow me to easily check which Groups the current user belongs to.
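
For reference, the handle wiring described so far amounts to something like the following in code. This is only a sketch using the property names mentioned above; these settings are normally made in the designer, so the generated code (and the exact type accepted by the BindingContext property) may differ:

//ehUsers / ehGroups list every instance of each class
ehUsers.Expression = "User.allInstances";
ehGroups.Expression = "Group.allInstances";

//chCurrentUser tracks the row currently selected in UsersDataGrid
chCurrentUser.RootHandle = ehUsers;
chCurrentUser.BindingContext = UsersDataGrid;

//ehUserGroups evaluates "self.groups" against the current user
ehUserGroups.RootHandle = chCurrentUser;
ehUserGroups.Expression = "self.groups";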

Now to write some code to calculate the value of ehGroups.GroupSelected. This is done in the ehGroups.DeriveValue event, like so:
private void ehGroups_DeriveValue(object sender,
Borland.Eco.Handles.DeriveEventArgs e)
{
switch (e.Name) //One event for all derived columns
{
case "GroupSelected":
//Get the list of groups for the current user
IElementCollection groups =
(IElementCollection) ehUserGroups.Element;

//Avoid a null reference exception
if (groups == null)
{
//return an element representing the constant "false"
e.ResultElement =
EcoSpace.VariableFactoryService.CreateConstant(false);
return;
}

//Observe the ehUserGroups element, this tells us
//when the element changes so that we may
//invalidate the GUI
ehUserGroups.SubscribeToElement(e.ResubscribeSubscriber);

//Also observe the items within the list
groups.SubscribeToValue(e.ValueChangeSubscriber);

//If user.groups contains the current Group then return
//an element representing the constant "true"

if (groups.Contains(e.RootElement))
e.ResultElement =
EcoSpace.VariableFactoryService.CreateConstant(true);
else
//Otherwise return an element
//representing the constant "false"
e.ResultElement =
EcoSpace.VariableFactoryService.CreateConstant(false);

break;

default:
throw new NotImplementedException(e.Name + " not derived properly");
}//switch
}//ehGroups_DeriveValue
And finally we need a way to allow the user to tick / untick a CheckBox and have the relevant link added to or removed from the user.groups association. This is done in the ehGroups.ReverseDeriveValue event, like so:
private void ehGroups_ReverseDeriveValue(object sender,
Borland.Eco.Handles.ReverseDeriveEventArgs e)
{
switch(e.Name) //One event for all derived columns
{
case "GroupSelected":
//Get a list of current groups for the user
IElementCollection groups =
(IElementCollection) ehUserGroups.Element;

//Avoid a null reference exception
if (groups == null)
return;

//Typecast the value being set to
//a Boolean (from the datagrid CheckBox)
if ( (Boolean) e.Value)
{
//If the checkbox has been checked,
//and the ticked Group is not in
//user.groups then add it
if (!groups.Contains(e.RootElement))
groups.Add(e.RootElement);
}
else
{
//If the checkbox has been unchecked,
//and the ticked Group exists in
//user.groups then remove it
if (groups.Contains(e.RootElement))
groups.Remove(e.RootElement);
}
break;
}//switch
}//ehGroups_ReverseDeriveValue
It may take a little getting used to, but if you read it through a few times you should get the gist of it. This basically gives the developer the power of reverse derived attributes for use solely within the GUI, which means we can do some clever things with ECO objects without having to include reverse derived attributes in the model just to satisfy GUI requirements.

Moving home

This is my first blog entry on this site. I'm moving from my previous blog address over to here. At first I'll be reposting some of my more interesting entries from my old blog.