For Windows Phone, By Surface Pro

Surface Pro and Windows Phone

I was just reading through my usual news feeds, one of them being Windows Phone Central, when I came across a cool post from Microsoft, “Geek out with Surface and win!“. So I started thinking about what I could do. I am one of the very few people in South Africa that has a Surface. The shipping is what kills you, but I had to get one. I am also a developer, so it seemed natural to get a tablet that ran Visual Studio. Only the tablets running Windows 8 Pro were of any real value, so after a bit of thought, I got myself a Surface Pro – and have been loving it ever since.

Just to show how great this tablet really is, I can say that I gave my PC away and never once regretted that decision. I have developed an app for Windows Phone 8 exclusively on the Surface. It is available on the Store: GoMetro | Windows Phone Apps+Games Store. I am proud of this app as it is my first for Windows Phone as well as it being developed on my favourite device – my Surface.

Some time ago, before I started the app, I wrote a post on what I was going to do: “GoMetro for Windows Phone 8”.

I love my phone and my tablet, and I develop apps for both. I am working on several projects ranging from websites to Windows Phone and desktop. C# is my main language, but I also do extensive JavaScript and CSS development for those websites.

As you can see, my Shift and Ctrl keys are a bit worn from my continuous use! I wouldn’t recommend the Touch Cover for extensive use, certainly not for development, but it can be done – you just have to get used to typing.

Overall the tablet is very good; it can handle the average game without a real problem. It is excellent for surfing the web, social networking, emails and other light use. I read my digital comics on this device and they are clear, crisp and vivid.

On more serious tasks it is also outstanding. Development is slightly harder due to the small screen, but I developed an app without any external monitors, so it can be done. I have done some photo editing and 2D drawings without any problems. I run a virtual machine using Hyper-V for one of my projects and have not noticed a real problem.

The OS is snappy and the apps that I use are very good: Mail, People, Facebook, Internet Explorer, Skype, Nokia Music, Adera (game), Fresh Paint and Autodesk Sketchbook. There have been some tremendous improvements since Windows 8.1.

The USB port in the charger is used to charge my phone and the stylus is used for my assignments and note taking in both OneNote and Word. I have done some nice drawings and the stylus is pretty accurate. The USB port is used mainly to connect my external HDD, but is used for other things, even for emergency charging of my phone when I don’t have my charger around.

The sound quality is good, sometimes a bit soft, but that doesn’t really affect me as I use it mostly after hours and late into the night. Movies are awesome, colours are good and there is no lag in the images that I can see.

I find the battery life fairly substantial; setting my brightness to about 75% usually extends it for several hours – I have never actually measured, but it is more than 5 hours. Often, in bed at night, I can read my comics with the brightness turned right down and it’ll last for hours, even from below 50%.

One more thing: SkyDrive. It is integrated into the OS, and that is just cool. The most totally awesome feature is the photo file management: I have over 8GB on SkyDrive, but it only takes up 170MB on disk. This is the greatest and coolest of all those cloud guys, especially as the tablet has limited storage of 128GB. I have just clocked 50% on my drive, so the space is not being swamped, even with Office, Visual Studio and its SDKs. I do, though, have an external drive for my music, movies and other large games such as Battlefield.

I look forward to trying and maybe buying the Surface Pro 2…

GoMetro for Windows Phone 8

I was just recently given the opportunity to work on a Windows Phone 8 app for the GoMetro system. Currently they have the mobile website and are working on a new-and-improved version, but not yet a native Windows Phone 8 app. As I have a shiny new Nokia Lumia 820, it’s no good going to the web browser each time I need to take a train.

They have a REST API that I will be using, but I hope to use some caching and personalization on the user’s device. The next version of the GoMetro engine will have more features like these, but until then, it will have to remain local only. Of course, once the next version is out, I should just be able to import them into the online database.

I have been working on some concept designs for this app, which I show below. I am using ProtoShare instead of manually drawing each screen. This is a cool tool for quick screen creation, although it is not free. I will give it a try and see what happens.

This application so far has 3 main areas of importance:

  1. Announcements
    This is the area where we will be able to display any notifications about the lines, Metrorail and the app.
  2. Favourites / Shortcuts
    This is probably the most important feature for me as I often don’t commute to work on a specific train, but rather the one around 8:30, sometimes before and sometimes after. So instead of going through the whole workflow to get the next time, I can quickly see the scheduled arrival. As the current API does not support storing this in the cloud, I will keep it in a small SQLite DB on the actual device, to be synced at a later date.
  3. Train Timetables
    This is the usual workflow for getting the times of the next train. The next image follows on from the point where the user selects a province that he/she is interested in.

Main Screen 

The next main chunk of application is the actual finding of the train timetable. This is going to be the same as the website in structure.

We get to this level after the user selects a province from the main screen. He/she then selects a line and proceeds to select the departure and arrival stations. I would like to be able to specify the day and time of the trip as well, but I need to do some more research on the availability of this first. When the user is ready, all that is needed is to tap the arrow button. The final screen is a list of the departure times with a few options:

  • Switch the direction of travel
  • Go to the earlier times
  • Go to the next set of times
  • Add this route to favourites

Timetable Workflow

These are just the preliminary designs and they will probably change. If you have any comments, just leave a comment below. Or you can vote.

My Microsoft Surface Pro

Earlier this week my new Surface arrived! It is very, very cool and I haven’t stopped using it since it arrived.

Being the Pro model, I have already installed Visual Studio 2012 :). At work I did use it a bit, but I have not done any dev yet as I don’t have a Bluetooth keyboard.

The shipping costs were quite high due to me having to import it. I used Amazon, but had to use a third-party vendor, WorldWide Distributors, as Amazon does not stock the tablet. (The total costs worked out to 2/3 tablet and 1/3 shipping and admin fees.) I really feel that the cost was worth it. I can use it as my “normal” desktop as well as a tablet. I am not really a gamer, so the lack of a dedicated graphics card does not really affect me. I also think that the USB port on the side was a good move by Microsoft, as I can now plug in my flash drive and/or external hard drive.

I am currently looking into a portable hard drive and a cable for my monitor (as the device has a mini DisplayPort and I have an old DVI monitor). I will probably buy a cheap bluetooth keyboard and mouse until I get one of the Touch/Type Covers (I am actually writing this with that pen).

I have an album on Facebook with some photos of my surface. Yay!

Well anyway, this is just an update; I still want to do some more blogging this month.

I am a contributor to Mono

I just remembered that I had previously made a pull request in the Mono project, and when I went to have a look today, I saw that it was accepted! Yay! Now I can say I have contributed to the coolest project in the world. I don’t know if those two lines really count, but at least I am part of the Mono team (at least I like to think so).

I suppose this is a way to realise that no matter how small a contribution to a project, endeavour or anything else, a contribution really helps others. Maybe I will contribute some more in future, but at least now I have that one line moved for the better.

I came across this when I was creating my port for Windows Phone / Silverlight.

New Domain Name for my Site

It’s all so exciting when one gets a new place to stay or moves to a new house; that’s why, when I finally got my domain registered, my face was all lit up. I am not a super-blogger like some other great devs out there, but that is no reason why I shouldn’t be happy about getting my own little bit of space in the fluffy clouds of the Internet. My new domain is, if you are not already there: http://dotnetdevddict.co.za. Yes, it is a tad too long and not a very good one, but until I can think of a better one, it will have to do.

I am busy re-working my Mono.Data.Sqlite.Orm library for an app that I will be writing: a comic-book reader. I recently started reading comics again – I especially liked the older Batgirls (2000 – 2006 editions). After signing up with Comixology, I found that it is a good way to kill time when commuting to work. I decided that I needed a read for the train, and being the techy guy that I am, I decided to read electronic comics. This inspired me to create an app that will allow me to read my comics, as well as any that I scan in or get from other sites, on my way to work.

Not to be all about comics, I also did some reading on some very cool programming languages. Or should I say strange ones? My favourite so far is LOLCODE. What is LOLCODE? Check out this simple “Hello World!” app:

HAI
CAN HAS STDIO?
VISIBLE "Hello World!"
KTHXBYE

Check out CodeSchool’s online LOLCODE tutorials. Here is a more complex example:

HAI

  I HAS A ANIMAL
  GIMMEH ANIMAL

  BOTH SAEM ANIMAL AN "CAT", O RLY?
    YA RLY
      VISIBLE "U HAS A KITTEH"
    NO WAI
      VISIBLE "KITTEH R 2 GUD 4 U"
  OIC

KTHXBYE

I’ll explain what is happening line-by-line:

Start the app
-- Declare a variable "ANIMAL"
-- Request a value from the user and store it in "ANIMAL"
-- Compare equality of the variable "ANIMAL" with the literal "CAT"
-- Evaluates to True
-- -- Write to the screen
-- Evaluates to False
-- -- Write to the screen
-- Close the comparison block
End the app
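For comparison, here is the same program translated to plain JavaScript, with the input coming from a function argument instead of GIMMEH:

```javascript
// the LOLCODE example above, translated to plain JavaScript;
// input comes from a function argument instead of GIMMEH
function checkAnimal(animal) {
    // BOTH SAEM ANIMAL AN "CAT", O RLY?
    if (animal === "CAT") {
        return "U HAS A KITTEH";      // YA RLY
    } else {
        return "KITTEH R 2 GUD 4 U";  // NO WAI
    }
}

console.log(checkAnimal("CAT")); // U HAS A KITTEH
```

Somehow the LOLCODE version has more character.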

What more can you want? It supports variables, functions, loops and even plugins! I am going to create a little app – probably clone one of my QBasic ones from the early days (those apps are actually ports of old, old ZX Spectrum programs :D). One of the reasons I like this one is that it can compile to IL! Who says programming is boring?

Making the Wijmo Drop Down Listen

Wijmo is an awesome HTML/JavaScript toolkit that provides styled widgets for any website or web application. They provide great integration with the equally brilliant KnockoutJs.

I was using the Wijmo drop down control, but I found something a bit strange: the control does not update its displayed value when the observable changes. The control is actually a very thin wrapper over Knockout’s default binding for the <select> element. Knockout listens to the element’s updates and can update the element from the observable, but Wijmo does neither: it only updates the HTML that the control is bound to. Thus, the value shown on the control is never updated at all; the observable’s value is updated only because the widget updates the HTML and Knockout then updates the observable as it listens to the HTML.

This posed a problem, as I would have to manually listen to the observable and then update the widget; this is tedious and I should not have to do it. So I wrote a wrapper for it and extended the wijdropdown binding:

(function () {
    // override the wijdropdown to update the drop down display label when the selection changes
    var proxied = ko.bindingHandlers.wijdropdown.init;
    ko.bindingHandlers.wijdropdown.init = 
        function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {

        var options = allBindingsAccessor();
        var value = options.value;

        // subscribe to the changes and update
        if (ko.isObservable(value)) {
            value.subscribe(function () {
                $(element).wijdropdown("refresh");
            });
        }

        // continue as normal
        return proxied.apply(this, arguments);
    };
})();
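The technique here is plain function proxying, and it works for wrapping any function; stripped of Knockout and Wijmo (the names below are purely illustrative), it reduces to:

```javascript
// generic sketch of the proxy pattern used above: keep a reference
// to the original function, replace it with a wrapper that does the
// extra work, then delegate to the original
var calls = [];

var handlers = {
    init: function (x) {
        calls.push("original");
        return x * 2;
    }
};

var proxied = handlers.init;
handlers.init = function () {
    calls.push("wrapper"); // the extra behaviour, e.g. a subscription
    return proxied.apply(this, arguments); // continue as normal
};
```

Calling handlers.init(3) now runs the wrapper first and still returns 6, just as the extended wijdropdown binding runs the subscription code before handing over to the original init.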

The original code would have required me to listen to each observable with this code:

myObservable.subscribe(function () {
    $("#selectElement").wijdropdown("refresh");
});

But the extension does away with this and I can simply bind to the element:

<select data-bind="value: pageSize, wijdropdown: {}">

The reason that there is a “double” binding is that the default Knockout binding is sufficient to work the drop down, and the wijdropdown just adds the styling.

Changing Bing Stylesheets

I have never liked Bing’s results page style; there is a decided lack of colouring – or anything, for that matter. So, using Google Chrome, I tweaked the stylesheet.

Here is the before:

 

Here is the after:

 

And all it took was a little CSS:

#sw_hdr {
    border-bottom: #6190FF 1px solid;
    padding-bottom: 12px;
    background: #DBE6FF;
}
#id_h {
    height: 75px;
    background: #DBE6FF;
}
.sw_bd {
    background: #FFF;
}
.sb_form_go {
    background: #DBE6FF;
    height: 25px;
}
.sb_ph .sb_count {
    color: #777;
}
.sw_logo {
    background-image: url('http://www.logostage.com/logos/bing.png');
    background-color: #DBE6FF;
    background-size: 75px 26px;
}

It is just a tiny splash of colour and everything looks better. I used a different image because I wanted the transparency. Maybe Microsoft will change it a bit and then we can all celebrate by doing a few extra searches – maybe even click on some ads?

JavaScript Ajax & ASP.NET WebAPI

Recently I was working on an HTML/JavaScript web application and I was having one of those moments where everything was working fine except for one small thing: the server always seemed to receive NULL.

I tried the usual check-for-null-before-sending, but this was not the problem. Maybe it was breaking in the jQuery Ajax call? Is that even possible? 🙂 Everything was perfect, including when checking the request data with Internet Explorer’s Network Traffic Capturing Developer Tool. It was sending the data across. The JSON was valid and everything.

I decided it was the server. It was a basic ASP.NET WebAPI. All the GETs were working so why was the POST failing? I checked the ApiController’s http context request content. That was correct. The only thing that was wrong was the method’s object parameter value being NULL.

So what was it? The client was sending the right data and the server was receiving it, but the object was NULL.

Here is the JavaScript code:

$.ajax({
    url: '/api/ingredient',
    type: 'POST',
    data: JSON.stringify(ingredient),
    contentType: 'json',
    success: function() { ... },
    error: function() { ... }
});

That was perfect. Now on the server:

public class IngredientController : ApiController
{
    public void Post(IngredientDto ingredient)
    {
        // it failed here as "ingredient" was NULL
    }
}

After searching for some time and trying all sorts of things, I finally found where I went wrong. Now, we all know Microsoft for not being very standards compliant – I mean, look at Internet Explorer before version 9; those were pretty glum times. But the problem lay with Microsoft being too standards compliant. The problem lay in the small string, “json”: it is not the right string. Of course, if this were a strongly typed setting, an enum perhaps, this would never have happened. (Look out for my upcoming post on Type Safety)

The informal standard, according to the Internet Assigned Numbers Authority’s (IANA) media types and the Internet Engineering Task Force’s (IETF) RFC 4627:

The official MIME type for JSON text is “application/json”.

Wow. What a waste of time. And of course, as soon as I changed the Ajax contentType from “json” to “application/json”, everything JustWorked™.
So the new code is:

$.ajax({
    url: '/api/ingredient',
    type: 'POST',
    data: JSON.stringify(ingredient),
    contentType: 'application/json',
    success: function() { ... },
    error: function() { ... }
});

I hope this helps someone to avoid what I was doing: wasting time. But I did learn a few other things along the way, all was not lost.

Pre-Loading Silverlight Prism Modules & Assemblies

I was developing a Unity/Prism Silverlight enterprise application which used many shared assemblies and several modules. I wanted to show a ‘startup progress’ for the core assemblies so the application would have a better user experience, especially for the users’ first few moments of the application.

Problem

The Silverlight plugin downloader only reported the progress for the shell, so the user experience was very poor due to the lack of a real gauge of the total progress. Even though the entire application was about 7 MB, the shell was only 500 KB. This resulted in the built-in progress bar reaching 100% very quickly, way before the actual download had reached 10%. This was not really a problem on faster connections, but on slower connections there was a long delay before the application really started. This delay exists because the shell has finished downloading, but Silverlight is still downloading all the other assemblies and modules. These other assemblies make up the main part of the application and are thus more important than the shell.

So what I wanted to do was find some way to hook into the actual Silverlight download requests for all the files and then use this instead of the built-in progress reporting. However, there doesn’t seem to be any way to do this from JavaScript.

Solution

I know that almost all browsers support caching, and subsequent loads from the cache are almost instantaneous. I can use this to my advantage. So what I did was pre-download the files that would have been downloaded by the built-in downloader. Then, when the Silverlight control starts its own download, all the required files are already in the browser cache. The application no longer needs to download the files, and startup is very quick, so there is no need for a Silverlight progress section here.

Implementation

This solution calls for several things:

  • I don’t want any hardcoded values, such as sizes, filenames or URLs.
  • The splash screen should be seamless with the actual Silverlight application.
  • The JavaScript downloader should start first and then switch to the Silverlight control once it is finished.

The first thing that I needed to do on the server was to create a way for my JavaScript code to get a list of all the files, and their sizes, that it needed to download. Just to keep things simple, I created an ASP.NET Generic Handler that returns a JSON string of all the files. Because Silverlight downloads the files from our “ClientBin” folder, my handler simply enumerates the files in this directory and returns the list. I also didn’t want to build up the resulting string manually, so I used a DataContractJsonSerializer that serializes the array of files – I created the returned types as internal classes of the handler.

On the client, I can request the list then calculate the total size for the progress and start downloading. As each file finishes downloading, I can update the progress message and it will be more accurate to what is really happening.
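The bookkeeping behind that progress message is simple arithmetic. Here is a minimal sketch, independent of any DOM code; the function name is mine, but the Url/Size fields match the handler’s response:

```javascript
// track pre-download progress across a list of { Url, Size } files
function createProgressTracker(files) {
    var totalSize = files.reduce(function (sum, f) { return sum + f.Size; }, 0);
    var downloaded = 0;
    return {
        fileCompleted: function (file) {
            downloaded += file.Size;
        },
        percent: function () {
            return Math.round(downloaded * 100 / totalSize);
        }
    };
}

var tracker = createProgressTracker([
    { Url: "a.xap", Size: 300 },
    { Url: "b.xap", Size: 700 }
]);
tracker.fileCompleted({ Url: "a.xap", Size: 300 });
// tracker.percent() is now 30
```

The real code below does exactly this, just with the running totals kept in page-level variables so the Silverlight callback can see them too.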

The splash screen is a very simple HTML div that is displayed over the actual Silverlight object. The Silverlight object is actually hidden by default using CSS styles, to prevent it from automatically starting up. When my JavaScript reports that all the files are downloaded, I hide the splash screen div and show the Silverlight control. This provides a very accurate, user-friendly way of reporting progress. And when the download reaches 100%, all the required files are all finished and on the client.

Notes / Observations

I did have a problem when I pre-downloaded the shell xap file: Silverlight did not start the actual application. So to work around this, I decided to download all the files except the shell and let Silverlight handle that one. Now this caused slight problems in my download mechanism: the progress would be off slightly and I would have no way of reporting the shell’s progress. Fortunately, Silverlight has a JavaScript callback for the shell download: the onSourceDownloadProgressChanged function.

Because I know the size of the shell – from my handler – and I get notifications from Silverlight about this file’s progress, I can combine the two into my progress message to get a nice value. So effectively, my download stops at 90% and then the Silverlight control takes over and notifies me of the shell’s progress, from which I work out how far along the whole download really is.
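That combination can be expressed as a small formula: the bytes already pre-downloaded, plus the shell’s fractional progress times the shell’s size, over the total. A sketch (the names are mine):

```javascript
// combine the bytes we pre-downloaded with Silverlight's fractional
// progress report for the shell (shellFraction is 0..1, as in
// eventArgs.progress)
function combinedPercent(preDownloadedBytes, shellFraction, shellSize, totalSize) {
    var bytes = preDownloadedBytes + shellFraction * shellSize;
    return Math.round(bytes * 100 / totalSize);
}

// e.g. everything but a 700 KB shell of a 7 MB app is already cached:
combinedPercent(6300000, 0.0, 700000, 7000000); // 90 - shell not started
combinedPercent(6300000, 0.5, 700000, 7000000); // 95 - shell halfway
combinedPercent(6300000, 1.0, 700000, 7000000); // 100 - done
```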

Currently I am using both the HTML element and the Silverlight mechanism for splash screens. For the first bit, I use the HTML element and when the Silverlight takes over, I hide it and the Silverlight (xaml) version shows. Both are exactly the same so the user does not notice the switch at all.

Code

This is the server-side handler. The browser cache mechanism for the URLs is case-sensitive, so remember to use the same casing that Silverlight is going to use for its requests.

public class BrandDirectorFilesHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        var serializer = new DataContractJsonSerializer(
            typeof(DownloadableFileResponse));
        serializer.WriteObject(
            context.Response.OutputStream, 
            GetFilesInRoot());
    }

    private static DownloadableFileResponse GetFilesInRoot()
    {
        // logic to read the file list from the file system
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

Here are the resulting objects that I serialize for the client.

[DataContract]
internal class DownloadableFileResponse
{
    [DataMember]
    internal IList<DownloadableFile> Files { get; set; }

    [DataMember]
    internal long TotalSize { get; set; }
}

[DataContract]
internal class DownloadableFile
{
    [DataMember]
    internal string Url { get; set; }

    [DataMember]
    internal long Size { get; set; }
}
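Serialized with the DataContractJsonSerializer, the response the client receives would look something like this – the property names come from the classes above, but the filenames and sizes are made up for illustration:

```javascript
// illustrative shape of the handler's response; the property names
// come from the DTOs above, the values are invented
var json = '{"Files":[' +
    '{"Size":700000,"Url":"ClientBin/Shell.xap"},' +
    '{"Size":6300000,"Url":"ClientBin/Modules.xap"}],' +
    '"TotalSize":7000000}';

var response = JSON.parse(json);
// response.TotalSize is 7000000 and response.Files has 2 entries
```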

This is the part of the whole system that controls the downloads of the files from the client side.

var totalSize = 0;
var progress = 0;
var xapFile = null;

// Silverlight default functions
function onSourceDownloadProgressChanged(sender, eventArgs) {
    if (xapFile === null) {
        // there may have been a problem obtaining the file list
        // so create a dummy state for the silverlight to take 
        // over from - these values are just the last known 
        // values from a previous run that I did
        totalSize = 7000000;
        progress = totalSize * 0.88;
        xapFile = {
            Size: 700000
        };
    }

    var shellProgress = progress + (eventArgs.progress * xapFile.Size);
    var shellPercent = shellProgress / totalSize;
    var text = Math.round(shellPercent * 100).toString();

    // this bit is for the actual Silverlight splash.xaml
    sender.findName("ProgressText").Text = text;
    // this is the html element
    $("#ProgressText").text(text);
    // I do both just to be sure
}

// the javascript for pre-caching the assemblies
$(function () {
    // get this apps root
    var urlParts = location.href.split('?');
    var mainUrl = urlParts[0];
    mainUrl = mainUrl.substring(0, mainUrl.lastIndexOf('/'));

    // download the list of files from the handler
    $.ajax({
        url: mainUrl + '/BrandDirectorFilesHandler.ashx',
        success: function (data) {
            var response = $.parseJSON(data);

            // set the total size for the silverlight bit
            totalSize = response.TotalSize;

            // don't download the shell xap file
            // get the xap filename from the Silverlight DOM element
            var shellXapPath =  
                $("#silverlightControl")
                .children()
                .filter(function (index, child) {
                    return child.name === "source"; 
                })[0].value;
            // find the shell in the list from our server
            xapFile =
                response.Files
                .filter(function (item) {
                    return shellXapPath.indexOf(item.Url) !== -1;
                })[0];

            // start downloads
            $(response.Files).each(downloadFile);
        },
        error: function () {
            // default to silverlight on any errors getting the list
            onAllDownloadsComplete();
        }
    });
});

// all downloads are complete, show the silverlight control
function onAllDownloadsComplete() {
    $("#silverlightControlHost").show();
    $("#htmlLoading").hide();
}

// initiate the download for each file
function downloadFile(index, file) {
    if (file !== xapFile) {
        $.ajax({
            url: file.Url,
            complete: function () {
                // download of the file is complete; we can ignore
                // errors, as Silverlight can deal with them
                progress += file.Size;
                var percent = Math.round(progress * 100 / totalSize);
                $("#ProgressText").text(percent.toString());

                if (progress >= totalSize - xapFile.Size) {
                    onAllDownloadsComplete();
                }
            }
        });
    }
}

Systems & Validation

At Jam Warehouse we had an interesting problem: Data Validation. This in itself is not all that interesting, but if you think about it, what does this really mean? According to Wikipedia:

data validation is the process of ensuring that a program operates on clean, correct and useful data…

And in the case of BrandDirector, not only do we prevent the user from saving invalid data in the database, but we also ensure that we don’t do any processing with invalid data.

Ah, now we know what to do! Just put lots of checks in the system. Checks like:

  • ‘if number > 100 or < 0, then show an error’
  • ‘if text is not an email address, then show an error’

But, what if the business model consists of hundreds of objects? This is going to get very cluttered and very difficult to maintain. So obviously we need some sort of structure in our code that allows us to create our models without clogging up the code files with numerous bits of check logic.

But before we start to create that awesome code, we need to know what we are working with: BrandDirector is a large client-server system with many domain objects. It is a web-based system that effectively allows many users to input data, which then gets saved on the server. (This is of course a gross understatement of what BrandDirector actually does, but that is not the problem.) I need to ensure that the data from the user, before processing and/or saving, is both correct and clean.

Data validation, from a user's perspective, ensures that any data/information from the system is both reliable and useful for business. But, I am not a user, I am going to do the implementing. What I need to do is provide myself a way of adding validation to all those objects without messing up my immaculate code (at least that's what I think). Obviously, we want to tell the user when the data is invalid and instruct him on how to correct it. So, what we are working to provide is: clean code, clean data and error messages.

What we want

Now, we are a C# shop and we like to use all those great features of the C# language, such as attributes and auto-properties. For those that aren't C# fans like me, here is a sample of what I want to write:

// Model
public class Ingredient
{
    [MaximumLength(50)] // <- attribute validation rule
    public string Name { get; set; } // <- auto property
}

This is what I would have had to write for each model property, had it not been for the frameworks we will see soon:

// Model
public class Ingredient
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            if (name != null && name.Length > 50)
                AddError("Name", "Name must be less than 50 chars");
        }
    }
}

As you can see, it is way more code and just plain ugly. The first version is both neat and does almost exactly the same thing. It allows the user to set the value, and the UI will then display the error message if need be. I say 'almost' because the first does no checking (at least not yet). How are we going to get that checking into the first class? Well, we can use some great frameworks out there that do the checking without needing us to change our code at all.
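To make the idea concrete outside C#, here is a tiny sketch in JavaScript of the same separation – a rule table checked by a generic validate function. All the names here are invented for illustration; this is not FluentValidation code:

```javascript
// declarative validation: rules live in one table instead of
// being scattered through property setters
var rules = {
    Name: function (value) {
        return value.length <= 50 || "Name must be 50 chars or less";
    }
};

// run every rule against the matching property of an object
function validate(obj) {
    var errors = [];
    for (var prop in rules) {
        var result = rules[prop](obj[prop]);
        if (result !== true) {
            errors.push({ property: prop, message: result });
        }
    }
    return { isValid: errors.length === 0, errors: errors };
}

var result = validate({ Name: "This ingredient name is definitely longer than fifty characters" });
// result.isValid is false
```

The model stays clean, the rules stay in one place, and the checking can be run from anywhere – which is exactly what the two frameworks below give us in C#.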

Solving the problem

The two frameworks that are needed are FluentValidation and PostSharp. FluentValidation is a framework that allows us to create rules for a particular type of object and then provides a means to validate an object against those rules. This means is called a 'Validator':

// IngredientValidator validates an object of type 'Ingredient'
public class IngredientValidator : AbstractValidator<Ingredient>
{
    public IngredientValidator()
    {
        // This neatly allows us to create a rule for 'Name'.
        RuleFor(x => x.Name).MaximumLength(50);
    }
}

Using the validator allows us to write a very neat and easy-to-read section of code:

public void SendIngredientToServer(Ingredient myIngredient)
{
    var validator = new IngredientValidator();
    var result = validator.Validate(myIngredient);

    if (result.IsValid)
        SendIngredient(myIngredient);
    else
        ShowErrorMessages(result.Errors);
}

But I don't want to have to do even this small check every time I press the save button. I want the check to run every time I change the properties, as well as when I press save. And what I really want is for that save button to be disabled when the data is invalid. This is where PostSharp is really useful. It allows us to modify the compiled assembly and insert all the checks for us on each property. (We create 'Aspects' that allow us to write the boilerplate code that is applied to each property.) This causes the validator to be run every time a property's value changes. Now all we have to do is this:

public void SendIngredientToServer(Ingredient myIngredient)
{
    SendIngredient(myIngredient);
}

I can do this with confidence, knowing that my UI will never allow invalid data to ever reach the saving part. All the save buttons will be disabled and error messages will be alongside all the invalid data controls. And, if somehow we manage to get invalid data into the actual save action, the server will also do the validation before doing the actual save to the database. But more of the server later.

On the client

PostSharp will modify all my auto-properties and add the necessary checks into the setters. If any errors are found, the UI is informed. The UI then responds, shows the error messages and disables the save button. But, even with all of this, I still have to write the validators, and that means maintaining two separate pieces of logic when all my related things should be in one file. What we currently have is the Ingredient class and the IngredientValidator. PostSharp does the work of adding the checks and the UI does the messages, but I still need to manually create the validators.

Now, this is what I was really working on: the part that generates the validators. One attribute is far shorter to write than a rule, so I apply the appropriate combinations of attributes to each property. A T4 Template then reads the attributes off each model, or in this case the DTO, and generates the equivalent Validator.

So I have 3 things now:

  1. The Model/DTO that I write with my properties and their attributes
  2. The Validators that are generated from reading the Model's attributes
  3. The PostSharped assembly with the injected validation checks

This is all very exciting, as I only write the one part, the Model, and then all the bits and pieces are put together to create the equivalent of the big and ugly piece of code; one property and one attribute produce almost everything I need (at least on the client).
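To make the generation step concrete, here is a toy stand-in for what the T4 Template does: reflect over a model's properties and emit the equivalent validator rules as source text. The attribute definition and the emitted rule syntax are assumptions for illustration:

```csharp
using System;
using System.Reflection;
using System.Text;

// A hypothetical validation attribute of the kind applied to the model.
[AttributeUsage(AttributeTargets.Property)]
public class MaximumLengthAttribute : Attribute
{
    public int Length { get; }
    public MaximumLengthAttribute(int length) { Length = length; }
}

public class Ingredient
{
    [MaximumLength(100)] public string Name { get; set; }
    [MaximumLength(50)]  public string SupplierName { get; set; }
}

public static class ValidatorGenerator
{
    // Reads the attributes off a model type and emits the equivalent
    // validator rules as source code, one line per attributed property.
    public static string GenerateRules(Type modelType)
    {
        var sb = new StringBuilder();
        foreach (var prop in modelType.GetProperties())
        {
            var attr = prop.GetCustomAttribute<MaximumLengthAttribute>();
            if (attr != null)
                sb.AppendLine(
                    $"RuleFor(x => x.{prop.Name}).MaximumLength({attr.Length});");
        }
        return sb.ToString();
    }
}
```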

The server

Now, as with all client-server systems, data goes across the wire. It first gets downloaded for the user to edit, and then the changes are uploaded to the server. Putting the error messages only on the server is useless, as the user will never see them, and relying on validation only on the client is unwise; as pirated software proves, anything running on the client can be tampered with. Never trust the user. So we need validation on the client, for the error messages, and also on the server, to make sure the data really is valid.

This now brings in a problem of duplication. The models on the client are DTOs, a small subset of the domain model, and they need the same validation because they are used by the UI. As the DTOs on the client are not the same as the models on the server, we can't reuse the code; we would have to re-write it in the form the client needs. The way I chose to solve this problem is by copying the rules. We could do it the traditional copy-and-paste way, but that is practically asking for disaster: developers will, at some point, forget to update either the Model or the DTO, or something even worse will happen, such as validation being added only to the client and not the server. This is where the T4 Template is very helpful. It can read the validation rules off one model and merge them with the ones on the model that we are actually creating the validator for.

For example, we have:

  • one Model, say Ingredient, and
  • two DTOs, IngredientNameDto and IngredientSupplierDto.
  • The Ingredient Model has, among others, two properties: Name and SupplierName.
  • And the DTOs have a Name and a SupplierName property respectively.

We want to add the validation to only one model, Ingredient, and then have validators generated for all three objects. The way I achieved this was to add a single attribute to each DTO specifying which Model type to copy the validation rules from, in this case Ingredient. This approach does almost everything for us. And just to show what we do in code (a super-simplified model):

// Domain Model on the server
public class Ingredient
{
    [MaximumLength(100)]
    public string Name { get; set; }

    [MaximumLength(50)]
    public string SupplierName { get; set; }

    // other properties here ...
} 

// shared across the client and server
[CopyValidation("BrandDirector.Models.Ingredient")]
public class IngredientNameDto
{
    public string Name { get; set; }

    // other properties here ...
}

[CopyValidation("BrandDirector.Models.Ingredient")]
public class IngredientSupplierDto
{
    public string SupplierName { get; set; }

    // other properties here ...
}
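A sketch of how the template could resolve CopyValidation: look up the referenced model type, then copy rules only for the properties that also exist, by name, on the DTO. To keep the sketch self-contained, the attribute here takes a Type rather than the string name used above, and all the names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class MaximumLengthAttribute : Attribute
{
    public int Length { get; }
    public MaximumLengthAttribute(int length) { Length = length; }
}

// Simplified stand-in for the [CopyValidation("...")] attribute above.
[AttributeUsage(AttributeTargets.Class)]
public class CopyValidationAttribute : Attribute
{
    public Type ModelType { get; }
    public CopyValidationAttribute(Type modelType) { ModelType = modelType; }
}

public class Ingredient
{
    [MaximumLength(100)] public string Name { get; set; }
    [MaximumLength(50)]  public string SupplierName { get; set; }
}

[CopyValidation(typeof(Ingredient))]
public class IngredientNameDto
{
    public string Name { get; set; }
}

public static class RuleCopier
{
    // Returns the model properties whose rules should be copied onto the DTO:
    // those that exist, by name, on both the model and the DTO.
    public static IEnumerable<PropertyInfo> PropertiesToCopy(Type dtoType)
    {
        var attr = dtoType.GetCustomAttribute<CopyValidationAttribute>();
        if (attr == null) yield break;

        var dtoNames = dtoType.GetProperties().Select(p => p.Name).ToHashSet();
        foreach (var modelProp in attr.ModelType.GetProperties())
            if (dtoNames.Contains(modelProp.Name))
                yield return modelProp;
    }
}
```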

What I haven't said yet is that the Validators live in a different assembly to the Models. This is because the T4 Template reads the compiled assembly in order to generate the Validators, so the order of actions is really: write model, compile model, generate validators. As the model is compiled before the validators even exist, we can't reference the validators directly from the model. Instead, we have a Registry of all the validators available to the client or server: we register the validator assembly when the app starts up and look up the validators when we need them. Here is an example of what PostSharp does for us:

// Model
public class Ingredient
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            var validators = ValidatorRegistry.FindValidatorsFor<Ingredient>();
            var results = validators.SelectMany(v => v.Validate(this));
            // Do what needs to be done with the result
        }
    }
}

Depending on whether it runs on the server or the client, the appropriate action is taken. On the server, we throw an exception if the results are invalid; this completely prevents invalid data from reaching the model itself, all processing on the server stops, and the exception is sent back to the client and handled there. On the client, we just add an error message to the list of errors displayed onscreen. Because both the app and the server know where their validators are, we can register them at startup:

public void OnAppStartup()
{
    ValidatorRegistry.RegisterValidators(typeof(Ingredient).Assembly);
}
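The registry itself isn't shown in the post; a minimal reflection-based sketch, with an interface shape that is my assumption, could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical contract for a generated validator.
public interface IValidator<T>
{
    List<string> Validate(T instance);
}

public static class ValidatorRegistry
{
    private static readonly List<object> validators = new List<object>();

    // Scan an assembly for concrete validator types and instantiate them.
    public static void RegisterValidators(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            var implementsValidator = type.IsClass && !type.IsAbstract &&
                type.GetInterfaces().Any(i => i.IsGenericType &&
                    i.GetGenericTypeDefinition() == typeof(IValidator<>));
            if (implementsValidator)
                validators.Add(Activator.CreateInstance(type));
        }
    }

    // Look up the registered validators for a given model type.
    public static IEnumerable<IValidator<T>> FindValidatorsFor<T>() =>
        validators.OfType<IValidator<T>>();
}

// Example model and validator so the registry has something to find.
public class Ingredient
{
    public string Name { get; set; }
}

public class IngredientValidator : IValidator<Ingredient>
{
    public List<string> Validate(Ingredient ingredient)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(ingredient.Name))
            errors.Add("Name is required.");
        return errors;
    }
}
```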

So, by utilizing existing frameworks, we can reduce the amount of code that we as developers have to write. That leaves more time for writing the really cool bits of code instead of repeating the same thing over and over.