Tasks: ContinueWith on UI Thread

Sometimes we need to perform a task in the background, but then we want to update the UI when it is completed.

We can make use of a Task and the ContinueWith method to chain tasks together. To ensure that a task runs on a specific thread, we specify which TaskScheduler to use. To get the UI thread's scheduler, we call TaskScheduler.FromCurrentSynchronizationContext() while we are on the UI thread:

If we are going to run a background task:

var uiThread = TaskScheduler.FromCurrentSynchronizationContext();
Task.Run(() => {
    // do some work on a background thread
}).ContinueWith(task => {
    // do some work on the UI thread
}, uiThread);

If we need to carry a result from the previous background task:

var uiThread = TaskScheduler.FromCurrentSynchronizationContext();
Task.Run(() => {
    // do some work on a background thread

    // return a value for the next task
    return true;
}).ContinueWith(task => {
    // do some work on the UI thread

    if (task.Result) {
        // make use of the result from the previous task
    }
}, uiThread);
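One caveat worth noting: the continuation runs even when the background task throws, and touching task.Result at that point rethrows the exception. A sketch of guarding against this (LoadData, ShowError and UpdateUi are hypothetical helpers, not part of any framework):

```csharp
var uiThread = TaskScheduler.FromCurrentSynchronizationContext();
Task.Run(() => {
    // background work that may throw
    return LoadData();
}).ContinueWith(task => {
    if (task.IsFaulted) {
        // task.Exception is an AggregateException wrapping the real error
        ShowError(task.Exception.InnerException);
        return;
    }
    // safe to use the result on the UI thread
    UpdateUi(task.Result);
}, uiThread);
```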

MonoGame Content without Visual Studio

MonoGame is free software used by game developers to create games for many different platforms. It comes close to being a true "Write Once, Play Anywhere" framework!

Unfortunately, the content processing pipeline is not yet available for all platforms, or even for the later versions of Visual Studio. Here I will show you a way to build the content on any version of Windows, without Visual Studio.

Platforms

Currently supported platforms for MonoGame:

  • iOS (including Retina displays)
  • Android
  • Windows (OpenGL & DirectX)
  • Mac OS X
  • Linux
  • Windows Store Apps (for Windows 8 and Windows RT)
  • Windows Phone 8
  • PlayStation Mobile (currently 2D only)
  • OUYA, an Android-based gaming console

Currently supported platforms for XNA:

  • Windows Phone 7
  • Xbox 360
  • Microsoft Windows

Support for Xbox One is currently under way with both Microsoft and the MonoGame teams. Microsoft is adding .NET support to the Xbox One and the MonoGame team is adding Xbox One support to MonoGame.

Content Processors

When creating games using MonoGame, there are two main parts to any game: the Content and the Code. The content is usually the textures, sounds and fonts in the game. The code is what you write: the logic.

At the current time MonoGame does not have its own content processors, so we will make use of the original XNA build tools. The MonoGame team is working on its own tools, but they are not yet complete.

In order to process the content, we need two things: the processor tools and some sort of UI.

Installing the Content Pipeline

We will start off by setting up our content tools before we do anything else. First we need the assemblies that come with XNA Game Studio; this is the toolset used for building the content that will appear in our game. The studio itself will not install without Visual Studio 2010, so we have to cheat a bit.

First of all, we need to download XNA Game Studio 4.0 Refresh from Microsoft’s Download Center. Once this is complete, we will load the framework installers out of the studio setup file:

  1. Using 7-zip (or any other compression tool), open the newly downloaded XNAGS40_setup.exe.
  2. Inside the installer there should be a redists.msi file; open it using “Open Inside” as we don’t want it to start installing.
  3. Extract the files named SharedFilesInstaller_File, XNAFXRedist40Setup_File and XNAPlatformToolsInstaller_File into a directory.
  4. Rename the three extracted files by adding a .msi extension in Windows Explorer; this “turns” them into installers.
  5. Install each of them one at a time.

Once this is done, we will have installed all the build tools required to package the content.

Installing the Interface

Next, we need to install the XNA Content Compiler. This allows the building of the content packages when not using Visual Studio 2010.

You can do this by downloading the XNA 4.0 Content Compiler source code from my fork. I have added some extra features that allow for more advanced content processing, such as compression and mipmap generation.

Once you have this, you should be able to open the solution in Visual Studio and build the application. Currently the compiler can only be used on Windows, as the tooling is only available there.

Systems & Validation

At Jam Warehouse we had an interesting problem: Data Validation. This in itself is not all that interesting, but if you think about it, what does this really mean? According to Wikipedia:

data validation is the process of ensuring that a program operates on clean, correct and useful data…

And in the case of BrandDirector, not only do we prevent the user from saving invalid data in the database, but we also ensure that we don’t do any processing with invalid data.

Ah, now we know what to do! Just put lots of checks in the system. Checks like:

  • 'if number > 100 or < 0, then show an error'
  • 'if text is not an email address, then show an error'

But, what if the business model consists of hundreds of objects? This is going to get very cluttered and very difficult to maintain. So obviously we need some sort of structure in our code that allows us to create our models without clogging up the code files with numerous bits of check logic.

But before we start to create that awesome code, we need to know what we are working with. BrandDirector is a large client-server system with many domain objects. It is a web-based system that effectively allows many users to input data, which then gets saved on the server. (This is of course a gross oversimplification of what BrandDirector actually does, but that is not the problem.) I need to ensure that the data from the user, before processing and/or saving, is both correct and clean.

Data validation, from a user's perspective, ensures that any data/information from the system is both reliable and useful for business. But I am not a user; I am the one doing the implementing. What I need is a way of adding validation to all those objects without messing up my immaculate code (at least, that's what I think it is). Obviously, we want to tell the user when the data is invalid and instruct them on how to correct it. So, what we are working to provide is: clean code, clean data and error messages.

What we want

Now, we are a C# shop and we like to use all those great features of the C# language, such as attributes and auto-properties. For those who aren't C# fans like me, here is a sample of what I want to write:

// Model
public class Ingredient
{
    [MaximumLength(50)] // <- attribute validation rule
    public string Name { get; set; } // <- auto property
}
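The [MaximumLength] attribute here is presumably a custom marker that the tooling reads later; a minimal sketch of how such an attribute might be declared (the Length property name is my own assumption):

```csharp
using System;

// Hypothetical sketch of a simple validation marker attribute
// like the one used on Ingredient.Name above.
[AttributeUsage(AttributeTargets.Property)]
public class MaximumLengthAttribute : Attribute
{
    // the maximum number of characters the property may hold
    public int Length { get; private set; }

    public MaximumLengthAttribute(int length)
    {
        Length = length;
    }
}
```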

This is what I would have had to write for each model property, were it not for the frameworks we will see soon:

// Model
public class Ingredient
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            if (name != null && name.Length > 50)
                AddError("Name", "Name must be 50 characters or less");
        }
    }
}

As you can see, it is far more code, and ugly too. The first version is neat and does almost exactly the same thing. It allows the user to set the value, and the UI will then display the error message if need be. I say 'almost' because the first does no checking (at least not yet). How are we going to get that checking into the first class? Well, there are some great frameworks out there that do the checking without requiring us to change our code at all.

Solving the problem

The two frameworks we need are FluentValidation and PostSharp. FluentValidation is a framework that allows us to create rules for a particular type of object and then provides a means to validate an object against those rules. That mechanism is called a 'Validator':

// IngredientValidator validates an object of type 'Ingredient'
public class IngredientValidator : AbstractValidator<Ingredient>
{
    public IngredientValidator()
    {
        // This neatly allows us to create a rule for 'Name'.
        RuleFor(x => x.Name).MaximumLength(50);
    }
}

Using the validator allows us to write a very neat and easy-to-read section of code:

public void SendIngredientToServer(Ingredient myIngredient)
{
    var validator = new IngredientValidator();
    var result = validator.Validate(myIngredient);

    if (result.IsValid)
        SendIngredient(myIngredient);
    else
        ShowErrorMessages(result.Errors);
}

But I don't want to have to do even this small check every time I press the save button. I want the check to run every time a property changes, as well as when I press save. And what I really want is for the save button to be disabled while the data is invalid. This is where PostSharp is really useful. It allows us to modify the compiled assembly, inserting all the checks into each property for us. (We create 'Aspects' that let us write the boilerplate code once; it is then applied to each property.) This causes the validator to run every time a property's value changes. Now all we have to do is this:

public void SendIngredientToServer(Ingredient myIngredient)
{
    SendIngredient(myIngredient);
}

I can do this with confidence, knowing that my UI will never allow invalid data to ever reach the saving part. All the save buttons will be disabled and error messages will be alongside all the invalid data controls. And, if somehow we manage to get invalid data into the actual save action, the server will also do the validation before doing the actual save to the database. But more on the server later.

On the client

PostSharp will modify all my auto-properties and add the necessary checks into the setters. If any errors are found, the UI is informed; it then shows the error messages and disables the save button. But even with all of this, I still have to write the validators by hand, which means maintaining two separate pieces of logic, and I want all related code in one file. What we currently have is the Ingredient class and the IngredientValidator. PostSharp does the work of adding the checks and the UI does the messages, but I still need to manually create the validators.

Now, this is what I was really working on: The part that generates the validators. One attribute is far shorter than writing a rule. Using this reasoning, I then apply all the combinations of attributes to the appropriate properties. A T4 Template is now used to read the attributes off each model, or in this case the DTO, and then generate the equivalent Validator.

So I have 3 things now:

  1. The Model/DTO that I write with my properties and their attributes
  2. The Validators that are generated from reading the Model attributes
  3. The PostSharped assembly with the injected validation checks

This is all very exciting, as I only have written the one part, the Model. And then all the bits and pieces are put together to create the equivalent of the big and ugly piece of code; one property and one attribute produces almost everything (at least on the client) I need.

The server

Now, as with all client-server systems, data goes across the wire. It first gets downloaded for the user to edit, and then the changes are uploaded to the server. It is useless to put the error messages only on the server, as the user will never see them, and it is unwise to rely on validation only on the client, as the existence of pirated software demonstrates. Never trust the user. So we reach the conclusion that we need validation on the client, for those error messages, and also on the server, just to make sure that the data is in fact valid.

This now brings in a problem of duplication. The models on the client are DTOs, a small subset of the domain model. They have the same need for validation, as they are used by the UI. As the DTOs on the client are not the same as the models on the server, we can't reuse the code; we would have to re-write it in the way the client needs it. The way I chose to solve this problem is by copying the rules. We could do it the traditional way, copy-and-paste, but that is practically asking for disaster. Developers will, at some point in time, forget to update either the Model or the DTO. Or something even worse will happen, such as validation being added only on the client and not the server. This is where the T4 Template is very helpful: it can read the validation rules off one model and merge them with the ones on the model that we are actually creating the validator for.

For example, we have:

  • one Model, say Ingredient, and
  • two DTOs, IngredientNameDto and IngredientSupplierDto.
  • The Ingredient Model has, among others, two properties: Name and SupplierName.
  • And the Dtos have a property Name and SupplierName respectively.

We want to add the validation to only one model, Ingredient, and then have the validators generated for all three objects. The way I achieved this was to add a single attribute to the DTOs that specified which type of Model to get the validation rules from, in this case Ingredient. Using this way of providing validation almost does everything for us. And just to show what we do in code (a super simplified model):

// Domain Model on the server
public class Ingredient
{
    [MaximumLength(100)]
    public string Name { get; set; }

    [MaximumLength(50)]
    public string SupplierName { get; set; }

    // other properties here ...
} 

// shared across the client and server
[CopyValidation("BrandDirector.Models.Ingredient")]
public class IngredientNameDto
{
    public string Name { get; set; }

    // other properties here ...
}

[CopyValidation("BrandDirector.Models.Ingredient")]
public class IngredientSupplierDto
{
    public string SupplierName { get; set; }

    // other properties here ...
}
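To illustrate, the validators that the T4 template might generate for the two DTOs above could look something like this (the class names and exact shape are my guess at the generated output, not the actual BrandDirector code):

```csharp
using FluentValidation;

// Sketch of a generated validator: the rule is copied from
// Ingredient.Name's [MaximumLength(100)] attribute.
public class IngredientNameDtoValidator : AbstractValidator<IngredientNameDto>
{
    public IngredientNameDtoValidator()
    {
        RuleFor(x => x.Name).MaximumLength(100);
    }
}

// Sketch of a generated validator: the rule is copied from
// Ingredient.SupplierName's [MaximumLength(50)] attribute.
public class IngredientSupplierDtoValidator : AbstractValidator<IngredientSupplierDto>
{
    public IngredientSupplierDtoValidator()
    {
        RuleFor(x => x.SupplierName).MaximumLength(50);
    }
}
```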

What I haven't said yet is that the Validators are in a different assembly to the Models. This is because the T4 Template reads the compiled assembly in order to generate the Validators. So the order of actions is really: write model, compile model, generate validators. As you can probably see, the model is compiled before the validators are actually created, so we can't reference the validators directly from the model. What we have instead is a Registry of all the validators available to the client or server. We register the validator assembly when the app starts up and then find the right validator when we need it. Here is an example of what PostSharp does for us:

// Model
public class Ingredient
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            var validators = ValidatorRegistry.FindValidatorsFor<Ingredient>();
            var results = validators.SelectMany(v => v.Validate(this));
            // Do what needs to be done with the result
        }
    }
}

Depending on whether we are on the server or the client, the appropriate action is taken. On the server, we throw an exception if the results are invalid. This completely prevents any invalid data from reaching the model itself. The exception is then sent back to the client and handled there; all processing on the server stops. On the client, we just add an error message to the list of errors displayed onscreen. Because the app and the server both know where their validators are, we can register them when the server or app starts:

public void OnAppStartup()
{
    ValidatorRegistry.RegisterValidators(typeof(Ingredient).Assembly);
}
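For illustration, a minimal sketch of what such a ValidatorRegistry could look like, built on FluentValidation's IValidator interfaces (this is my own sketch of the idea, not the actual BrandDirector implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using FluentValidation;

// Hypothetical sketch of the validator registry described above.
public static class ValidatorRegistry
{
    private static readonly List<IValidator> Validators = new List<IValidator>();

    public static void RegisterValidators(Assembly assembly)
    {
        // find every concrete validator type in the assembly and instantiate it
        var types = assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract
                        && typeof(IValidator).IsAssignableFrom(t));
        foreach (var type in types)
            Validators.Add((IValidator)Activator.CreateInstance(type));
    }

    public static IEnumerable<IValidator<T>> FindValidatorsFor<T>()
    {
        // return only the validators that can validate a T
        return Validators.OfType<IValidator<T>>();
    }
}
```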

So, by utilizing existing frameworks, we can reduce the amount of code that we as developers write. This enables the developers to spend more time writing the really cool bits of code and not repetitively doing the same thing.

Trying to Blog… and an update

It has been a very long time since I started (or attempted to start) blogging. I hope to really get down to doing this again. What might help is the fact that my company is re-working their website and is asking the developers to start posting an article every couple of months.

Anyway, I have been very busy with my studies, work and other small projects on my GitHub account (github.com/mattleibow). I have several repos, but my cooler ones are the Mono.Data.Sqlite.Orm and the Android.Play.ExpansionLibrary.
The Mono.Data.Sqlite.Orm repo is a small and light-weight O/RM for the Sqlite library on the various platforms.

The initial idea was from Frank A. Krueger and his sqlite-net, but I re-wrote almost the entire library using the Mono.Data.Sqlite assembly instead of his custom types. I also had to port the Mono.Data.Sqlite assembly from the Mono repository to use the C#-SQLite library instead of the sqlite.dll native library.

I also want to create some more docs for this library, so I just set up the GitHub Pages for this repo: http://mattleibow.github.com/Mono.Data.Sqlite.Orm. I will see if I can use this.

The Android.Play.ExpansionLibrary repo is the C# equivalent of the Google-provided Java LVL and Expansion libraries. This was, and to an extent still is, an almost direct translation from Java to C#, but I think it has a few improvements, such as a nicer storage system.

I also got my first Android App onto Google Play:
https://play.google.com/store/apps/details?id=com.jamwarehouse.apps.puppy.trial
This was very exciting and I am very glad that I got this opportunity.

Now I am going to try and edit that GitHub Pages for my repo… But before that, I am going to add some images so that when I select the other layouts, I will get a picture 🙂

Cross-Platform Mobile Apps

One of my current projects at Jam Warehouse is to port the app You & Your Puppy from the iPhone/iPad to Android devices (smartphones and tablets). Over the month of November I completed the first version, which is now undergoing review at our client, DogsTrust.

The App

It is my first ever Android app, as well as my first ever mobile application. The current version being tested is written in Java; it is my first Java application as well. (Lots of firsts 🙂) I then re-created the app in C# using MonoDroid, though this has only been tested in the emulator as I haven’t bought the MonoDroid licence yet. I am currently porting the app to Windows Phone as well. I opted to use MonoDroid/MonoTouch instead of the native languages for several reasons:

  • I am a total .NET addict – as you can see by the title of this blog 🙂
  • Code re-usability – Write once, test once, deploy n times, get n money!
  • Automatic memory management – who wants to dispose objects manually anyway?
  • Make use of advanced C# features and short-cuts – I can churn out some serious code…
  • And who wants to learn Objective-C or use Eclipse – C# and Visual Studio for me!

The Code

Code re-usability is really cool if you are having to write apps for 4 or more different platforms:
  1. Java for Android
  2. Objective-C for iPhone
  3. C# for Windows Phone
  4. Java for Blackberry
  5. HTML 5 for everything else
We could just use PhoneGap or something similar for all the devices, but why make it too easy? Let’s go native! C# for Android/iPhone/Windows Phone and HTML5 for all the other players in the game 🙂
We all know (or at least I hope we do) that the iPhone has a “slightly” different layout to the Android phone. So, how do I get this “write once” thing in the list? Well, I didn’t tell the whole story, as you can’t really 🙂

The Database

I do have an element of “write once”, but not for the entire app, just the non-visual part. As all the versions are going to read from a database and show images/videos, we can use the same code for that part.
But there’s a catch, as always: what database do we use? We could use XML files for the data, but who wants to? We are advanced guys here; we use a database when a database is “needed”, and now we have to expand this app. As the original database was SQLite, I used that. No complaints there, as it is one of the best (if not the best) for mobile development. But now there’s another catch, this time with Microsoft: Windows Phones don’t have SQLite, and you can’t put the C++ version on the device either. So what do we do? Do we use a different database especially for this? No! We use the C# version called System.Data.Sqlite – much easier than maintaining two databases 🙂 – Don’t we all love this thing called C#?
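As a rough sketch, reading from the database through the ADO.NET-style SQLite classes looks something like this (the database file and table names are made up for illustration):

```csharp
using System;
using Mono.Data.Sqlite; // System.Data.SQLite offers the same shape of API

class Program
{
    static void Main()
    {
        // open the app's SQLite database file (hypothetical name)
        using (var connection = new SqliteConnection("Data Source=puppy.db"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // plain SQL against a hypothetical Pages table
                command.CommandText = "SELECT Title FROM Pages";
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}
```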

The Data

OK, so we have the database. Now we need to read the data. We could of course write SQL queries and use the SqliteCommand classes, but who wants to use strings in a strongly typed language? Not me. After a little bit of research I found SQLite-NET, a very small ORM for the iPhone. I was saved from strings! It also works on Android phones! But maybe you noticed that I didn’t say anything about the Windows Phone. That’s right, it doesn’t work! Yup, Microsoft is making things hard for me, but I will not yield, not even to the global monopolistic software company that churns in the billions hourly! No! I re-wrote the SQLite-NET library (or just tweaked it slightly). SQLite-NET was originally using the [DllImport] attributes (P/Invoke) to gain access to the sqlite3.dll stored somewhere on the file system of each device. But for those who don’t know: there is no P/Invoke on the Windows Phone.
I converted the ORM to use the managed SqliteCommand and SqliteConnection classes instead of the P/Invokes. I don’t actually know why the guys recreated those two classes when they could have just used the existing ones. Maybe there’s something I don’t know 🙂 I practically halved the code and reduced the number of possible bugs.
So now we are up and running with the database and a strongly typed ORM, and I can use LINQ to SQLite as well!
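To give an idea of the SQLite-NET-style API described above, here is a small hypothetical example (the Page class and database file name are mine, not from the actual app):

```csharp
using SQLite; // the SQLite-NET (sqlite-net) library

// A hypothetical table mapped to a C# class.
public class Page
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    public string Title { get; set; }
}

public static class Demo
{
    public static void Run()
    {
        // open (or create) the database file and ensure the table exists
        var db = new SQLiteConnection("content.db");
        db.CreateTable<Page>();
        db.Insert(new Page { Title = "Welcome" });

        // strongly typed, LINQ-style query instead of SQL strings
        var welcome = db.Table<Page>()
                        .Where(p => p.Title == "Welcome")
                        .FirstOrDefault();
    }
}
```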

The UI

Now we get to the basic functionality of the app: showing the information to the user! This is where the not-so-well-documented MonoCross library comes into play. (I’m hoping to buy the book Professional Cross-Platform Mobile Development in C#, which is supposed to show me the best way to use the framework.) All the navigation between screens is taken care of, and all the work of getting the data ready for display is done. Now, the user wants to click (or touch) on something; that’s why they bought the app in the first place 🙂. Here’s the “snag”: we want cross-platform, but we also want the app to look like it was specially designed for each device – although it wasn’t. So we recreate each GUI screen for each platform, but we keep platform-specific code out of the shared parts, and if we can’t, we abstract it!
In order to get the UI updates and things to happen, I subscribe to events that are raised on the Model, and then perform the necessary tasks to show each page to the user. I also catch all the events from the UI and perform the respective actions on the Model. For example, when the user clicks a link/button the command is sent to my middle-ground library. This library checks to see if the command is a navigation command. If it is, it then notifies the MonoCross framework that we want to navigate to wherever. If it is an action command, we inform the Model of the action we want to perform. The model is then updated, and the Model then fires an event that the UI will know that it has to change.
When we navigate, each screen has an abstracted helper class that receives the command as a parameter, which is then processed and used to inform the Model of the actions required (if any). When the actions are finished, the new screen is displayed.

The Diagram

Here is a basic diagram of the flow of the app from the time the user presses start and just how little code is actually needed for the platform specific areas.
As this diagram shows, most of the code is re-used.
And to prove it (mostly to myself), I was able to port the basic functionality of the Android version of the app to the Windows Phone in about 2-3 hours. There was almost no need for changes to the shared code. A bit of tweaking did improve it, but all the platforms – and future ones – will benefit.

Transparent WebView on Android Devices

I’m trying to re-write a Java Android app in Xamarin.Android; however, I have come across an issue with the background transparency of the WebView that I use to display the contents of each screen.

This code works correctly in the Java version (black text on a green background), but in the C# version the WebView’s background is black (a black rectangle on the green background).

Java Code:

    @Override public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        LinearLayout layout = new LinearLayout(getApplicationContext());
        layout.setBackgroundColor(Color.GREEN);
        WebView webView = new WebView(getApplicationContext());
        layout.addView(webView);
        setContentView(layout);

        webView.getSettings().setJavaScriptEnabled(true);
        webView.setBackgroundColor(Color.TRANSPARENT);
        String html = "<html><body style='background-color: transparent;'>Some text...</body></html>";
        webView.loadData(html, "text/html", "UTF-8");
    }

C# Code:

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        var layout = new LinearLayout(ApplicationContext);
        layout.SetBackgroundColor(Color.Green);
        var webView = new WebView(ApplicationContext);
        layout.AddView(webView);
        SetContentView(layout);

        webView.Settings.JavaScriptEnabled = true;
        webView.SetBackgroundColor(Android.Graphics.Color.Transparent);
        string html = "<html><body style='background-color: transparent;'>Some text...</body></html>";
        webView.LoadData(html, "text/html", "UTF-8");
    }

There is also an ‘Android.Resource.Color.Transparent’, but don’t use that one: values in the Android.Resource namespace are resource IDs, not colours.
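To make the distinction concrete:

```csharp
// Android.Resource.Color.Transparent is a resource ID (just an int),
// not an ARGB colour value, so passing it where a colour is expected
// produces an arbitrary colour rather than transparency.

// Use the colour struct instead:
webView.SetBackgroundColor(Android.Graphics.Color.Transparent);
```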