Wednesday, November 14, 2012

Code Refactoring - Improving speed without changing external behavior


As part of our Contract Management solution Contract Guardian, we developed a reporting tool that lets us enter complex search terms and fetch the contracts that satisfy them. Our Rippe & Kingston development team wanted this to be a tool we could use on any of our custom projects and products.  The tool was developed using ExtJS as the front end and C# ASHX pages to process the query. This sounds really simple, except when it is used in real situations that fetch many thousands of contracts. When this project was deployed to a client that had more than 5000 contracts, we got the dreaded IE error "A script on this page is causing Internet Explorer to run slowly".

On further research we determined that a script in our program was taking a long time to finish executing. This is a note from Microsoft for Internet Explorer: "Because some scripts may take an excessive amount of time to run, Internet Explorer prompts the user to decide whether they would like to continue running the slow script. Some tests and benchmarks may use scripts that take a long time to run and may want to increase the amount of time before the message box appears."

There were a couple of options to resolve this error. One option, provided in the Microsoft Knowledge Base, is to modify a registry value (MaxScriptStatements) on client machines so that Internet Explorer waits longer before showing the notification message. That option would introduce issues during mass deployment across large client installations.  Further, the same script could be slow in other browsers as well.

The other option was to go back to the drawing board and refactor the script that was causing the issue.  Refactoring is essentially restructuring the code of a program without changing its external behavior or results.

As noted, the front end was ExtJS. It consists of a Contract grid (an ExtJS GridPanel) with default columns. Users can choose the display fields that will appear as columns on the grid.   These display fields are saved in the cache engine behind our reporting tool.  The cache stores all the filters (Departments, Companies, Users, Contract Types) used to filter the contracts, along with the display fields that show up on the Contract grid.

Our script calls a corresponding ASHX page that processes the query, and the query in turn uses the cache to get the fields. The filters selected or entered by the user generate the JSON object returned to the script. The JSON object consists of the headers that will recreate the ExtJS GridPanel columns and the data that will go into the ExtJS store that populates the grid.
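
As a rough illustration, the query handler follows the usual ASHX pattern. The sketch below is not the actual Contract Guardian code - the names (QueryHandler, GetDisplayFields, RunQuery) and the trivial method bodies are placeholders:

using System.Web;
using System.Web.Script.Serialization; // System.Web.Extensions assembly

public class QueryHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Look up the display fields the user saved in the cache engine.
        string[] headers = GetDisplayFields(context);

        // Apply the cached filters and build one row per contract.
        object[][] rows = RunQuery(context, headers);

        // Return a single JSON object carrying both the grid headers
        // and the data for the ExtJS store.
        var payload = new { headers = headers, rows = rows };
        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(payload));
    }

    public bool IsReusable
    {
        get { return false; }
    }

    private string[] GetDisplayFields(HttpContext context)
    {
        // Placeholder: the real handler reads these from the cache engine.
        return new[] { "ContractName", "Department", "ExpirationDate" };
    }

    private object[][] RunQuery(HttpContext context, string[] headers)
    {
        // Placeholder: the real handler queries the contract database.
        return new object[0][];
    }
}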

Since the fields are not known until run time, the script was processing every header field, recreating the columns and filters, creating a store and paging toolbar (among many other things), and finally adding the data to the store in the GridPanel. This took a long time.
// Original approach: rebuild the entire grid every time a query returns


function populateContractGrid(data) {

   // Code omitted .....
        reconfigure_grid_test(newHeaders, newHeaderTypes, newData, dataStore);
   }

//pass the information to the next method from here
function reconfigure_grid_test(headers, types, data, dataStore) {
    // Code omitted .........
    reconfigure_grid(headerFields, headerTypes, dataFields, dataStore);
}
 
function reconfigure_grid(fields, types, data, dataStore) {
    // Code omitted...
    //loop will create grid columns and grid fields, adding them to the above arrays
            
        // Code omitted 
        columnArray.push(column);

       // Code omitted 
        fieldArray.push(field);
    }

    //build the column model that will be used in the contract grid
   
    // Code omitted
    var reader = new Ext.data.ArrayReader(
    {
        totalProperty: ''
    }, record);

    var memoryProxy = new Ext.ux.data.PagingMemoryProxy(data);

    var store3 = new Ext.data.ArrayStore(
    {
        remoteSort: true,
        reader: reader,
        fields: fieldArray,
        baseParams:
        {
            lightWeight: true,
            ext: 'js'
        },
        data: data,
        cm: columnModel,
        proxy: memoryProxy
    });

   // Create the filters for the grid
   // Code omitted
    pagingTBar.bindStore(store3, true);
    contractGrid.reconfigure(store3, columnModel);

    if (contractGrid.plugins) {
        // Empty the plugins array before re-initializing the plugins below
        while (contractGrid.plugins.length > 0) {
            contractGrid.plugins.pop();
        }
    }

    filterPlugin.init(contractGrid);
    summary.init(contractGrid);
    contractGrid.plugins.push(filterPlugin);
    contractGrid.plugins.push(summary);

   
    // Code omitted
}


Our solution was to recreate the GridPanel before fetching the JSON data from the Query_Handler ASHX page, and then to replace the store in the Ext.Grid.Panel with the store recreated from the JSON object returned by that page.

So, when the user selected the fields for the grid, the fields were saved in the cache by the DisplayFields_ASHX page, which returned a JSON object with empty data rows. Our script used this JSON object to run the code displayed above with empty data (instead of more than 5000 records).

Then, when the query was run, the store of the ExtJS GridPanel was simply replaced with the new store built from the returned JSON object, without recreating the grid. That change was dramatic, and the script ran much faster in IE. Instead of calling the original code, we called a new function, demonstrated below, where the store in the Contract grid is replaced with the data from the JSON object.
// New approach: replace the store's data instead of rebuilding the grid
function populateContractGridWithData(data) {
  // Code Omitted
    var newData = [];
    for (var i = 0; i < arrayRows.length; i++) {
        newData[i] = arrayRows[i].value;
    }
    var store4 = contractGrid.getStore();
    var memoryProxy = new Ext.ux.data.PagingMemoryProxy(newData);
    store4.proxy = memoryProxy;
    pagingTBar.bindStore(store4, true);

    store4.load(
    {
        params:
        {
            start: 0,
            limit: 100
        }
    });
}



This design provided a better user experience and also reduced the demand on system resources.

In summary, when any code runs slowly, one of the best options is to go back to the drawing board and refactor the code to improve speed without changing its external result or behavior.

For more information, check out our website.


Tuesday, November 13, 2012

Advanced Techniques with Database Migrations

While Entity Framework's database migrations will automatically pick up on structure changes, there are times when we want to do a little bit more. Consider the following model:


public class Attachment
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
    public DateTime ModifiedOn { get; set; }
    public string Path { get; set; }
    public long Size { get; set; }
    public string MimeType { get; set; }
}


We'll assume that this model has already been added to the database via a previous migration. Let's say that we'd like to give users the ability to change the file in a given attachment. More than that, let's say that instead of just keeping track of the current file, we would like to have a history of all files that a user has uploaded for versioning. We'll modify our model:

public class Attachment
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
    public DateTime ModifiedOn { get; set; }

    public virtual List<AttachmentFile> AttachmentFiles { get; set; }
}

public class AttachmentFile
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public string Path { get; set; }
    public long Size { get; set; }
    public string MimeType { get; set; }

    public Attachment Attachment { get; set; }
    public int AttachmentId { get; set; }
}


Instead of the Attachment model containing information about the file, it has a one-to-many relationship with AttachmentFiles. We will assume that the AttachmentFile with the latest date will be used as the "primary" attachment.
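
Under that convention, reading the "primary" file is a simple LINQ query. The helper below is hypothetical (it is not part of the original code), just an illustration of the rule:

using System.Linq;

public static class AttachmentExtensions
{
    // The "primary" file is simply the most recently created one.
    public static AttachmentFile PrimaryFile(this Attachment attachment)
    {
        return attachment.AttachmentFiles
            .OrderByDescending(f => f.CreatedOn)
            .FirstOrDefault();
    }
}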

Now we need to carry this change to our database. Using the package manager console, we run "add-migration AddAttachmentFiles", and we come up with this:

public partial class AddAttachmentFiles : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "AttachmentFiles",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    CreatedOn = c.DateTime(nullable: false),
                    Path = c.String(),
                    Size = c.Long(nullable: false),
                    MimeType = c.String(),
                    AttachmentId = c.Int(nullable: false)
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("Attachments", t => t.AttachmentId, cascadeDelete: true)
            .Index(t => t.AttachmentId);
        DropColumn("Attachments", "Path");
        DropColumn("Attachments", "Size");
        DropColumn("Attachments", "MimeType");
    }

    public override void Down()
    {
        AddColumn("Attachments", "MimeType", c => c.String());
        AddColumn("Attachments", "Size", c => c.Single(nullable: false));
        AddColumn("Attachments", "Path", c => c.String());
        DropIndex("AttachmentFiles", new[] { "AttachmentId" });
        DropForeignKey("AttachmentFiles", "AttachmentId", "Attachments");
        DropTable("AttachmentFiles");
    }
}


This looks acceptable. The migration will create the new table and remove the desired columns from the Attachments table. But what about the file data already in those columns? There is no way for Entity Framework to know that we want to do anything with the data in the columns we're deleting, so we have to do it manually. The DbMigration class has a Sql() method we can use to execute raw SQL against our database. If we modify our migration:

public partial class AddAttachmentFiles : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "AttachmentFiles",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    CreatedOn = c.DateTime(nullable: false),
                    Path = c.String(),
                    Size = c.Long(nullable: false),
                    MimeType = c.String(),
                    AttachmentId = c.Int(nullable: false)
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("Attachments", t => t.AttachmentId, cascadeDelete: true)
            .Index(t => t.AttachmentId);

        Sql("");

        DropColumn("Attachments", "Path");
        DropColumn("Attachments", "Size");
        DropColumn("Attachments", "MimeType");
    }

    public override void Down()
    {
        AddColumn("Attachments", "MimeType", c => c.String());
        AddColumn("Attachments", "Size", c => c.Single(nullable: false));
        AddColumn("Attachments", "Path", c => c.String());

        Sql("");

        DropIndex("AttachmentFiles", new[] { "AttachmentId" });
        DropForeignKey("AttachmentFiles", "AttachmentId", "Attachments");
        DropTable("AttachmentFiles");
    }
}


we can write SQL queries that will be executed in the flow of commands. The first query will move data from the to-be-deleted columns in the Attachments table to the AttachmentFiles table:

INSERT INTO AttachmentFiles (CreatedOn, Path, Size, MimeType, AttachmentId)
SELECT CreatedOn, Path, Size, MimeType, Id AS AttachmentId FROM Attachments


This query selects only the relevant fields from the Attachments table and inserts them into the AttachmentFiles table. Since there is exactly one set of file data per Attachment record, there will be exactly one AttachmentFile per Attachment. This means that the "primary" AttachmentFile for each Attachment will by default be the previous contents of the Attachments table, simply because it will be the only record.

The second query will move data from the AttachmentFiles table back into the re-created columns in the Attachments table:

UPDATE Attachments
SET Attachments.Path = af1.Path, Attachments.Size = af1.Size, 
    Attachments.MimeType = af1.MimeType
FROM Attachments
INNER JOIN AttachmentFiles af1 on af1.Id =
    (SELECT TOP 1 Id FROM AttachmentFiles af2
    WHERE af2.AttachmentId = Attachments.Id 
        ORDER BY af2.CreatedOn DESC)

This query is much more complicated, because we have many AttachmentFiles per Attachment, and we need to select only one per Attachment. We do this by using a subquery that selects the most recent AttachmentFile for a given AttachmentId.

Our final migration looks like this:

public partial class AddAttachmentFiles : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "AttachmentFiles",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    CreatedOn = c.DateTime(nullable: false),
                    Path = c.String(),
                    Size = c.Long(nullable: false),
                    MimeType = c.String(),
                    AttachmentId = c.Int(nullable: false)
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("Attachments", t => t.AttachmentId, cascadeDelete: true)
            .Index(t => t.AttachmentId);

        Sql("INSERT INTO AttachmentFiles " +
            "(CreatedOn, Path, Size, MimeType, AttachmentId) " +
            "SELECT CreatedOn, Path, Size, MimeType, Id AS AttachmentId " +
            "FROM Attachments");

        DropColumn("Attachments", "Path");
        DropColumn("Attachments", "Size");
        DropColumn("Attachments", "MimeType");
    }

    public override void Down()
    {
        AddColumn("Attachments", "MimeType", c => c.String());
        AddColumn("Attachments", "Size", c => c.Single(nullable: false));
        AddColumn("Attachments", "Path", c => c.String());

        Sql("UPDATE Attachments " +
            "SET Attachments.Path = af1.Path, Attachments.Size = af1.Size, " +
                "Attachments.MimeType = af1.MimeType " +
            "FROM Attachments " +
            "INNER JOIN AttachmentFiles af1 on af1.Id = " +
                "(SELECT TOP 1 Id FROM AttachmentFiles af2 " +
                "WHERE af2.AttachmentId = Attachments.Id " +
                    "ORDER BY af2.CreatedOn DESC)");"

        DropIndex("AttachmentFiles", new[] { "AttachmentId" });
        DropForeignKey("AttachmentFiles", "AttachmentId", "Attachments");
        DropTable("AttachmentFiles");
    }
}


For more information, check out our website.

Tuesday, October 30, 2012

Functions as objects in Javascript

In Javascript, functions are objects.

That statement probably comes off a little underwhelming. Let's look at an example.

function addTheNumbers(num1, num2) {
    return num1 + num2;
}

function subtractTheNumbers(num1, num2) {
    return num1 - num2;
}

var func; 

func = addTheNumbers;
var result1 = func(1, 2); // 3

func = subtractTheNumbers;
var result2 = func(1, 2); // -1

We defined two different functions, one for adding and one for subtracting; both take the same parameters. We then assign each function, in turn, to another variable and call it through that variable. You can point to any function just by using the name of that function as a variable.

For more information, check out our website.

Using a Windows Service as a Timer

BACKGROUND: Recently, I came across an issue with an MVC application I was developing. This application is composed of two sections: the Frontend and the Backend. The Frontend contains all the visuals (what the user sees) and requests information from the Backend. The Backend fetches all the data from the database and sends it to the Frontend.

ISSUE: The communication between the Frontend and the Backend was being lost randomly. I was never able to witness it. We only knew it happened because the Frontend would display, but no data would be present.

RESOLUTION: I am going to create a Windows service that runs on the machine hosting the Frontend project. This service will start a timer that ticks every x minutes (5 minutes for this application). On every tick, the Frontend will send a request to the Backend. If the request does NOT return "true" from the Backend, an error will be written to the log.

STEPS: (Using Visual Studio 2010 and .NET 4)

Step 1: Create a basic Windows Service Project and Setup Project
  • http://msdn.microsoft.com/en-us/library/aa984464(v=vs.71).aspx
  • You can ignore the OnContinue() action
  • Tip - For step 7 under the section 'To create the installers for your service', I chose to use Local System instead of Local Service. Local Service was NOT working for me; I would get a permissions error when I attempted to start the service, and it would not start. (A code-based way to choose the account is sketched below.)
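
If you prefer to set the account in code rather than in the designer, the generated installer class can do it in its constructor. This is just a sketch; serviceProcessInstaller1 is the default designer-generated name, and yours may differ:

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public partial class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        InitializeComponent();

        // Run the service under the Local System account instead of Local Service.
        this.serviceProcessInstaller1.Account = ServiceAccount.LocalSystem;
    }
}
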
Step 2: Install the service, Start the service and Verify that it is running by checking the log in the Event Viewer

Step 3: Uninstall the service

Step 4: Add the Timer to the Windows Service
  • Add private System.Timers.Timer _timer = new System.Timers.Timer(); to the top of the class
  • Call SetupTimer() from the OnStart function
private void SetupTimer(){
   int interval = 300000; //default to 5 minutes
   this._timer.Interval = interval;
   this._timer.Elapsed += new System.Timers.ElapsedEventHandler(TimerElapsed);
   this._timer.Start();
}
  • Create another function called TimerElapsed() in the same file
 
void TimerElapsed(object sender, System.Timers.ElapsedEventArgs e){
   eventLog1.WriteEntry("Communication check...");
   //Connect to a Backend function that returns true
   //(CheckBackend is a placeholder for that request)
   bool response = CheckBackend();
   if (response) {
      //Do nothing or write to log "success"
   }
   else {
      //EventLogEntryType.Error makes the entry get marked as an error in the Event Log
      eventLog1.WriteEntry("Communication to Backend Failed", EventLogEntryType.Error);
   }
}
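
Putting the pieces together, the service class ends up looking roughly like this. Service1 is the default class name from the walkthrough in Step 1, and the OnStop cleanup is my own addition (a reasonable practice, not part of the original steps):

using System.ServiceProcess;

public partial class Service1 : ServiceBase
{
    private System.Timers.Timer _timer = new System.Timers.Timer();

    protected override void OnStart(string[] args)
    {
        eventLog1.WriteEntry("Service started; communication checks scheduled.");
        SetupTimer();
    }

    protected override void OnStop()
    {
        // Stop and release the timer so no further ticks fire after shutdown.
        this._timer.Stop();
        this._timer.Dispose();
        eventLog1.WriteEntry("Service stopped.");
    }

    // SetupTimer() and TimerElapsed() are as defined above.
}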

Step 5: Build the Windows Service Project

Step 6: Build the Setup Project
  • Be sure to build these projects in the proper order!
Step 7: Install the service, Start the service and Verify that it is running by checking the log in the Event Viewer


For more information, visit our site: www.rippe.com/index.htm

Friday, October 26, 2012

Editor Templates 101

Editor templates are a great way to reduce duplicated code in your project. When you're writing an HTML form, a lot of your fields can be simple text boxes, but sometimes we want a bit more functionality. Instead of writing the same code over and over to customize our editors, we put all of that code in one location and refer to it from inside our view.

I'd like to note that there are many different ways to use editor templates -- I'm just going to focus on a couple of basic ways in this post.

Let's consider a view that contains a form:

@model MvcApplication.Models.Document

<h2>Create</h2>

@using (Html.BeginForm()) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Document</legend>

        <div class="editor-label">
            @Html.LabelFor(model => model.Name)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Name)
            @Html.ValidationMessageFor(model => model.Name)
        </div>

        <div class="editor-label">
            @Html.LabelFor(model => model.Author)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Author)
            @Html.ValidationMessageFor(model => model.Author)
        </div>

        <div class="editor-label">
            @Html.LabelFor(model => model.FileType)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.FileType)
            @Html.ValidationMessageFor(model => model.FileType)
        </div>

        <div class="editor-label">
            @Html.LabelFor(model => model.DateUploaded)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.DateUploaded)
            @Html.ValidationMessageFor(model => model.DateUploaded)
        </div>

        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

<div>
    @Html.ActionLink("Back to List", "Index")
</div>

This is a standard view made using the "Create" template. It has taken all of the fields of our Document model and has stubbed out editors for them. Here is the controller action that backs it up:

public ViewResult Create()
{
    return View();
}

Very simple. When the view is rendered to the browser, we see simple text boxes for each field.
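
For reference, the post doesn't show the Document model itself, but based on the fields the view binds to, it would look something like this (a sketch, not the exact class):

using System;

namespace MvcApplication.Models
{
    public class Document
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Author { get; set; }
        public string FileType { get; set; }
        public DateTime DateUploaded { get; set; }
    }
}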



But what if we want to do more than that? Let's say we want to be able to pick up our DateUploaded field with javascript so we can throw on a datepicker. Ok, how about this:

<div class="editor-label">
    @Html.LabelFor(model => model.DateUploaded)
</div>
<div class="editor-field">
    <input id="DateUploaded" name="DateUploaded" data-datepicker="true" value="@Model.DateUploaded" />
    @Html.ValidationMessageFor(model => model.DateUploaded)
</div>

Would this work? Yes, we can pick up on the data-datepicker attribute with javascript. But this is beyond dirty. We have just lost a lot of the functional advantages we get from using the EditorFor method on the HTML helper. Sure, this would work... but what if you change the name of the DateUploaded field? If we used EditorFor, we would see a big nasty error message when we tried to load the view. But using the raw HTML, it's still valid. The HTTP POST won't even fail. We just won't have our DateUploaded field populated. So, how do we accomplish this?

Let's take a look at the overloads for EditorFor.

EditorFor<TModel, TValue>(this HtmlHelper<TModel> helper, Expression<Func<TModel, TValue>> expression)
EditorFor<TModel, TValue>(this HtmlHelper<TModel> helper, Expression<Func<TModel, TValue>> expression, object additionalViewData)
EditorFor<TModel, TValue>(this HtmlHelper<TModel> helper, Expression<Func<TModel, TValue>> expression, string templateName)
...

(There are more but I cut them out for brevity). Look at the third one in the list. What is templateName? Specifying a value for templateName is the most straightforward way to use editor templates. When you provide a template name, MVC will look in a few different locations for a view with a name matching the value you provide as templateName. It's very similar to how calling View() in a controller action looks for a view with a name matching the action name. The view engine checks these locations in order:
  1. ~/Areas/[AreaName]/Views/[ControllerName]/EditorTemplates/[TemplateName].cshtml
  2. ~/Areas/[AreaName]/Views/Shared/EditorTemplates/[TemplateName].cshtml
  3. ~/Views/[ControllerName]/EditorTemplates/[TemplateName].cshtml
  4. ~/Views/Shared/EditorTemplates/[TemplateName].cshtml
So, let's modify our view code to look for an editor template named "DatePicker":

<div class="editor-label">
    @Html.LabelFor(model => model.DateUploaded)
</div>
<div class="editor-field">
    @Html.EditorFor(model => model.DateUploaded, "DatePicker")
    @Html.ValidationMessageFor(model => model.DateUploaded)
</div>

Perfect! Now we need to add a "DatePicker" partial view. But what do we put there? It's important to note that using this EditorFor overload is very similar to calling PartialView. Whatever editor template it finds, it will render that in place like a partial view. So, everything we put inside the DatePicker editor template is exactly what we get rendered. Let's try something simple to test.

@model DateTime?
@Html.TextBox("txtDatePicker")

We will save that as ~/Views/Shared/EditorTemplates/DatePicker.cshtml. Notice what we have declared as our model type. DateTime makes sense, but why nullable? Well, if we load our "Create" view with a null model (as is the case with our Create action), then it doesn't have a value for DateUploaded. We have to account for a null value in this case, even if the type on the model is not nullable. 

Let's take a look at the HTML this generates:

<div class="editor-field">
    <input id="DateUploaded_txtDatePicker" name="DateUploaded.txtDatePicker" type="text" value="">
    <span class="field-validation-valid" data-valmsg-for="DateUploaded" data-valmsg-replace="true"></span>
</div>

Notice how it changed the name and id attributes? When you provide a name for an element inside an editor template, what you're really doing is supplying the name of the property you're using inside the editor template. Thus, you get DateUploaded.txtDatePicker. If DateUploaded was an object that had its own properties, this would be perfect. But in our case we just want a textbox that refers to itself. We can accomplish that by passing in an empty string for the name.

@model DateTime?
@Html.TextBox("")

This may seem strange at first, but the truth is we don't actually need to supply a name for our textbox. That name is picked up from the name of the property specified on our Create view.

<div class="editor-field">
    <input data-val="true" data-val-required="The DateUploaded field is required." id="DateUploaded" name="DateUploaded" type="text" value="">
    <span class="field-validation-valid" data-valmsg-for="DateUploaded" data-valmsg-replace="true"></span>
</div>

This looks much better, and we've even picked up on the validation attributes. One thing we're not picking up on is the value of the field. You can't tell in the above HTML because we're using a null model, but even if we had a value, it would not get rendered because we're just spitting out a blank textbox right now. Let's fill in the value parameter by using the ViewData object:

@model DateTime?
@Html.TextBox("", ViewData.TemplateInfo.FormattedModelValue)

By this point, we now have a correctly functioning textbox that will be populated with a value from the model, and will submit the correctly named field. So it functions exactly as it did before we started with this editor template nonsense, minus a few CSS classes. Given this, is there even a point to making a custom editor template? If you stop now, then no. But let's tweak our editor template code just a bit further:

@model DateTime?
@Html.TextBox("", string.Format("{0:MM/dd/yyyy}", ViewData.TemplateInfo.FormattedModelValue), new { data_datepicker = true })

Now we've got something. Instead of just using the FormattedModelValue, we're calling string.Format to show the date portion of the DateTime. Since we don't care about the time for DateUploaded, we don't need to see it. We also have included an anonymous object, setting data_datepicker to true. For this overload of the TextBox method, the third parameter is an object that will be used to set attributes on the HTML element that gets generated. Here is the HTML output:

<div class="editor-field">
    <input data-datepicker="True" data-val="true" data-val-required="The DateUploaded field is required." id="DateUploaded" name="DateUploaded" type="text" value="">
    <span class="field-validation-valid" data-valmsg-for="DateUploaded" data-valmsg-replace="true"></span>
</div>

Notice the addition of the "data-datepicker" attribute. (aside: the view engine has converted our underscore to a hyphen -- this is because hyphens aren't allowed in identifiers in C#.) We can pick up on this in javascript and assign datepickers to our field:


$(function() {
    $("[data-datepicker]").datepicker();
});


As simple as that. If we add that javascript to our _Layout.cshtml file, it will show up on every single page. With our DatePicker editor template, we can now instantly attach a jQuery UI datepicker to an editor, simply by setting the template name parameter to "DatePicker".

For more information, check out our website.

Friday, October 19, 2012

Enhancing Your Entity Repositories

Using a repository pattern for data access gives you clear separation and abstraction in your data layer.  The basic idea of adding repositories for data access is illustrated nicely on MSDN:



In the case of an MVC application, your controllers fit into the "Client Business Logic" area, and your Database resides in the "Data Source" area.  What's left is the repository itself; here is a typical example, generated using the MVC Scaffolding project.


namespace DataModel.Models
{
    public class PartyRepository : IPartyRepository
    {
        // _context (the EF data context) is initialized elsewhere; omitted for brevity
        public IQueryable<Party> All
        {
            get { return _context.Parties; }
        }
        public IQueryable<Party> AllIncluding(params Expression<Func<Party, object>>[] includeProperties)
        {
            IQueryable<Party> query = _context.Parties;
            foreach (var includeProperty in includeProperties) {
                query = query.Include(includeProperty);
            }
            return query;
        }
        public Party Find(int id){ return _context.Parties.Find(id);}
       
  //Methods removed for brevity
    }
    public interface IPartyRepository : IDisposable
    {
        IQueryable<Party> All { get; }
        IQueryable<Party> AllIncluding(params Expression<Func<Party, object>>[] includeProperties);
        Party Find(int id);
        //Methods removed for brevity
    }
}



The repository itself abstracts the dirty work of dealing with the context, and provides a great element of re-usability in your application (you can inject a repository anywhere you like and it will use the same context code).  Here are some sample use cases for some of the above methods:


_repository.Find(id); //simple lookup
_repository.All; //get everything
_repository.AllIncluding(model => model.Property1, model => model.Property2); //Get all and include some navigation properties
_repository.AllIncluding(<insert ALL navigation properties>); //Full eager fetch
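
For context, "inject a repository anywhere you like" typically means constructor injection. A hypothetical controller (not part of the scaffolded code above) would take the interface like this:

using System.Web.Mvc;
using DataModel.Models;

public class PartiesController : Controller
{
    private readonly IPartyRepository _repository;

    // The concrete PartyRepository is supplied by your IoC container.
    public PartiesController(IPartyRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        return View(_repository.Find(id));
    }
}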



Simple enough, but this structure raises some concerns:

  1. Specifying the navigation properties in your controller actions when using the repositories feels like a violation of your abstractions.  Dealing with relationships between data-driven objects should stick to the data layer as much as possible.
  2. There really isn't a good way to do a full eager fetch in this scenario.  You could provide all of the include properties, but that creates a maintainability issue: when you add a new navigation property, you have to change every location where you used the include options.

Our solution to this issue was a small and simple re-working of these methods, here is an example from another object:

namespace DataModel.Models
{
    public class DocumentRepository : IDocumentRepository
    {
        private readonly Expression<Func<Document, object>>[] _allIncludes =
            {
                d => d.Department,
                d => d.Organization,
                d => d.DocumentStatus,
                d => d.DocumentType,
                d => d.FavoriteUsers
            };
        // _context and the current accountId are initialized elsewhere; omitted for brevity
        public IQueryable<Document> All(params Expression<Func<Document, object>>[] includeProperties)
        {
            IQueryable<Document> query = _context.Documents;
            foreach (var includeProperty in includeProperties)
            {
                query = query.Include(includeProperty);
            }
            return query.Where(doc => doc.Organization.AccountId == accountId);
        }
        public IQueryable<Document> All(bool eager, params Expression<Func<Document, object>>[] includeProperties)
        {
            var includes = eager ? _allIncludes : includeProperties;
            return All(includes);
        }
        public Document Find(int id, params Expression<Func<Document, object>>[] includeProperties)
        {
            IQueryable<Document> query = _context.Documents;
            foreach (var includeProperty in includeProperties)
            {
                query = query.Include(includeProperty);
            }
            return query.SingleOrDefault(doc => doc.Id == id);
        }
        public Document Find(int id, bool eager, params Expression<Func<Document, object>>[] includeProperties)
        {
            var includes = eager ? _allIncludes : includeProperties;
            return Find(id, includes);
        }
  //Methods removed for brevity
    }
    public interface IDocumentRepository
    {
        IQueryable<Document> All(params Expression<Func<Document, object>>[] includeProperties);
        IQueryable<Document> All(bool eager = false, params Expression<Func<Document, object>>[] includeProperties);
        Document Find(int id, params Expression<Func<Document, object>>[] includeProperties);
        Document Find(int id, bool eager = false, params Expression<Func<Document, object>>[] includeProperties);
        //Methods removed for brevity
    }
}

We simply added an eager option and overloads for Find and All (converting All from a property to a method).  Here are the new use cases:

_repository.Find(id); //simple lookup
_repository.All(); //get everything - lazy loaded
_repository.All(model => model.Property1, model => model.Property2); //Get all and include some navigation properties
_repository.All(true); //Full eager fetch
_repository.All(eager: true); //The same call, in its more readable form

With this modification, controlling the type of fetch you want to do is much clearer, and if we add navigation properties to the model, we only need to update the _allIncludes field in the repository, not every place the repository is used for eager fetching.  We also preserved the ability to lazy load, as well as to specify exactly the properties you want during a fetch.

A side effect though, is we have some strange edge cases that result, for example:

_repository.All(eager:true, model => model.Property1);

In this case, the provided property is ignored and all properties are fetched.  We chose to lay the blame for this sort of issue on the developer as there are easier ways to use the methods to achieve the desired result, whatever that may be.
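
If you would rather fail fast than silently ignore the extra includes, a guard clause is one alternative. This is just a sketch of that option, not what we shipped:

public IQueryable<Document> All(bool eager, params Expression<Func<Document, object>>[] includeProperties)
{
    // Reject the ambiguous call instead of silently ignoring the explicit includes.
    if (eager && includeProperties.Length > 0)
        throw new ArgumentException("Specify either eager:true or an explicit include list, not both.");

    var includes = eager ? _allIncludes : includeProperties;
    return All(includes);
}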


All code examples taken from our next version of Contract Guardian.

For more information, check out our Web Site.


Tuesday, September 18, 2012

Common 2012 Fall Conference & Expo



This three-day Power Systems educational and networking event will be packed with over 100 educational sessions on a wide variety of topics, including vendor-led sessions and pre-conference workshops. As expected at all COMMON conferences, attendees will have the opportunity to further enhance their education through numerous networking events and meetings with leading solution providers in the tabletop-style expo.
Visit the COMMON Web site for more information.

Best of all, two of my favorite vendors will be exhibiting at the conference: LANSA & VAULT400.

Thursday, March 8, 2012

Microsoft Visual Studio 2010

Scott Guthrie's blog on Microsoft ASP.NET technologies, including the new MVC4 update, is excellent.  You may want to take a look at this blog because it offers so much information on Microsoft development technologies.  Scott lives in Seattle and builds products for Microsoft.  See Scott take you through the ASP.NET MVC4 Beta - just click here to view the video.  It contains a lot of useful information.

In summary, the presentation covers the new features in MVC4, which provide a rich set of enhancements to this tool set, accessible from within Microsoft Visual Studio 2010.  MVC4 comes built into VS2011 as well, and it installs side by side with MVC2 and MVC3 in the same Visual Studio installation on your development workstation.
  • Bundling/Minification Support - Improves performance on your website. 100% Automatic
  • Database Migrations - Allows production database schema to be automatically updated.
  • Web APIs - Great support in VS for creating Web APIs. Easily create HTTP Services.
  • Mobile Web -  Improved developer support for developing Mobile applications for the phone and tablet.
  • Real Time Communications - Allows for Client to Server Persistent connections over HTTP using SignalR.
  • Asynchronous Support - Reduces the number of threads and server resources; increases scalability.
If you need a web application developed for your business, just contact us by calling 513-977-4544 or click here to learn more.

Tuesday, February 21, 2012

LANSA 2012 User Conference



As many look forward to the release of LANSA 13.0, it is sometimes rewarding to reflect on the history of LANSA.  The early years were filled with concepts like:


  • Data Repository
  • 4th Generation Language
  • Multilingual
  • Templates
  • Database Triggers
  • You could recap all of the commands/parameters on a simple quick reference card.
  • Marketing was generally limited to print media.  (Remember printed trade rags?)
  • Presentations were given with slide projectors.
  • You actually had to fly somewhere to present.
Yes, LANSA and the industry have come a long way.

Fast forward to today, and we look forward to a conference that will feature:
  • Cloud Based Labs (Bring your laptop)
  • Web & Mobile App development
  • Business Objects
  • Some of the famous and familiar speakers in the LANSA community:
    • Diane Joester
    • Mark Duignan
    • Don Nelson
    • Madan Divaker
    • David Brault
    • and many others
  • Opportunity to network
If you have not registered or are just interested, Click Here.