C# Design Patterns – Evaluating Best Practices

asp.net-mvc c# design design-patterns separation-of-concerns

I have a colleague who has come up with a way of 'genericizing' information from a database so that all of his web application's drop-down lists can share the same object in his ASP.NET MVC C# code and Views, with the object containing different data depending on which tables it is being used against.

We work for a government agency, and our facilities are divided up into areas called "Regions", which contain further subdivided areas called VISNs, which in turn contain "Sites" or "Facilities". My colleague has developed a very complex security scheme whereby people can be granted access to data by Region, VISN, Site, or any mixture of the three. It's a nice scheme and very clever.

However, he has stored procedures that return lists of Regions, VISNs, and Sites based on a person's User Id, and they return "generic" values like TextFieldID, TextField, and TextParentID. My first problem with this is that, looking at this data coming out of the database, I would not know what the data is. I feel that fields coming from a query or stored procedure should be descriptive of the data they are delivering. What does everyone else think?

The deeper issue for me, however, is that he is taking some of the data and concatenating it in his stored procedure, like this:

SELECT DISTINCT
    t.VisnID,
    NULL,
    t.StationID,
    'V' + CAST(t.VisnID as varchar) + ': ' + t.Station3N + ': ' + t.StationName,
    t.Inactive
FROM Stations t 

and sending it back in a "TextField" property, instead of sending back the discrete data separately (Station3N, StationName) and concatenating it in the View, which would allow for different concatenations depending on which device was accessing the application (mobile versus desktop, perhaps). His justification is that he can send all of his various drop-down data and capture it all, regardless of content, in the same C# object named "LookupValue":

public partial class LookupValue : IEquatable<LookupValue>
{
    // Backing properties (assumed here for completeness; in the original they may
    // live in a generated partial class).
    public string TextParentID { get; set; }
    public string TextSelfParentID { get; set; }
    public string TextFieldID { get; set; }
    public string TextField { get; set; }
    public bool Inactive { get; set; }

    public LookupValue(string textFieldId, string textField, bool inactive)
    {
        TextFieldID = textFieldId;
        TextField = textField;
        Inactive = inactive;
    }

    public LookupValue(string textParentId, string textFieldId, string textField, bool inactive)
    {
        TextParentID = textParentId;
        TextFieldID = textFieldId;
        TextField = textField;
        Inactive = inactive;
    }

    public LookupValue(string textParentId, string textSelfParentId, string textFieldId, string textField, bool inactive)
    {
        TextParentID = textParentId;
        TextSelfParentID = textSelfParentId;
        TextFieldID = textFieldId;
        TextField = textField;
        Inactive = inactive;
    }

    /// <summary>
    /// Returns a custom string identifier if the Inactive property is true.
    /// </summary>
    public string GetInactiveString()
    {
        string value = "";
        if (Inactive)
        {
            value = "[I]";
        }

        return value;
    }

    /// <summary>
    /// Returns a custom text label that concatenates the TextField property with the custom string identifier of the Inactive property.
    /// </summary>
    public string GetDisplayNameWithInactiveString
    {
        get { return TextField + " " + GetInactiveString(); }
    }

    public bool Equals(LookupValue other)
    {
        //Check whether the compared object is null.  
        if (Object.ReferenceEquals(other, null)) return false;

        //Check whether the compared object references the same data.  
        if (Object.ReferenceEquals(this, other)) return true;

        //Check whether the objects' properties are equal.  
        return TextFieldID.Equals(other.TextFieldID) && TextField.Equals(other.TextField);
    }

    // If Equals() returns true for a pair of objects
    // then GetHashCode() must return the same value for these objects.
    public override int GetHashCode()
    {
        //Get hash code for the TextFieldID field if it is not null.  
        int hashTextFieldId = TextFieldID == null ? 0 : TextFieldID.GetHashCode();

        //Get hash code for the TextField field if it is not null.  
        int hashTextField = TextField == null ? 0 : TextField.GetHashCode();

        //Calculate the hash code for the product.  
        return hashTextFieldId ^ hashTextField;
    }
}

He believes the re-usability of this object is worth the violation of Separation of Concerns and the possible future difficulty of handling more than one display variation for a drop-down. I should point out that this object lives in his Data project namespace (our projects are separated into Data, Web, Services, etc.) rather than in the Web project namespace, and that he returns this object to the Web layer via his Repository queries, which call the stored procedures I described earlier. In my opinion, that is a gross violation of Separation of Concerns.

I am just looking for some confirmation from other programmers that this is in fact a bad practice, and also for ideas I can present to him on better ways to do what he is attempting. I have my own ideas, but again, I'm looking for others' ideas so that I have more options to present to him.

Edit: I forgot to mention that we are using Entity Framework as an ORM, and his Repository classes in his DAL are dumping the data from the stored procs into this LookupValue object.
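
For what it's worth, the sort of thing I would suggest instead is a small, descriptive model per lookup, with any formatting done in the Web layer. A rough sketch (the names here are mine, not his):

// A descriptive model returned from the repository - discrete fields, no formatting.
public class StationLookup
{
    public int VisnId { get; set; }
    public int StationId { get; set; }
    public string Station3N { get; set; }
    public string StationName { get; set; }
    public bool Inactive { get; set; }
}

// Formatting lives in the Web layer, so mobile and desktop views can differ.
public static class StationLookupDisplay
{
    public static string ToDesktopLabel(StationLookup s)
    {
        return string.Format("V{0}: {1}: {2}", s.VisnId, s.Station3N, s.StationName);
    }

    public static string ToMobileLabel(StationLookup s)
    {
        return string.Format("{0} ({1})", s.StationName, s.Station3N);
    }
}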

Best Answer

I'm going to start flat out by saying that after more than 30 years' experience writing software and working in IT in general, I have NEVER, EVER found a good reason for concatenating data coming out of a database.

You could be putting 90% of your application code in the DB infrastructure, and doing all sorts of magic things with it, and I still would not be persuaded that it's ever a good idea.

As has been hinted already, databases are designed for one specific task, and one specific task only: storing, searching, retrieving, and generally managing data.

The metadata, indexes, clusters and everything else a DBMS does well are there to ensure that it does its job as efficiently as possible, and if you try to circumvent that, you're going to land yourself in a whole heap of trouble.

Look at it this way: would you attempt to take your own C# code and compile it by hand to get better results than the compiler? Could you even get better results? I doubt it. By the same token, creating your own data scheme is reckless and irresponsible, IMHO.

That said, maybe your colleague has his reasons. There may, as others have said, be factors you're not privy to.

What he needs to consider is not only the immediate ramifications of his endeavour, but what it holds for the future.

Things like:

  • Future maintenance. Will he still be there in 10 years, or even 1 year for that matter? If not, who's going to maintain it?

  • If he does leave, and leaves under bad circumstances, will someone be trained in how his system works? Or will upper management just assume that everyone who worked with him understands this code? (Take it from me, I've been in this position quite a few times; it's not funny when you don't have the first clue what you're looking at.)

  • What happens if the HTML spec changes in such a way that his concatenation breaks future revisions (because he concatenates using some protected character sequence, for example), or future spec changes break his concatenation? (This scenario has already happened more times than you might realize.)

  • If your database server goes down in a ball of flames, which is easier to recover? A standards-compliant database structure that everyone understands, or a hideously obfuscated three-headed mess of stored procs and custom data schemes? (I know which I'd pick.)

I could sit here and give you a dozen more reasons easily, but I'm not going to.

Instead, I'm going to propose what just might be a way out that keeps you both happy.

He might be doing things this way because he doesn't know better.

You mention he's more of a DBA than a DEV, and that's fine; I've worked with many guys who are amazing DBAs, but put a compiler in front of them and they'll curl up in a ball and start to cry, and will happily admit their lack of knowledge.

Likewise, I've worked with DBAs who are so utterly stubborn and wedded to their way of doing things, for whatever reason, that they'll enforce that structure all the way down the line, often because they don't really understand what's happening beyond the database but, like many of us, don't want to admit it (human nature, is what it is...).

Whichever way you look at it, he's not going to budge, and you're not going to be happy until your "do it correctly" inner dev is pacified.

What you need is a compromise.

In my mind, you can easily achieve this compromise using views.

See if you can get him to put all his logic into a set of views; the code that needs this approach can then query those just as easily as it queries the existing tables and/or stored procs.

This way, the actual table definitions remain in pristine condition with separate columns, so you can write standard, compliant ORM-based code against them.
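
To make that concrete, here's roughly what I mean. The view itself would be his to write; on your side you map a plain entity onto it and query it like any other table. The view name below ("vw_StationLookup") is a placeholder, and the columns simply mirror the Stations example from your question:

using System.Data.Entity; // EF6

// Plain entity mapped onto a database view that still exposes discrete columns.
public class StationLookupRow
{
    public int StationId { get; set; }
    public int VisnId { get; set; }
    public string Station3N { get; set; }
    public string StationName { get; set; }
    public bool Inactive { get; set; }
}

public class LookupContext : DbContext
{
    public DbSet<StationLookupRow> StationLookups { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // EF will happily treat a view as a read-only table.
        modelBuilder.Entity<StationLookupRow>()
                    .HasKey(r => r.StationId)
                    .ToTable("vw_StationLookup");
    }
}

// Usage: ordinary LINQ against the view, no stored-proc plumbing required.
// using (var db = new LookupContext())
// {
//     var active = db.StationLookups.Where(s => !s.Inactive).ToList();
// }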

I've used this approach very successfully in the past for a not too dissimilar scenario, where I was tasked with creating an application to process some geographic data.

In my particular case, I needed to get some data back from a DB table, but this table had already been pre-processed in the back-end to return the data in a format suitable for another app. My solution here was to transform that data back to what was needed for my task, using a view.

Doing this kept the original interface and program code working, but gave me the flexibility to change things for the better in the slice of the work I needed to do.

If you can't take this approach at the DB level, because for example he controls the DB and its access, then you may have to resort to other tricks.

If you're working in C# as you say you are, then it's not difficult to hook the code in different modules.

.NET code, whatever language it's written in, is compiled down to simple MSIL, and there are dozens of ways of injecting your own code in there to replace anything that's already running.

Robert Harvey above mentioned using a product like Automapper.

This is quite a reasonable suggestion, but one you'll want to give some thought to.

I use Automapper all the time, and it's good at the job it's designed for: passing and matching DTOs between layers in your N-Tier design.

Automapper is typically used to flatten and combine multiple DTOs into a single-level view model, typically for use in a presentation-layer-driven pattern such as MVC or MVVM.

In this particular case, however, I don't quite see how Automapper might help you. AM is conventions-based. This means that if you have, say, an EF object on one side that looks like this:

public class EFobject
{
  public string name { get; set; }
  public string email { get; set; }
  public string dept { get; set; }
}

and a regular POCO on the other side that looks like this:

public class RegularPoco
{
  public string name { get; set; }
  public string email { get; set; }
}

Automapper will ensure that the name and email fields get copied across, but will ignore the dept field.

If you were concatenating the strings at the code level, then you could easily use Automapper to produce the same result your colleague is generating, by specifying mapping rules that take those three columns and push them into one column with the ':' symbol between each. Breaking them apart again, however, is a job for which AM is the wrong tool.
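
Going in the concatenating direction, for example, the mapping rule would look something like this (the types and names here are illustrative, not taken from your code):

using AutoMapper;

// Source shape with the discrete columns.
public class StationDto
{
    public int VisnId { get; set; }
    public string Station3N { get; set; }
    public string StationName { get; set; }
}

// Destination shape with the single combined display column.
public class StationListItem
{
    public string TextField { get; set; }
}

public static class LookupMappings
{
    public static IMapper Build()
    {
        var config = new MapperConfiguration(cfg =>
            cfg.CreateMap<StationDto, StationListItem>()
               .ForMember(dest => dest.TextField,
                          opt => opt.MapFrom(src =>
                              "V" + src.VisnId + ": " + src.Station3N + ": " + src.StationName)));

        return config.CreateMapper();
    }
}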

Another option you have is to handle this in your business layer.

If you're following good N-Tier architecture and design principles, then at an absolute minimum, you should have 'Presentation', 'Business Rules' and 'Data' layers.

Personally, I usually separate my DTOs and view models out into distinct layers too, but that's just to aid the way I work in Visual Studio.

With a business rules layer, nothing makes it to the presentation layer without first going through it, and at that point you have the entire .NET ecosystem at your disposal to break that string apart again.
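
If the format really is stable ('V' + VISN + ': ' + Station3N + ': ' + StationName), the business-rules method could be as dull as this; a sketch under exactly that assumption:

// Splits the concatenated TextField back into its parts, assuming the
// "V{visn}: {station3N}: {stationName}" format never changes and none of
// the parts contain ": " themselves.
public static class LookupTextParser
{
    public static bool TryParse(string textField, out string visn, out string station3N, out string stationName)
    {
        visn = station3N = stationName = null;

        if (string.IsNullOrEmpty(textField))
            return false;

        var parts = textField.Split(new[] { ": " }, System.StringSplitOptions.None);
        if (parts.Length != 3)
            return false;

        visn = parts[0].TrimStart('V');
        station3N = parts[1];
        stationName = parts[2];
        return true;
    }
}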

However, as you point out, that's no help when you don't know what the original data was, so you have one other possible approach to dealing with this.

If you can't break him of his habit, and he resists change, then use "praise-based change".

Praise his approach and appear helpful; suggest "improving" on his way of doing things by adding in data you can digest at the business rules layer. For example, get him to change the output from something like:

1234:station1:fred

to

1234(stationid,int):station1(stationref,string):fred(stationname,string)

In his SELECT-driven world he can then simply pass things through a simple search-and-replace that removes anything in the brackets, including said brackets, but... it gives you all the information you need to deserialize the thing back into a regular POCO.
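
Consuming that annotated format is then mechanical. A first cut might look like this (purely illustrative; the "value(name,type)" convention is just the one invented above):

using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class AnnotatedLookupParser
{
    // Matches "value(name,type)" segments, e.g. "1234(stationid,int)".
    private static readonly Regex Segment =
        new Regex(@"(?<value>[^:()]*)\((?<name>[^,]+),(?<type>[^)]+)\)", RegexOptions.Compiled);

    // Returns name -> value pairs, e.g. "stationid" -> "1234".
    public static Dictionary<string, string> Parse(string annotated)
    {
        var result = new Dictionary<string, string>();
        foreach (Match m in Segment.Matches(annotated))
        {
            result[m.Groups["name"].Value] = m.Groups["value"].Value;
        }
        return result;
    }

    // His display string is recovered by simply stripping the "(...)" annotations.
    public static string ToDisplayText(string annotated)
    {
        return Regex.Replace(annotated, @"\([^)]*\)", string.Empty);
    }
}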

Of everything I've outlined here, this is the route I'd least recommend; but sometimes you have no choice but to go with the flow, and improvements for the greater good of your task are sometimes the only effective weapon you have.

In summary:

Standards, patterns, and practices exist for a reason, and that reason extends way beyond initial development, through the ALM cycle, until the product is eventually replaced.

I've come across so many DBAs (and devs, for that matter) over the years who just can't see this bigger picture beyond finishing the list of tickets they have to work on, and it scares the living daylights out of me.

Team members should never be looking for ways to do things bigger & better than everyone else. Instead, you should always be looking for common ground, and common sense.

As one of the other posters above put it, simplicity is always the best option. Never forget what KISS stands for, because one fine day, when the S**t hits the fan, it might just be KISS that saves you from getting well and truly roasted.
