Architecture – Logical Separation vs Physical Layers


Some programmers recommend logical separation of layers over physical separation. For example, given a DL, this means we create a DL namespace rather than a DL assembly (a short sketch follows the list below).

Benefits include:

  1. Faster compilation time
  2. Simpler deployment
  3. Faster startup time for your program
  4. Fewer assemblies to reference
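
To make the distinction concrete, here is a minimal sketch (the MyApp.* names are purely illustrative) of what logical separation looks like: each layer gets its own namespace, but everything compiles into a single assembly. The physical alternative would put each namespace in its own project and DLL, yet the code itself would be identical.

```csharp
// Illustrative only: three "layers" separated by namespace, all living in one assembly.
using System.Collections.Generic;

namespace MyApp.DataAccess
{
    public class CustomerRepository
    {
        public IEnumerable<string> GetCustomerNames()
        {
            // A real implementation would query a database here.
            return new[] { "Alice", "Bob" };
        }
    }
}

namespace MyApp.Business
{
    using MyApp.DataAccess;

    public class CustomerService
    {
        private readonly CustomerRepository _repository = new CustomerRepository();

        public IEnumerable<string> GetGreetings()
        {
            foreach (var name in _repository.GetCustomerNames())
                yield return "Hello, " + name;
        }
    }
}

namespace MyApp.UI
{
    using System;
    using MyApp.Business;

    public static class Program
    {
        public static void Main()
        {
            foreach (var greeting in new CustomerService().GetGreetings())
                Console.WriteLine(greeting);
        }
    }
}
```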

I'm on a small team of 5 devs, and we have over 50 assemblies to maintain. IMO that ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for fewer than 5 assemblies, and new assemblies should be created out of technical need, not layer requirements.

Developers are worried for a few reasons:

A. People like to work in their own environment so they don't step on each other's toes.

B. Microsoft tends to create new assemblies for each technology. For example, ASP.NET has its own DLL, and so does WinForms.

C. Devs view this drive for a common assembly as a threat, because some team members have a tendency to change the common layer without regard for how it will impact the code that depends on it.

My personal view:

I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem, and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common.

I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?

Best Answer

In the .NET ecosystem, the abundance of assemblies stems from the belief that one cs/vb/fsproj file == one assembly. Throw on top of that the practice MS has been pushing for years of having one cs/vb/fsproj for every 'layer' (I put that in quotes because they have pushed both layers and tiers at different times), and you end up with solutions that contain dozens or even hundreds of project files, and ultimately assemblies.

I'm a firm believer that Visual Studio, or any IDE, is not the place where you should be architecting the assembly output of your project. Call it Separation of Concerns if you will; I believe that writing code and assembling code are two different concerns. That leads me to the point where the projects/solutions I see in my IDE should be organized and structured so as to best enable the development team to write code. My build script is where the architecture/development team(s) focus on how we are going to assemble the raw code into compiled artifacts and, ultimately, how we're going to deploy those to different physical locations. It's prudent to note that I absolutely do not use MSBuild for my build scripts: its tight coupling to the *proj files and their structures doesn't allow any flexibility in moving away from the problem of large project and assembly counts in our codebases. Instead I use other tools and feed the files needed directly to the compiler (csc.exe, vbc.exe, etc.).
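
As a rough sketch of what that can look like (this is not my actual tooling; the folder layout, output assembly name, and csc.exe path are all assumptions you would adjust for your own environment), a build step can simply gather source files by physical folder and hand them straight to the compiler, ignoring the *proj files entirely:

```csharp
// Hypothetical build step: compile everything under a few physical folders into one
// assembly, bypassing the *.csproj files. Folder names, the output assembly name, and
// the path to csc.exe are assumptions -- adjust them for your environment.
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class BuildStep
{
    static void Main()
    {
        // Every .cs file under these folders becomes part of one assembly, regardless
        // of how the code is split across projects in the IDE.
        var sources = new[] { @"src\Domain", @"src\DataAccess", @"src\Services" }
            .SelectMany(dir => Directory.GetFiles(dir, "*.cs", SearchOption.AllDirectories))
            .Select(file => "\"" + file + "\"");

        Directory.CreateDirectory("build");

        var arguments = "/nologo /target:library /out:build\\MyApp.Core.dll "
                        + string.Join(" ", sources);

        // Location of the .NET 4.x command-line compiler; newer SDKs ship csc elsewhere.
        var compiler = @"C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe";

        using (var csc = Process.Start(new ProcessStartInfo(compiler, arguments)
        {
            UseShellExecute = false
        }))
        {
            csc.WaitForExit();
            Console.WriteLine(csc.ExitCode == 0 ? "Build succeeded" : "Build failed");
        }
    }
}
```

Splitting that one assembly into two later is just a matter of running the compiler twice with different file lists; none of the source files or projects have to move.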

By taking this stance I'm able to have my development team focus on writing functionality without giving any thought to the assemblies that will be output. Someone, surely, will say, "But that means developers can put code into assemblies it shouldn't be in." The thing is, to do that you'd have to alter the build script to pull the code into the unwanted location: just because code compiles in the IDE doesn't mean it will compile when the build script runs, and ultimately the build script is the one source of truth for how assemblies are constructed. A technique I use to back this up is to create unit tests that both describe and verify the architecture and deployables of the project. If classes in the UI layer should never reference those in the data access layer, then I write a test that enforces that. Those tests will need to change if we decide to change the deployables or the architecture, but since they also double as documentation on those topics, we should be changing them anyway to keep that documentation up to date.
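
A rough sketch of such a test (the MyApp.UI and MyApp.DataAccess namespace names are assumptions, and for brevity it only inspects method parameters; a fuller version would also look at fields, properties, and base types):

```csharp
// Illustrative NUnit test that treats the architecture as executable documentation:
// no type in the UI namespace may take a data-access type as a method parameter.
using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class ArchitectureTests
{
    [Test]
    public void Ui_types_do_not_depend_on_the_data_access_layer()
    {
        // For an app whose code and tests share one assembly this is the right target;
        // otherwise, load the application assembly explicitly.
        var assembly = typeof(ArchitectureTests).Assembly;

        var violations =
            (from type in assembly.GetTypes()
             where type.Namespace != null && type.Namespace.StartsWith("MyApp.UI")
             from method in type.GetMethods(BindingFlags.Public | BindingFlags.NonPublic |
                                            BindingFlags.Instance | BindingFlags.Static)
             from parameter in method.GetParameters()
             where parameter.ParameterType.Namespace != null &&
                   parameter.ParameterType.Namespace.StartsWith("MyApp.DataAccess")
             select type.FullName + " -> " + parameter.ParameterType.FullName)
            .Distinct()
            .ToList();

        Assert.That(violations, Is.Empty,
            "UI classes must not take data-access types as dependencies.");
    }
}
```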

On the flip side of that argument, changing the assemblies and deployables becomes much easier and faster. Instead of moving code and files from one *proj file to another, and incurring all the problems that many version control systems have with that task, all you have to do is rework the build script to source each assembly's contents from the already existing physical locations. It's this capability that highlights the decoupling between how we structure our solution and *proj files and how we create our output assemblies. With this technique we can adjust both the number and the contents of the assemblies we create without ever having to adjust how the code is structured in the IDE. Not only do we have the flexibility to change our outputs regularly and easily, but when we do, developers are not impacted by having to discover where files have moved to; the re-learning overhead of the changes is non-existent.

The one drawback that many people struggle with when looking at this type of solution is that you may no longer be able to just hit F5 to run the application. While I'd argue that you don't want to do that anyway, it has to be addressed. Solutions are available, and they differ depending on the type of application you're building; the one common thread is that if you absolutely must step through the code to debug or test it, learn how to Attach to Process.

To summarize: use the IDE for developing code, and use a build scripting solution that is not reliant on the solution and *proj structures to design and create the deployables the application needs. Keep those two tasks separate and you'll see a lot less friction from deployment-level changes.
