In the .NET ecosystem, the existence of so many assemblies stems from the belief that one .csproj/.vbproj/.fsproj file == one assembly. Throw on top of that the practice MS has been pushing for years of having one project file for every 'layer' (I use the quotes because they have pushed both layers and tiers at different times) and you end up with solutions that contain dozens, even hundreds, of project files, and ultimately assemblies.
I'm a firm believer that Visual Studio, or any IDE, is not the place where you should be architecting the assembly output of your project. Call it Separation of Concerns if you will; I believe that writing code and assembling code are two different concerns. That leads me to the position that the projects/solutions I see in my IDE should be organized and structured to best enable the development team to write code. My build script is where the architecture/development team(s) focus on how we are going to assemble the raw code into compiled artifacts and, ultimately, how we're going to deploy those artifacts to different physical locations. It's prudent to note that I absolutely do not use MSBuild for my build scripts. MSBuild, with its inherently tight coupling to the *proj files and their structures, doesn't give us any flexibility to move away from the problem of large project and assembly counts in our codebases. Instead I use other tools and feed the files needed directly to the compiler (csc.exe, vbc.exe, etc.).
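As a minimal sketch of the idea, assuming a hypothetical src/ layout and a csc compiler (the directory names, assembly names, and compiler path are all illustrative; the compiler command is echoed as a dry run rather than executed):

```shell
#!/bin/sh
# Sketch of a build step that sources an assembly's content from
# physical file locations rather than from a *proj file.
# Directory names, assembly names, and the CSC path are hypothetical.
CSC="${CSC:-csc.exe}"
mkdir -p build

# Gather the files for the core assembly by their location on disk.
CORE_SOURCES=$(find src/Core -name '*.cs' 2>/dev/null)

# The assembly boundary is defined here, in the script. Echoed as a
# dry run so you can inspect the exact compiler command line.
echo "$CSC" /target:library /out:build/MyApp.Core.dll $CORE_SOURCES
```

Reworking the deployables then means editing the file patterns and output names in the script, never touching the IDE solution.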
By taking this stand I'm able to have my development team focus on writing functionality without any thought to the assemblies that will be output. Someone, surely, will say "But that means that developers can put code into assemblies that it shouldn't be in." Just because the code compiles in the IDE doesn't mean that it will when running the build script; ultimately, the build script is the one source of truth for how assemblies are constructed. To sneak code into an unwanted location, you'd have to alter the build script to pull it in. A technique I use to back this up is to create unit tests that both describe and verify the architecture and the deployables of the project. If the classes in the UI layers should never reference those in the data access layer, then I write a test that enforces that. Those tests will need to change if we decide to change the deployables or the architecture, but since they also double as documentation on those topics, we should be changing them anyway to keep that documentation up to date.
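One way to sketch such an enforcement check, here as a build-script step rather than a unit test, using a hypothetical src/UI directory and MyApp.DataAccess namespace (a real project might instead reflect over the compiled assemblies):

```shell
#!/bin/sh
# Fail the build if any UI-layer source file references the data
# access layer. Directory and namespace names are hypothetical.
mkdir -p src/UI   # only so this sketch runs standalone
if grep -rn 'using MyApp\.DataAccess' src/UI; then
    echo "Architecture violation: UI must not reference DataAccess" >&2
    exit 1
fi
echo "UI/DataAccess boundary intact"
```

The check doubles as documentation: the forbidden dependency is spelled out in one place, and changing the architecture forces you to update it.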
On the flip side of that argument is the fact that changing the assemblies and deployables becomes much easier and faster. Instead of having to move code and files from one *proj file to another, and incurring all the problems that many version control systems have with that task, all you have to do is rework the build script to source each assembly's content from the already existing physical locations. It's this capability that highlights the decoupling between how we structure our solution and *proj files and how we create our output assemblies. With this technique, we are able to adjust both the number and the contents of the assemblies that we create without ever having to adjust how the code is structured in the IDE. Not only do we have the flexibility to change our outputs regularly and easily, but when we do that the developers are not impacted by having to discover where files have moved to. The re-learning overhead of the changes is non-existent.
The one drawback that many people struggle with when looking at this type of solution is that you may no longer be able to just "Hit F5" to run the application. While I'd argue that you don't want to do that anyways, it has to be addressed. The solutions are available and they differ depending on the type of application that you're building. The one root similarity between them is that if you absolutely must step through the code to debug/test it, learn how to Attach To Process.
To summarize, use the IDE for developing code. Use a build scripting solution that is not reliant on the defined solution and *proj structures to design and create the deployables that the application needs. Keep those two tasks separate and you'll see a lot less friction due to deployment level changes.
I don’t think either of them represents classical MVC.
Model
The model is the data and the business logic. A model contains the state of the running application. It should report data and state. It should validate input. It should update data and state based on input.
The model should be independent of use. In iOS, this generally means it should derive from core Objective-C classes and use only core Objective-C objects. A good test is to see whether the model would work with other compilers, like GCC Objective-C, or in other environments, like OS X.
The model should be externally controllable: it should be able to be driven by unit tests or by a completely separate set of controllers.
View
A view is anything that is displayed to a user. In iOS, this generally means anything derived from UIView (UILabel, UIButton, …). Views are dumb: they know nothing of the model, the business logic, or the context in which they are created.
Controller
The controller binds model data to views so it can be displayed to users, provides context to views so they are shown correctly (disabled, highlighted, …), and handles user input to update the model. In iOS, this generally means anything derived from UIViewController. A controller may need to process model data in some small way to get it into a view. A controller may need to process user input in some small way to make it suitable for the model.
Data Store
The data store is the permanent repository for model data. This can be a database or a web service. Usually, data stores stand outside of MVC or are attached to the model.
My Answer
Now that I got that out of the way, my answer to your question: the MVC described above is the approach I use. This has implications, some of which are difficult to deal with.
A view must know nothing of the model. If you have a property in your view named price or total, then you’re doing it wrong.
A controller must not contain business logic. If you have a controller calculate sales tax, then you’re doing it wrong.
Addressing concerns in the comments
Warning: I'm not an Android developer and have a limited understanding of how Android works.
Like data stores, long-running async processing sits outside of MVC. If you have a service that updates the model, then think of it as a method of the model. If you have a service that transforms data to show it to a user, then think of it as a method of a controller.
Novel Input
Like all input, broadcast receivers, orientation changes, and task switching should be handled by a controller. I think of iOS’s application delegate as a controller for the whole application: handling input that affects the whole application (like system notifications and application lifecycle changes).
Business Logic and Model Data
I see the Model composed of two parts, model data and business logic. I've learned it’s best to keep them separate. In practice, this means I wrap model data with business logic or add business logic on as a category of the model data.
Who I Am
To put it bluntly, it doesn’t matter who I am. I don’t think I’ve had a good original thought in my whole life. I’ve based my work on understanding patterns that other developers have created and applying them as best I can to the work I need to get done.
Since you asked, I’ve never worked in academia. I’ve been a software developer for the past 14 years. I’ve worked on code at all levels from drivers and debuggers to shiny custom views and silly apps. I’ve worked on iOS for the past 2½ years and have written many apps which have been released in the app store. I use my understanding of patterns to get my work done on time and on budget.
Update: More questions in the comments.
Models
I try to keep my data models as clean as possible. Usually they are mostly properties, sometimes conforming to NSCopying, NSCoding, and/or a JSON mapping. More and more, I include Key-Value Validation. This can be part of the data model or in a business logic category.
- (BOOL)validateValue:(inout MyValue **)value error:(out NSError **)error
{
    if (![self complexTestWithValue:*value]) {
        if (error != NULL) {
            // Report the error to the caller.
            *error = [self errorInValidation];
        }
        return NO;
    }
    return YES;
}
Business Logic
In the past, I've written lots of code which looks like
[dataSource fetchObjectsWithInput:input completion:^(NSArray *objects, NSError *error) {
    // Model objects are in objects
}];
or
[dataSource fetchObjectsWithInput:input completion:^(MyClassResult *result, NSError *error) {
    // Model objects are in result.objects
}];
Recently, I've gotten into promises (I'm using PromiseKit).
[dataSource fetchObjectsWithInput:input].then(^(MyClassResult *result) {
    // Model objects are in result.objects
});
This has been a good general-purpose factory pattern for Objective-C.
I tend to write business logic computations in categories.
@interface MyClass (BusinessLogic)
- (NSString *)textForValue; // A string to be used as label text.
- (MyResult *)computeOperationWithInput:(MyInput *)input error:(out NSError **)error;
- (MyOtherClass *)otherObjectByTransformingValues;
@end
Best Answer
Yes, IMO that architecture is awesome, and I would recommend it (not that that means much, lol). It has worked for me in the past and present... although my experience was on the .NET side of things (MVC / WCF / POCO / Entity|MSSQL), from an architecture level the layering is nearly identical.
Great idea.. good luck.