Handling Disposables with Dependency Injection Patterns

dependency-injection · patterns-and-practices · repository-pattern

I'm struggling to implement disposable objects via dependency injection, as the dependencies are injected in the constructor (and live as long as their containing object does), whereas I want the disposable to only live in scope of a single method.

Brief summary

  • Ninject is used for dependency injection.
  • There are no unit/integration/regression tests whatsoever. While I am tasked with improving the code quality, I'm not able to change that lack of testing for this project.
  • I'm having trouble selling the idea of a unit of work. Since we don't need transactional behavior, I think we can skip the unit-of-work pattern if we can live without transactions.
  • Database layer: EF6 code first + repositories (FooRepository)
  • Business layer (FooManager)
  • API layer (FooController)
  • API will be consumed by a mobile application and a web application.
  • There is a possibility of a Windows Service being created later, which would directly depend on the Business layer.

Explanation

The repositories have been designed to be disposable (the DbContext lives from repository creation to repository dispose to enable things like bulk uploading of entities – which can't happen in a single call for reasons I won't go into here).

The idea is that the FooManager class (business logic) uses the repository with a using statement:

public class FooManager
{
    public IEnumerable<Foo> GetBarryFoos()
    {
        using(var repo = new FooRepository())
        {
            return repo.GetBarryFoos();
        }
    }
}

This works, but it doesn't play nicely with the dependency injection that is now being introduced across the application:

public class FooManager
{
    IFooRepository _iFooRepository;

    public FooManager(IFooRepository iFooRepository)
    {
        _iFooRepository = iFooRepository;
    }

    public IEnumerable<Foo> GetBarryFoos()
    {
        //no using statement possible?
    }
}

Why I think this is a problem

Currently, it isn't. The manager/repository/dbcontext objects are only kept alive in scope of a single request, which is fine. But it's a problem waiting to happen.

However, a recent project at this company ended up with a bug that took several person-weeks of work just to find. What had happened is that after the application had grown to massive proportions, a decision was made to create a secondary Windows service which would automate some of the work.

In short, it was a ticketing application that was handled manually. But a subset of tickets could be synchronized between services (A opens its ticket => B opens a related ticket. B resolves its ticket => A resolves the related ticket). So they created a Windows service to handle this.

The issue was that a Windows service does not live in scope of a single request. And this caused ghost data to appear.

  • Step 1 => Automatically creating a ticket in B. The ticket in A is updated with a reference to B's ticket.
  • Step 2 => Days later, B resolves its ticket. The service then updates the A ticket to set it to resolved.

During step 2, which ran against the same DbContext that was by then days old, the update query wrote back all the stale data (cached by the context) and effectively wiped out any changes that had been made to the ticket in the meantime.

Now, I'm aware that this was a combination of circumstances. Blindly attaching entities to a context, not disposing of resources properly, spaghetti code with no logging and no documented behavior, … Many things contributed.

But now, I want to prevent that from happening again. And I think a good first step is to limit a context to a single business method. Once the method returns, the context is disposed of, regardless of whether it ran as part of a web request or a Windows service.

I think that if your Business layer requires you to use it in a web context, then you're creating a dependency that defeats the purpose of separating the business layer from the API layer.

What I have tried

1. Requesting an injected object

Pseudocode example:

public IEnumerable<Foo> GetBarryFoos()
{
    using(var repo = Ninject.Get<IFooRepository>())
    {
        return repo.GetBarryFoos();
    }
}

The issue here is that the Ninject project (a wrapper we made, which registers the dependencies) depends on the Business project (to register the FooManager dependency). Because of that, the Business project cannot depend on the Ninject wrapper in turn, as that would create a circular reference.

2. Injecting a repository factory

Pseudocode example:

public class FooManager
{
    IFooRepositoryFactory _iFooRepositoryFactory;

    public FooManager(IFooRepositoryFactory iFooRepositoryFactory)
    {
        _iFooRepositoryFactory = iFooRepositoryFactory;
    }

    public IEnumerable<Foo> GetBarryFoos()
    {
        using(var repo = _iFooRepositoryFactory.CreateNew())
        {
            return repo.GetBarryFoos();
        }
    }
}

This was the first thing I thought of. However, it sort of defeats the purpose of dependency injection.

The factory would have to call new FooRepository() to create a new instance, which doesn't use dependency injection. And I can't use dependency injection inside the factory either, because that runs into the same issue as point 1: it would create a circular dependency.

Also, if I were to accept that the factory is dependency-injection-friendly but the repository isn't, then I don't need the factory to begin with. Which brings me to the third option:

3. Remove dependency injection on the repository level.

If I remove the injection of repositories into managers (all other injections can stay), then I can have the using statement exactly as I wanted in the beginning:

public IEnumerable<Foo> GetBarryFoos()
{
    using(var repo = new FooRepository())
    {
        return repo.GetBarryFoos();
    }
}

But it's counterintuitive to remove dependency injection to improve the codebase. Plus, it would make any future testing (if it happens) impossible.

4. Cheaty hack – have repositories act as their own factories

In other words:

public class FooRepository : IFooRepository
{
    public IFooRepository CreateNew()
    {
        return new FooRepository();
    }
}

Technically this doesn't violate any dependency rule, since new FooRepository() is only called from inside FooRepository itself. However, it's really just a dirty hack to get everything to work. It also means the injected IFooRepository is only really used for its type, not for the particular object: the injected repository (and its context) is wasted.

It works, but it certainly can't be good practice.


And this is where I'm stuck…

To me, it seems like I have to choose between dependency injection and disposable repositories. And that's a bad choice either way.

I've seen issues arise from failing to dispose of repositories/contexts, caused by the same team in the same environment, so a recurrence of the same problem is a realistic expectation.

Is there a middle ground here which allows me to keep both dependency injection and disposables? If not, should I prioritize the disposables, because they fix a real-life issue, or dependency injection, because it enables the possibility of future testing (which is not guaranteed to happen – I'd say the chances are slim to none)?

Best Answer

A factory does not, in fact, defeat the purpose of dependency injection – if it is implemented correctly. Think of the factory as part of the dependency injection infrastructure. In fact, some DI frameworks allow injecting a Func<IFooRepository>, which works like a normal factory.
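To make that concrete, here is a sketch of what the Func-based approach could look like in the manager (the interface and method names are taken from the question; the Func binding itself is left to the container):

```csharp
public class FooManager
{
    // The container supplies a delegate that produces a fresh repository
    // each time it is invoked, so the manager never references the container.
    private readonly Func<IFooRepository> _repositoryFactory;

    public FooManager(Func<IFooRepository> repositoryFactory)
    {
        _repositoryFactory = repositoryFactory;
    }

    public IEnumerable<Foo> GetBarryFoos()
    {
        // A new repository (and DbContext) per call, disposed on return –
        // exactly the lifetime the question is after.
        using (var repo = _repositoryFactory())
        {
            return repo.GetBarryFoos();
        }
    }
}
```

The manager keeps a constructor-injected dependency (so it remains testable – a test can pass `() => fakeRepository`), yet each business method gets a short-lived repository.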

If you want to implement it yourself, inject the container itself into the factory and use the container to resolve the dependency. This is perfectly fine as long as you think of the factory as part of the dependency injection infrastructure, not part of your business code; in that case, the factory referencing the container is acceptable.
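This also dissolves the circular-reference problem from attempt 1: the factory *interface* lives in the Business project, while the container-aware *implementation* lives next to the container registrations. A sketch (the class name `NinjectFooRepositoryFactory` is made up for illustration):

```csharp
// In the Business project – no reference to any DI framework.
public interface IFooRepositoryFactory
{
    IFooRepository CreateNew();
}

// In the Ninject wrapper project, which already references Business.
// Only the composition root knows about both sides, so no cycle.
public class NinjectFooRepositoryFactory : IFooRepositoryFactory
{
    private readonly IResolutionRoot _resolutionRoot;

    public NinjectFooRepositoryFactory(IResolutionRoot resolutionRoot)
    {
        _resolutionRoot = resolutionRoot;
    }

    public IFooRepository CreateNew()
    {
        // Each call resolves a fresh repository from the container,
        // so constructor dependencies of FooRepository are still injected.
        return _resolutionRoot.Get<IFooRepository>();
    }
}
```

The business code only ever sees `IFooRepositoryFactory`; the service-locator-style `Get` call is confined to the one class that is allowed to know about Ninject.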

If you want a solution specific to Ninject, there is Ninject.Extensions.Factory, which supports either interface-based or Func-based factories.
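With that extension, the registration could look roughly like this (a sketch, assuming the factory interface from the question; `ToFactory` is provided by Ninject.Extensions.Factory and generates the implementation for you):

```csharp
// In the composition root / Ninject wrapper project.
var kernel = new StandardKernel();

kernel.Bind<IFooRepository>().To<FooRepository>();

// No hand-written factory class needed: the extension creates one
// whose CreateNew() resolves IFooRepository from the kernel per call.
kernel.Bind<IFooRepositoryFactory>().ToFactory();

// Func<IFooRepository> injection is likewise handled by the extension,
// so the FooManager constructor can take either form.
```

Either way, the Business project stays free of container references, which keeps the door open for the future Windows service (and for tests, should they ever materialize).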