C# EventHandler – Is It Designed the Wrong Way?

Tags: c#, clr, event-handling, .net

State of the union:

C# events/event handlers are:

  • blocking
  • able to throw exceptions
  • sequential
  • deterministically executed in registration order
  • MulticastDelegates under the hood
  • written such that a handler is always dependent on the behavior of the handlers registered before it

Actually, they are pretty close to regular function pointers, except that an event holds a sequence of them.
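To make the MulticastDelegate point concrete, here is a simplified sketch of roughly what the compiler produces for a field-like event (the real generated accessors use interlocked operations for thread safety; `Publisher` and `Changed` are just illustrative names):

```csharp
using System;

public class Publisher
{
    // You would normally just write:
    //     public event EventHandler Changed;
    // which expands to roughly the following:

    private EventHandler _changed;   // backing multicast delegate field

    public event EventHandler Changed
    {
        add    { _changed = (EventHandler)Delegate.Combine(_changed, value); }
        remove { _changed = (EventHandler)Delegate.Remove(_changed, value); }
    }

    public void RaiseChanged()
    {
        // Invoking the multicast delegate calls every subscriber
        // synchronously, in registration order, on the current thread.
        _changed?.Invoke(this, EventArgs.Empty);
    }
}
```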

If any event subscriber does evil things (blocking, throwing exceptions):

  • the event invoker is blocked (in the worst case indefinitely)
  • the event invoker has to deal with unpredictable exceptions
  • the internal sequential calling of event handlers breaks at the first exception
  • any event handler stored after the failing one in the invocation list will not be executed

C# Example on .Net Fiddle
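For reference (the Fiddle itself is not reproduced here), a minimal self-contained sketch of that behavior; handlers run synchronously in registration order, and the throwing handler stops the chain:

```csharp
using System;

class Demo
{
    static event EventHandler Something;

    static void Main()
    {
        Something += (s, e) => Console.WriteLine("handler 1");
        Something += (s, e) => throw new InvalidOperationException("handler 2 fails");
        Something += (s, e) => Console.WriteLine("handler 3 (never runs)");

        try
        {
            // Handlers run synchronously, in registration order,
            // on the invoking thread.
            Something?.Invoke(null, EventArgs.Empty);
        }
        catch (InvalidOperationException ex)
        {
            // The exception from handler 2 propagates to the invoker;
            // handler 3 is never called.
            Console.WriteLine($"caught: {ex.Message}");
        }
    }
}
```

This prints "handler 1" and the caught exception message; "handler 3" never appears.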

I always thought of C# events as an implementation of the publish-subscribe pattern.

But:

This contradicts my intuition of publish/subscribe semantics.
Actually it seems to be the opposite.

If I publish news, a website, a book, a podcast, or a newsletter:

  • publishing is non-blocking (with respect to the subscribers)
  • consuming is concurrent
  • reader/subscriber errors don't interfere with my publishing
  • reader/subscriber errors don't interfere with other readers/subscribers

Transferred to .NET, this would mean that event.Invoke(…) behaves as follows (see the sketch after this list):

  • event.Invoke(…) is fire-and-forget
  • all subscriptions are dispatched to the thread pool
  • and executed concurrently and independently of each other (not thread-safe, though)
  • non-deterministic order of execution
  • you might have to take care of thread safety when accessing shared objects
  • one handler cannot "kill" the execution of other handlers
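A minimal sketch of such a dispatcher, assuming a thread-pool Task per handler is acceptable (the `RaiseConcurrently` extension method is illustrative, not a framework API):

```csharp
using System;
using System.Threading.Tasks;

static class EventExtensions
{
    // Illustrative helper: invokes each subscriber on the thread pool,
    // isolating failures so one handler cannot stop the others.
    public static void RaiseConcurrently(this EventHandler handler, object sender, EventArgs e)
    {
        if (handler == null) return;

        foreach (EventHandler single in handler.GetInvocationList())
        {
            Task.Run(() =>
            {
                try
                {
                    single(sender, e);
                }
                catch (Exception ex)
                {
                    // Log and swallow so the other handlers are unaffected.
                    Console.WriteLine($"handler failed: {ex.Message}");
                }
            });
        }
    }
}
```

Calling code would then use something like myEvent.RaiseConcurrently(this, EventArgs.Empty) instead of myEvent?.Invoke(…); the trade-off is that the invoker can no longer observe handler results, completion, or failures.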

Other people seem to be confused too.

PS:
I'm aware that this might be highly subjective.
I guess there've been good reasons to do it this way.

Best Answer

There seem to be two main thrusts to your critique:

  • Events are fragile in the face of hostile or buggy subscribers.
  • The publish-subscribe metaphor doesn't match your intuition.

Let's deal with the second point first. All metaphors in design patterns are analogies, not isomorphisms. The criticism is valid, but remember, the design was not motivated by a desire to match the intuitions entailed by the metaphor! The design was motivated by a desire to meet the needs of line-of-business developers at a reasonable cost.

The first point is the more important one. Events are indeed fragile in the face of hostile or buggy subscribers. There are some interesting attacks.

Consider for example the .NET 1.0 security model, which was designed to allow code of different trust levels to be "on the stack" at the same time, but "luring" attacks -- where low-trust code calls high-trust code to do something hostile -- are mitigated by stack walks. That is, when high-trust code attempts a dangerous act, all the code on the call stack must be sufficiently trusted, not just the high-trust code itself.

Now, what happens when low-trust code adds a delegate to a high-trust method as a handler of an event? When the event is triggered, the low-trust code is no longer on the stack, so the stack walk does not see it!
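A rough illustration of why that matters (the actual CAS stack-walk machinery is long deprecated and not shown here; `Publisher` and `Subscriber` are illustrative names): the code that registered the handler is no longer on the call stack when the handler later runs, so a stack-walk-based check sees only the publisher's frames.

```csharp
using System;

class Publisher
{
    public event EventHandler Triggered;

    public void Trigger() => Triggered?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    public void Attach(Publisher p)
    {
        // This frame (Subscriber.Attach) exists only while subscribing.
        p.Triggered += (s, e) =>
        {
            // When the handler runs, the stack shows Publisher.Trigger
            // and the frames above it, but not Subscriber.Attach.
            Console.WriteLine(Environment.StackTrace);
        };
    }
}

class Program
{
    static void Main()
    {
        var p = new Publisher();
        new Subscriber().Attach(p);
        p.Trigger(); // handler executes with only the publisher's frames on the stack
    }
}
```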

There are a great many ways that events may be used by hostile code to damage the user. Events were specifically not designed to mitigate these vulnerabilities. You, the developer, are responsible for writing code that ensures that only high-trust code is allowed to add an event handler, and that the handler is benign.

Your question then is "is the design wrong?" Well, the answer is that a design is wrong when it fails to achieve its design goals. Designing a system where events were robust in the face of hostile event handlers was explicitly a non-goal of the design process, so the design is not in that sense "wrong".

It is certainly possible to develop a publish-subscribe system that has the properties you want: asynchrony, isolation, robustness and so on. The default event system was not designed to have those properties. Rather, it was designed to make a checkbox turn green when you click a button. It succeeds at that goal extremely well.

Moreover, it is by no means clear that your preferences are "better" universally; they may only be better in your specific use cases. Let's look at one of them, for argument's sake. Is it good that an exception thrown by one handler prevents the execution of the next handler, or bad? You imply that it is bad, but examine the premises more carefully. Under what circumstances can we expect that a handler raises an unhandled exception?

  • The handler is hostile and attempting a denial of service attack.

In this situation, is the right thing to do to keep on trying to execute more handler code that might also be an attack? An attack that might depend on whatever broken state was produced by the crash?

  • The handler is benign but buggy and has crashed by accident

The handler is so buggy that it crashed, possibly leaving its own internal state inconsistent, and possibly losing user data. Is the right thing to do to run a second handler that might also depend on that internal state, and lose more user data?

  • The handler is benign and not buggy, and its action exposes a crashing bug in the event source.

The event source's internal data structures are so corrupted that they are throwing unexpected exceptions during event handling. Is the right thing to do to run more code?


If something unexpected has happened and the world is now in a dangerously unstable, unpredictable state, then running more arbitrary code is almost always the wrong thing to do. Sometimes isolating crashes just keeps a broken system alive longer so it can do more harm! The right thing to do when an event handler throws an unexpected exception is to shut the entire system down, stopping further damage, logging the problem, and encouraging the development team to patch the buggy, crashing code.