C# Naming Conventions – Understanding the New Standards

access-modifiers, c#, microsoft, naming-standards

A couple of months ago, Microsoft updated their C# naming conventions (https://docs.microsoft.com/en-us/dotnet/csharp/fundamentals/coding-style/coding-conventions). Since Microsoft develops C#, I consider them the standard-setter for its coding conventions.


They want private and internal fields to be named like _myField. So setting an internal field from another class would look like this:

internal class MyClass1
{
    internal int _myInt;
}

internal class MyMainClass
{
    private MyClass1 _myClass1 = new MyClass1();

    internal void DoStuff()
    {
        _myClass1._myInt = 5;
    }
}

_myClass1._myInt = 5; just doesn't feel right to me. Maybe it's because I am used to doing it other ways.

Am I understanding this convention correctly? If so, what are the objective benefits of doing it this way as opposed to using the common PascalCase for internal fields? Are there any disadvantages?
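
For comparison, the PascalCase style I'm used to would look something like this (same made-up classes as above, with only the internal field renamed):

internal class MyClass1
{
    internal int MyInt;
}

internal class MyMainClass
{
    private MyClass1 _myClass1 = new MyClass1();

    internal void DoStuff()
    {
        // Reads like any other member access, public or internal.
        _myClass1.MyInt = 5;
    }
}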


Related question about Microsoft's 2008 convention, which goes against this new standard:
C# – Why are prefixes on fields discouraged?

Best Answer

That's just your opinion, man.

Naming conventions are inherently subjective. There is no technical reason for most naming conventions, beyond which characters are allowed in names. To that extent, it's really just a matter of what you prefer.

But then we get to team-based development, and we realize that it's quite annoying if we don't all use the same approach, yet have to share the code. This is why conventions start making their appearance.

I suspect Microsoft was thinking of internal fields as assembly-private fields, which they arguably are, and therefore logically concluded that the same naming convention would make sense. However, I agree with your question's implication that there's a difference between the two: internal field access syntax is indistinguishable from public field access syntax, provided the consumer is located in the same assembly. Seeing a private naming convention there rubs me the wrong way.
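
A quick sketch of what I mean (the type names here are invented for illustration):

public class PublicHolder
{
    public int Value;
}

internal class InternalHolder
{
    internal int _value; // named per the new convention
}

// In the same assembly as both holders:
internal class Consumer
{
    internal void Use(PublicHolder a, InternalHolder b)
    {
        a.Value = 1;  // public field access
        b._value = 1; // internal field access looks exactly the same,
                      // yet drags a private-style prefix along with it
    }
}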

The short answer here is that Microsoft is just one opinion in a room of many, many opinions. If you want to attach more weight to their opinion, that's perfectly fine, but there are plenty of others who don't and/or outright disagree.


Microsoft also contradicts itself. Its code-generation tools in Visual Studio don't use an underscore prefix, even for private fields. You can replicate this behavior:

  • Create a class and write a parameterless constructor for it.
  • Add a parameter to the constructor (string test).
  • With your text cursor on test, press Alt+Enter.
  • Choose "create and assign field 'test'".
  • What do you get?

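The generated code looks roughly like this (a sketch; the exact output may vary by Visual Studio version and settings):

public class MyClass
{
    private readonly string test;

    public MyClass(string test)
    {
        // The generated field keeps the parameter's name,
        // disambiguated with "this" rather than an underscore.
        this.test = test;
    }
}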

No underscore. I rest my case.


This is just my opinion, man.

Personally, I don't even like underscore prefixes for private fields to begin with.

In case you've not heard of Hungarian notation:

Hungarian notation is an identifier naming convention in computer programming, in which the name of a variable or function indicates its intention or kind, and in some dialects its type. [..] As the Microsoft Windows division adopted the naming convention, they used the actual data type for naming, and this convention became widely spread through the Windows API; this is sometimes called Systems Hungarian notation.

This naming convention advocates for prepending certain characters to variable names to indicate their type. sFoo for strings, iFoo for integers, lFoo for longs, ...
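
To illustrate with C# (variable names invented for the example):

// Systems Hungarian: the prefix repeats what the type already says.
string sCustomerName = "Ada";
int iRetryCount = 3;
long lFileSize = 1048576L;

// Dropping the prefixes loses no information,
// because the compiler already tracks the types.
string customerName = "Ada";
int retryCount = 3;
long fileSize = 1048576L;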

To paraphrase Douglas Adams: in the beginning, Hungarian notation was created. This has made a lot of people very angry and been widely regarded as a bad move. Today, Hungarian notation has largely fallen out of use.

The best explanation I could find as to why it's a bad solution for the problem it tries to solve can be found here. Some excerpts:

Hungarian notation only makes sense in languages without user-defined types. In a modern functional or OO-language, you would encode information about the "kind" of value into the datatype or class rather than into the variable name.

Hungarian notation just turns the programmer into a human type-checker, which is the kind of job that is typically better handled by software.

Hungarian notation was specifically invented in the sixties for use in BCPL, a pretty low-level language which didn't do any type checking at all. I don't think any language in general use today has this problem, but the notation lived on as a kind of cargo cult programming.

In any other language, Hungarian notation is just ugly, redundant, and fragile. It repeats information already known from the type system, and you should not repeat yourself.

I, and I think most developers today, agree with every point made here.

Back to underscore prefixes and why I don't like them. Hungarian notation denotes the type of a variable, which is pointless for all the reasons mentioned above; I see no reason why denoting a field's accessibility with a name prefix makes any more sense than denoting its type.

The compiler already stops you from accessing anything the access modifier says you shouldn't be able to access, so there's no added gain from making developers explicitly acknowledge at every turn that they know a field is private.
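
A minimal sketch of that enforcement (class names invented):

internal class Vault
{
    private int _secret = 42;
}

internal class Burglar
{
    internal void TrySteal(Vault vault)
    {
        // Uncommenting the next line produces compiler error CS0122:
        // 'Vault._secret' is inaccessible due to its protection level.
        // The underscore tells us nothing the compiler doesn't already enforce.
        // int loot = vault._secret;
    }
}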
