Even though terminology is far from standardized, a common way is to categorize the major programming paradigms as:
- Procedural
- Functional
- Logical
- Object-Oriented
- Generic
You seem to already know what procedural programming is like.
In functional languages, functions are treated as first-class objects. In other words, you can pass a function as an argument to another function, or a function may return another function. The functional paradigm is based on lambda calculus, and examples of functional languages are LISP, Scheme, and Haskell. Interestingly, JavaScript also supports functional programming.
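To make this concrete, here is a minimal sketch in Java 8 (the names are invented, just for illustration): one function is passed as an argument, and another function is returned as a result.
import java.util.function.Function;

public class FirstClassFunctions {
    // Takes a function as an argument and applies it twice
    static int applyTwice(Function<Integer, Integer> f, int x) {
        return f.apply(f.apply(x));
    }

    // Returns a new function built at runtime
    static Function<Integer, Integer> adder(int n) {
        return x -> x + n;
    }

    public static void main(String[] args) {
        System.out.println(applyTwice(x -> x * 2, 5)); // prints 20
        System.out.println(adder(3).apply(4));         // prints 7
    }
}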
In logical programming you define predicates which describe relationships between entities, such as president(Obama, USA) or president(Medvedev, Russia). These predicates can get very complicated and involve variables, not just literal values. Once you have specified all your predicates, you can ask questions of your system and get logically consistent answers. The big idea in logical programming is that instead of telling the computer how to calculate things, you tell it what things are. Example: PROLOG.
The object-oriented paradigm is in some ways an extension of procedural programming. In procedural programming you have your data, which can be primitive types like integers and floats, compound types like arrays or lists, and user-defined types like structures. You also have your procedures that operate on the data. In contrast, in OO you have objects, which bundle both data and procedures. This gives you nice things like encapsulation, inheritance, and polymorphism. Examples: Smalltalk, C++, Java, C#.
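As a minimal sketch (the classes here are invented for illustration), bundling data with the procedures that operate on it looks like this in Java:
class Animal {
    private final String name;          // encapsulation: the data is hidden inside the object

    Animal(String name) { this.name = name; }

    String speak() { return name + " makes a sound"; }
}

class Dog extends Animal {              // inheritance: Dog reuses Animal's data and procedures
    Dog(String name) { super(name); }

    @Override
    String speak() { return "Woof!"; }  // polymorphism: the override is chosen at runtime
}
Calling speak() on an Animal reference that actually points to a Dog prints "Woof!"; that runtime dispatch is the polymorphism part.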
Generic programming was first introduced in Ada in 1983, and became widespread after the introduction of templates in C++. It is the idea that you can write code without specifying the actual data types it operates on, and have the compiler figure them out. For example, instead of writing
void swap(int&, int&);
void swap(float&, float&);
...
you would write
template <typename T>
void swap(T&, T&);
once, and have the compiler generate specific code for whatever T might be, when swap() is actually used in the code.
Generic programming is supported to varying degrees by C++, Java, and C#.
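In Java, for instance, the equivalent is a generic method. Here is a small sketch with invented names (note that Java generics are implemented by type erasure rather than by generating specialized code per type, which is part of the "varying degrees"):
class ArrayUtils {
    // One definition works for Integer[], String[], and any other object array;
    // the compiler checks each call site against the inferred T.
    static <T> void swap(T[] array, int i, int j) {
        T tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
}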
It is important to note that many languages, such as C++, support multiple paradigms. It is also true that even when a language is said to support a particular paradigm, it may not support all the paradigm's features. Not to mention that there is a lot of disagreement as to which features are required for a particular paradigm.
Let's take a look at Java. Java 8 has no type inference for local variables, which means I frequently have to spell out the type, even when it is perfectly obvious to a human reader:
int x = 42; // yes I see it's an int, because it's a bloody integer literal!
// Why the hell do I have to spell the name twice?
SomeObjectFactory<OtherObject> obj = new SomeObjectFactory<>();
And sometimes it's just plain annoying to spell out the whole type.
// this code walks through all entries in an "(int, int) -> SomeObject" table
// represented as two nested maps
// Why are there more types than actual code?
for (Map.Entry<Integer, Map<Integer, SomeObject<SomeObject, T>>> row : table.entrySet()) {
    Integer rowKey = row.getKey();
    Map<Integer, SomeObject<SomeObject, T>> rowValue = row.getValue();
    for (Map.Entry<Integer, SomeObject<SomeObject, T>> col : rowValue.entrySet()) {
        Integer colKey = col.getKey();
        SomeObject<SomeObject, T> colValue = col.getValue();
        doSomethingWith(rowKey, colKey, colValue);
    }
}
This verbose static typing gets in the way of me, the programmer. Most type annotations are repetitive line-filler, content-free regurgitations of what we already know. However, I do like static typing, as it can really help with discovering bugs, so using dynamic typing isn't always a good answer. Type inference is the best of both worlds: I can omit the irrelevant types, but still be sure that my program (type-)checks out.
While type inference is really useful for local variables, it should not be used for public APIs, which have to be unambiguously documented. And sometimes the types really are critical for understanding what's going on in the code. In such cases, it would be foolish to rely on type inference alone.
There are many languages that support type inference. For example:
- C++. The auto keyword triggers type inference. Without it, spelling out the types for lambdas or for entries in containers would be hell.
- C#. You can declare variables with var, which triggers a limited form of type inference. It still handles most cases where you want type inference. In certain places you can leave out the type completely (e.g. in lambdas).
- Haskell, and any language in the ML family. While the specific flavour of type inference used here is quite powerful, you still often see type annotations for functions, for two reasons: the first is documentation, and the second is a check that type inference actually found the types you expected. If there is a discrepancy, there's likely some kind of bug.
And since this answer was originally written, type inference has become more popular. E.g. Java 10 has finally added C#-style inference. We're also seeing more type systems on top of dynamic languages, e.g. TypeScript for JavaScript, or mypy for Python, which make heavy use of type inference in order to keep the overhead of type annotations manageable.
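For example, the snippets from earlier (reusing the same made-up names) could now be written as:
var x = 42;                                      // still an int, but inferred
var obj = new SomeObjectFactory<OtherObject>();  // the type is spelled only once
for (var row : table.entrySet()) {               // the nested generic types are inferred
    for (var col : row.getValue().entrySet()) {
        doSomethingWith(row.getKey(), col.getKey(), col.getValue());
    }
}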
Best Answer
Two features come to my mind:
- Portability. In languages like C, where datatypes like int are platform specific, an alias like DWORD makes it easier to ensure you are really using a 32-bit unsigned integer everywhere, when this is a requirement for your program, even when you port the program to a platform where int is e.g. 16 bits and DWORD therefore has to be an alias for unsigned long.
- Abstraction. In your program, you might use a lot of integer and floating point numbers for different purposes. By creating aliases like SPEED, HEIGHT, and TEMPERATURE, it's relatively easy to change one of them, e.g. from float to double, and leave the others as they are.