The basics of most procedural languages are pretty much the same.
They offer:
- Scalar data types: usually boolean, integers, floats and characters
- Compound data types: arrays (strings are a special case) and structures
- Basic code constructs: arithmetic over scalars, array/structure access, assignments
- Simple control structures: if-then, if-then-else, while, for loops
- Packages of code blocks: functions, procedures with parameters
- Scopes: areas in which identifiers have specific meanings
If you understand this, you have a good grasp of 90% of the languages on the planet.
What makes these languages slightly more difficult to understand is the incredible variety of odd syntax that people use to say the same basic things. Some use terse notation involving odd punctuation (APL being an extreme). Some use lots of keywords (COBOL being an excellent representative). That doesn't matter much. What does matter is whether the language is complete enough by itself to do complex tasks without causing you to tear your hair out. (Try coding some serious string hacking in Windows DOS shell script: it is Turing-capable but really bad at everything.)
More interesting procedural languages offer:
- Nested or lexical scopes, namespaces
- Pointers allowing one entity to refer to another, with dynamic storage allocation
- Packaging of related code: packages, objects with methods, traits
- More sophisticated control: recursion, continuations, closures
- Specialized operators: string and array operations, math functions
While not technically a property of the language itself, but rather of the ecosystem in which it lives, the libraries that are easily accessible or shipped with the language as part of its development tools also matter. Having a wide range of library facilities simplifies and speeds up writing applications, simply because one doesn't have to reinvent what the libraries already do. While Java and C# are widely thought to be good languages in and of themselves, what makes them truly useful are the huge libraries that come with them, and the easily obtainable extension libraries.
The languages which are harder to understand are the non-procedural ones:
- Purely functional languages, with no assignments or side effects
- Logic languages, such as Prolog, in which symbolic computation and unification occur
- Pattern matching languages, in which you specify shapes that are matched to the problem, and often actions are triggered by a match
- Constraint languages, which let you specify relations and automatically solve equations
- Hardware description languages, in which everything executes in parallel
- Domain-specific languages, such as SQL, Colored Petri Nets, etc.
There are two major representational styles for languages:
- Text based, in which identifiers name entities and information flows are encoded implicitly in formulas that use the identifiers to name the entities (Java, APL, ...)
- Graphical, in which entities are drawn as nodes, and relations between entities are drawn as explicit arcs between those nodes (UML, Simulink, LabView)
The graphical languages often allow textual sublanguages as annotations in nodes and on arcs. Odder graphical languages recursively allow graphs (with text :) in nodes and on arcs. Really odd graphical languages allow annotation graphs to point to graphs being annotated.
Most of these languages are based on a very small number of models of computation:
- The lambda calculus (basis for Lisp and all functional languages)
- Post systems (or string/tree/graph rewriting techniques)
- Turing machines (state modification and selection of new memory cells)
Given the focus by most of industry on procedural languages and complex control structures, you are well served if you learn one of the more interesting languages in this category well, especially if it includes some type of object-orientation.
I highly recommend learning Scheme, in particular from a really wonderful book:
Structure and Interpretation of Computer Programs. This describes all these basic concepts. If you know this stuff, other languages will seem pretty straightforward except for goofy syntax.
I have to answer, "All of the above." People argue about whether coding is an art, a craft, an engineering discipline, or a branch of mathematics, and I think it's fairest to say it's some of each. As such, the more techniques you bring to mastery of the language, the better. Here is a partial list:
- Use the language all day, every day. Usually this means being employed full-time in the language.
- Read all you can about the language, especially "best practices" and idioms.
- Join a users group to talk with others about the language and what they do with it.
- Work with other people's code! There is no faster way to learn what not to do in a language than to have to clean up after someone who did something awful.
- Support the code you write: every bug becomes a tour of your worst decisions!
- Study computer science and languages in general.
- Learn a very different language. A great complement to C would be a functional language like Lisp. This will turn the way you think about your procedural language inside out.
- Learn to use the frameworks and APIs available for that language.
- Take the time to do your own experiments with the language. SICP is not applicable to C, but the attitude of learning a language by testing its limits is a very productive one.
- Read the history of the language to learn why it was made the way it is.
- Attend conferences to hear the language authors speak, or to hear what industry leaders are doing with the language.
- Take a class in the language.
- Teach the language to others (thanks to Bryan Oakley).
In summary, do everything you can think of. There is no way to know everything about most languages. Every learning technique you use brings an additional perspective to your understanding.
Even though terminology is far from standardized, a common way is to categorize the major programming paradigms into procedural, functional, logical, object-oriented, and generic programming.
You seem to already know what procedural programming is like.
In functional languages functions are treated as first-class objects. In other words, you can pass a function as an argument to another function, or a function may return another function. The functional paradigm is based on lambda calculus, and examples of functional languages are LISP, Scheme, and Haskell. Interestingly, JavaScript also supports functional programming.
In logical programming you define predicates which describe relationships between entities, such as president(Obama, USA) or president(Medvedev, Russia). These predicates can get very complicated and involve variables, not just literal values. Once you have specified all your predicates, you can ask questions of your system, and get logically consistent answers. The big idea in logical programming is that instead of telling the computer how to calculate things, you tell it what things are. Example: PROLOG.
Object-oriented paradigm is in some ways an extension of procedural programming. In procedural programming you have your data, which can be primitive types, like integers and floats, compound types, like arrays or lists, and user-defined types, like structures. You also have your procedures, that operate on the data. In contrast, in OO you have objects, which include both data and procedures. This lets you have nice things like encapsulation, inheritance, and polymorphism. Examples: Smalltalk, C++, Java, C#.
Generic programming was first introduced in Ada in 1983, and became widespread after the introduction of templates in C++. This is the idea that you can write code without specifying the actual data types that it operates on, and have the compiler figure them out. For example, instead of writing a separate swap() function for each data type, you would write a single generic swap() once, and have the compiler generate specific code for whatever T might be when swap() is actually used in the code. Generic programming is supported to varying degrees by C++, Java, and C#.
It is important to note that many languages, such as C++, support multiple paradigms. It is also true that even when a language is said to support a particular paradigm, it may not support all the paradigm's features. Not to mention that there is a lot of disagreement as to which features are required for a particular paradigm.