A grammar is a set of rules that define the syntax for a particular language.
When people are talking specifically about a parser (especially one generated with a parser generator like yacc, Byacc, ANTLR, etc.), they may do a bit more hair-splitting, and distinguish the syntactic rules that are encoded using the generator's rules from the parts that are enforced separately by code attached to a rule.

For example, when you define an array in C, the size you specify for the array must be strictly positive (zero is not allowed). The grammar rule might basically say something like:
    typename var_name '[' unsigned_int ']'
...and then separately, there would be a bit of code to check that the unsigned_int was non-zero. In a case like this, it can make sense to talk about the requirements of the syntax and of the grammar separately, with the two having slightly different requirements (which, enforced together, we presume match the requirements of the language itself).
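Here's a minimal sketch of how that split might look in a yacc-style grammar; the rule and token names are illustrative, not taken from any real C grammar:

    array_declarator
        : typename IDENTIFIER '[' UNSIGNED_INT ']'
            {
                /* The grammar itself accepts any unsigned integer as a
                   size; this attached C action enforces the extra,
                   non-grammatical constraint. */
                if ($4 == 0)
                    yyerror("array size must be strictly positive");
            }
        ;

The rule and the attached action together enforce what the language requires, but only the part before the braces is what people usually mean by "the grammar".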
My understanding of the history of it is that it's based on two main points...
Firstly, the language authors preferred to make the syntax variable-centric rather than type-centric. That is, they wanted a programmer to look at the declaration and think "if I write the expression `*func(arg)`, that'll result in an `int`; if I write `*arg[N]` I'll have a `float`" rather than "`func` must be a pointer to a function taking this and returning that".
The C entry on Wikipedia claims that:
Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".
...citing p122 of K&R2 which, alas, I don't have to hand to find the extended quote for you.
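As a concrete sketch of "declaration reflects use", here are declarations matching the two expressions above (the parameter type and array size are invented for illustration):

    int   *func(int x);   /* so *func(arg) is an int   */
    float *arg[10];       /* so *arg[N]    is a float  */

In each case the declaration looks like the expression you will eventually write, with the type it yields out in front.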
Secondly, it is actually really, really difficult to come up with a declaration syntax that stays consistent when you're dealing with arbitrary levels of indirection. Your example might work well for expressing the type you thought up off the bat there, but does it scale to a function taking a pointer to an array of those types, and returning some other hideous mess? (Maybe it does, but did you check? Can you prove it?)
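For a taste of how hairy that gets, here is an invented example in C's actual syntax of roughly the shape that paragraph describes: a function taking a pointer to an array and returning a pointer to an array of pointers:

    /* f takes a pointer to an array of 10 int, and returns a pointer
       to an array of 5 pointer-to-float. "Declaration reflects use":
       the expression *(*f(p))[i] is a float. */
    float *(*f(int (*p)[10]))[5];

Any alternative declaration syntax has to stay unambiguous and consistent for cases like this one, not just for the easy ones.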
Remember, part of C's success is due to the fact that compilers were written for many different platforms, so it may have been better to sacrifice some readability for the sake of making compilers easier to write.
Having said that, I'm not an expert in language grammar or compiler writing. But I know enough to know there's a lot to know ;)
Best Answer
Semantics ~ Meaning
Syntax ~ Symbolic representation
So two programs written in different languages could do the same thing (semantics) but the symbols used to write the program would be different (syntax).
A compiler will check your syntax for you (compile-time errors) and derive the semantics from the language rules (mapping the syntax to machine instructions, say), but it won't find all the semantic errors (run-time errors, e.g. calculating the wrong result because the code says add 1 instead of add 2).
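A tiny sketch of that distinction in C (the off-by-one stands in for the "add 1 instead of add 2" example above):

    #include <stdio.h>

    int main(void)
    {
        int total = 10;

        /* total = total + ;      <- syntax error: the compiler rejects it */

        total = total + 1;        /* compiles and runs, but if the intent
                                     was "add 2" this is a semantic (logic)
                                     error the compiler cannot catch */

        printf("%d\n", total);    /* prints 11, not the intended 12 */
        return 0;
    }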