The idea behind doing things declaratively is that you are supposed to specify what, not how.
To me, it sounds like you're on the right track. The problem isn't that you are thinking about things the wrong way. It's that you're going too far. Let's look at what you're trying to do:
For example, suppose you have the relations band(bandName, bandCountry), venue(venueName, venueCountry), and plays(bandName, venueName), and I want to write a query that says: all venueNames such that for every bandCountry there's a band from that country that plays in a venue of that name.
So far, this is great. But then you do this:
In my mind I immediately go: "for each venueName, iterate over all the bandCountries, and for each bandCountry get the list of bands that come from it. If none of them play in venueName, go to the next venueName. Else, at the end of the bandCountries iteration, add venueName to the set of good venueNames."
In essence, you're doing unnecessary work. You know what you want, which is all you really need. But then you go on and try to figure out how to get it.
If I were you, I'd try to get in the following habit:
- Define what you want.
- Consciously stop yourself from defining how to get it.
- Figure out how to represent what you want in SQL.
It might take some time and effort on your part, but once you really grok declarative programming, it becomes very useful. In fact, you might find yourself using declarative programming in the rest of your code.
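To make the habit concrete, here is a sketch in Haskell (with made-up sample data, not anything from the question): the query can be stated almost verbatim as a list comprehension — "those venues v such that every band country c has some band from c that plays v" — with no explicit bookkeeping about when to skip a venue.

```haskell
-- Hypothetical in-memory versions of the question's relations.
bands :: [(String, String)]   -- (bandName, bandCountry)
bands = [("Kraftwerk", "Germany"), ("Mew", "Denmark")]

venues :: [(String, String)]  -- (venueName, venueCountry)
venues = [("Roxy", "US"), ("Loppen", "Denmark")]

plays :: [(String, String)]   -- (bandName, venueName)
plays = [("Kraftwerk", "Roxy"), ("Mew", "Roxy"), ("Mew", "Loppen")]

-- "All venueNames such that for every bandCountry there is a band
-- from that country that plays there" -- stated, not iterated.
goodVenues :: [String]
goodVenues =
  [ v | (v, _) <- venues
      , all (\c -> any (\b -> (b, v) `elem` plays)
                       [ b | (b, c') <- bands, c' == c ])
            [ c | (_, c) <- bands ] ]
```

Notice how the comprehension reads like the specification itself; the "go to next venueName" control flow from the imperative sketch never appears.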
If you're looking for a book, I'd recommend SQL and Relational Theory. It really helps you understand the theory behind SQL databases. Just remember to take Date's recommendations with a grain of salt. He gives very good information, but he can be a bit opinionated at times.
There are a number of good ways of looking at this. The easiest for me is to think about the relationship between inductive and coinductive definitions.
An inductive definition of a set goes like this.
The set "Nat" is defined as the smallest set such that "Zero" is in Nat, and if n is in Nat, "Succ n" is in Nat.
Which corresponds to the following OCaml:
type nat = Zero | Succ of nat
One thing to note about this definition is that the number
omega = Succ(omega)
is NOT a member of this set. Why? Suppose it were; then consider the set N that has all the same elements as Nat except that it does not have omega. Clearly Zero is in N, and if y is in N, then Succ(y) is in N; but N is smaller than Nat, which is a contradiction. So omega is not in Nat.
Or, perhaps more useful for a computer scientist:
Given some set "a", the set "List of a" is defined as the smallest set such that "Nil" is in List of a, and if xs is in List of a and x is in a, "Cons x xs" is in List of a.
Which corresponds to something like
type 'a list = Nil | Cons of 'a * 'a list
The operative word here is "smallest". If we didn't say "smallest" we would not have any way of telling if the set Nat contained a banana!
Again,
zeros = Cons(Zero,zeros)
is not a valid definition for a list of nats, just like omega was not a valid Nat.
Defining data inductively like this allows us to define functions that work on it using recursion:
let rec plus a b = match a with
  | Zero -> b
  | Succ(c) -> let r = plus c b in Succ(r)
We can then prove facts about this, like "plus a Zero = a", using induction (specifically, structural induction).
Our proof proceeds by structural induction on a.

For the base case, let a be Zero. Then plus Zero Zero = match Zero with | Zero -> Zero | Succ(c) -> let r = plus c Zero in Succ(r), so we know plus Zero Zero = Zero.

For the inductive step, let a be a nat and assume the inductive hypothesis that plus a Zero = a. We now show that plus (Succ(a)) Zero = Succ(a). This follows since plus (Succ(a)) Zero = match Succ(a) with | Zero -> Zero | Succ(c) -> let r = plus c Zero in Succ(r) = let r = plus a Zero in Succ(r) = let r = a in Succ(r) = Succ(a).

Thus, by induction, plus a Zero = a for all a in nat.
We can of course prove more interesting things, but this is the general idea.
So far we have dealt with inductively defined data, which we got by asking for the smallest set. Now we want to work with coinductively defined codata, which we get by asking for the largest set.
So
Let a be a set. The set "Stream of a" is defined as the largest set such that each x in Stream of a consists of an ordered pair (head, tail), where head is in a and tail is in Stream of a.
In Haskell we would express this as
data Stream a = Stream a (Stream a) --"data" not "newtype"
Actually, in Haskell we normally use the built-in lists, which can be either an ordered pair (an element together with the rest of the list) or the empty list:
data [a] = [] | a:[a] -- not syntax you can write yourself, but this is how the built-in type behaves
Banana is not a member of this type either, since it is not an ordered pair or the empty list. But now we can say
ones = 1:ones
and this is a perfectly valid definition. What's more, we can perform co-recursion on this co-data. Actually, it is possible for a function to be both co-recursive and recursive: recursion is characterized by the function having a domain consisting of data, while co-recursion just means it has a co-domain (also called the range) that is co-data. Primitive recursion means always "calling oneself" on smaller data until reaching some smallest data. Primitive co-recursion always "calls itself" on data greater than or equal to what you had before.
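To see that such codata is actually usable, here is a small sketch: take demands only a finite prefix, so consuming ones terminates even though ones itself is infinite.

```haskell
ones :: [Int]
ones = 1 : ones        -- a perfectly valid corecursive definition

-- take forces only the first five cells of the infinite list
firstFive :: [Int]
firstFive = take 5 ones
```

Evaluation is lazy, so the "rest" of ones is never built unless something asks for it.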
ones = 1:ones
is primitively co-recursive. The function map (kind of like "foreach" in imperative languages) is both primitively recursive (sort of) and primitively co-recursive:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = (f x):map f xs
The same goes for the function zipWith, which takes a function and a pair of lists and combines them together using that function:
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f (a:as) (b:bs) = (f a b):zipWith f as bs
zipWith _ _ _ = [] --base case
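A quick sketch of zipWith at work, using the standard Prelude version (which behaves like the definition above); note that the base case truncates the result at the shorter list:

```haskell
-- combine two lists element-by-element with (+)
sums :: [Int]
sums = zipWith (+) [1, 2, 3] [10, 20, 30]

-- the base case fires as soon as either list runs out
short :: [Int]
short = zipWith (+) [1, 2] [10, 20, 30]
```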
The classic example in functional languages is the Fibonacci sequence:
fib 0 = 0
fib 1 = 1
fib n = (fib (n-1)) + (fib (n-2))
which is primitively recursive, but can be expressed more elegantly as an infinite list:
fibs = 0:1:zipWith (+) fibs (tail fibs)
fib' n = fibs !! n -- (!!) is Haskell's list-indexing operator
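For instance, forcing a prefix of fibs (the same definition as above, repeated here so the snippet stands alone):

```haskell
-- corecursive Fibonacci: each element is the sum of the two
-- elements already produced earlier in the same list
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib' :: Int -> Integer
fib' n = fibs !! n

firstTen :: [Integer]
firstTen = take 10 fibs
```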
An interesting example of induction/coinduction is proving that these two definitions compute the same thing. This is left as an exercise for the reader.
Best Answer
Broadly speaking, declarative programming concerns itself with telling the computer what to do. Imperative programming concerns itself with telling the computer how to do it. Any non-trivial program will necessarily contain both.
Imperative programming is typically associated with control flow, loops and mutable state. Programs written in the procedural paradigm are typically imperative, though they don't necessarily have to be; you can write functions in a procedural language that operate in much the same way that they do in a functional language.
Imperative programming begins at the machine language level, where loops, jumps and state are common. From there, you can create higher abstractions by writing programming languages, starting from assembly language (which, for the most part, maps to processor instructions). You can progress to a procedural language like C, make it object-oriented (C with classes), add a lot more sophistication (C++), and still be entrenched in imperative programming for the most part.
Declarative programming is characterized more by declarations than instructions. It encompasses domain-specific languages like SQL, functional languages like Haskell, and declarative data formats like XML. Purely functional programming languages dispense with mutable state, preferring to store state in the spaces between function calls, rather than as private members of a class. You can write purely functional code in an imperative/OO language, as LINQ demonstrates.
The reason you see recursion a lot in purely functional languages is because those languages also dispense with loops. Without a loop (a construct that is grounded specifically in the how), one must resort to recursion to get loop-like behavior. As you have already discovered, many problems can be expressed naturally using recursion, once you understand it.
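As a small illustration (mine, not from the answer above): what an imperative language would write as a loop accumulating a running total becomes a recursive definition in Haskell.

```haskell
-- imperative pseudocode: s = 0; for i in 1..n { s = s + i }; return s
-- the same computation, written recursively:
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)
```

The recursion plays the role of the loop, and the accumulating variable disappears into the call structure.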
The important thing to remember about thinking declaratively is that it's just another level of abstraction. Should your students think about solving problems declaratively? Absolutely, but they must also be capable of telling the computer how to solve the problem, not just what.
It's no accident that the programming languages considered most practical and pragmatic by their practitioners are the ones in which you can write code in both declarative and imperative styles.
Further Reading
Imperative Programming
Declarative Programming