There's certainly a noticeable trend towards functional programming, or at least towards certain aspects of it. Some of the popular languages that at some point adopted anonymous functions are C++ (C++11), PHP (PHP 5.3.0), C# (C# 2.0), Delphi (since Delphi 2009), and Objective-C (blocks), while Java 8 will bring support for lambdas to the language. And there are popular languages that are generally not considered functional but have supported anonymous functions from the start, or at least early on, the shining example being JavaScript.
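To give a sense of what the feature looks like, here's a minimal sketch of an anonymous function using Java 8's lambda syntax (the class name and list contents are just an illustration):

```java
import java.util.Arrays;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> languages = Arrays.asList("JavaScript", "C++", "PHP");

        // An anonymous function (lambda) passed directly as an argument,
        // instead of a named method or an anonymous inner class.
        languages.forEach(lang -> System.out.println(lang.toUpperCase()));
    }
}
```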
As with all trends, looking for a single event that sparked them is probably a waste of time; it's usually a combination of factors, most of which aren't quantifiable. Practical Common Lisp, published in 2005, may have played an important role in bringing new attention to Lisp as a practical language, as for quite some time Lisp was mostly a language you'd meet in an academic setting or in very specific niche markets. JavaScript's popularity may also have played an important role in bringing new attention to anonymous functions, as munificent explains in his answer.
Other than the adoption of functional concepts by multi-purpose languages, there's also a noticeable shift towards functional (or mostly functional) languages. Languages like Erlang (1986), Haskell (1990), OCaml (1996), Scala (2003), F# (2005), Clojure (2007), and even domain-specific languages like R (1993) seem to have gained a strong following well after they were introduced. The general trend has also brought new attention to older functional languages, like Scheme (1975), and obviously Common Lisp.
I think the single most important event is the adoption of functional programming by the industry. I have absolutely no idea why that didn't use to be the case, but it seems to me that at some point during the early and mid-1990s functional programming started to find its place in the industry, starting (perhaps) with Erlang's proliferation in telecommunications and Haskell's adoption in aerospace and hardware design.
Joel Spolsky has written a very interesting blog post, The Perils of JavaSchools, where he argues against the (then) trend of universities favouring Java over other, perhaps more difficult to learn, languages. Although the blog post has little to do with functional programming, it identifies a key issue:
Therein lies the debate. Years of whinging by lazy CS undergrads like me, combined with complaints from industry about how few CS majors are graduating from American universities, have taken a toll, and in the last decade a large number of otherwise perfectly good schools have gone 100% Java. It's hip, the recruiters who use "grep" to evaluate resumes seem to like it, and, best of all, there's nothing hard enough about Java to really weed out the programmers without the part of the brain that does pointers or recursion, so the drop-out rates are lower, and the computer science departments have more students, and bigger budgets, and all is well.
I still remember how much I hated Lisp when I first met her during my college years. It's definitely a harsh mistress, and it's not a language in which you can be immediately productive (well, at least I couldn't). Compared to Lisp, Haskell (for example) is a lot friendlier: you can be productive without that much effort and without feeling like a complete idiot, and that might also be an important factor in the shift towards functional programming.
All in all, this is a good thing. Several multi-purpose languages are adopting concepts of a paradigm that might have seemed arcane to most of their users before, and the gap between the main paradigms is narrowing.
It depends on what you're trying to accomplish.
Generally, when you use OOP or any other architectural technique, you want to leverage the benefits that its approach facilitates. One of those benefits may be more reuse. Reuse is considered good because it can reduce the overall amount of code that must be written, reviewed, and maintained.
The OOP architectural paradigm also offers information hiding and encapsulation, which are important for managing and reducing complexity. Reducing complexity in code is crucial to improving code quality and increasing overall development speed.
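As a minimal sketch of what that buys you (the BankAccount class here is hypothetical): callers only see a small public surface, so the representation of the balance can change without breaking any of them.

```java
// Hypothetical example: the balance representation is hidden, so invariants
// like "the balance never changes by a non-positive amount" are enforced in
// exactly one place.
public class BankAccount {
    private long balanceInCents; // hidden state: callers can't corrupt it directly

    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceInCents += cents;
    }

    public long balanceInCents() {
        return balanceInCents;
    }
}
```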
In my opinion, one of the best ways to refine your underlying OOP architecture is to begin building a reference implementation: try to actually implement the high-level functionality, at least conceptually or in pseudo code (a small sketch follows below). Doing this forces you to carry the architectural pattern you began with through to the fundamental class definitions. In this way you get feedback from your implementation attempts about which OOP patterns are working cleanly and clearly to express your higher-level functionality, and which are hindering those efforts, either by enforcing unnecessary conformance or, conversely, by allowing excessive, distracting and confusing abstraction.
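For instance, a throwaway driver like the following (all names here are hypothetical) can be written before any real implementation exists; the stubs are just enough to make the high-level call sequence compile and run, and any friction feeds back into the class design:

```java
// Hypothetical reference implementation: sketch the high-level call
// sequence first, with stub implementations behind minimal interfaces.
interface Report { String title(); }
interface ReportRenderer { String render(Report report); }

public class ReferenceDriver {
    public static void main(String[] args) {
        Report report = () -> "Quarterly Sales";                     // stub
        ReportRenderer renderer = r -> "<h1>" + r.title() + "</h1>"; // stub
        System.out.println(renderer.render(report));
    }
}
```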
Known design patterns (GoF and others) can help you by providing expert heuristics: approaches that have worked for many people over many trials.
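As one concrete illustration, Java's Comparator is essentially the GoF Strategy pattern: the sorting policy is an object you can swap at the call site without touching the code that does the sorting.

```java
import java.util.Arrays;
import java.util.Comparator;

public class StrategyDemo {
    public static void main(String[] args) {
        String[] words = {"Simula", "ALGOL", "BCPL"};

        // Strategy #1: order by length.
        Arrays.sort(words, Comparator.comparingInt(String::length));
        System.out.println(Arrays.toString(words)); // [BCPL, ALGOL, Simula]

        // Strategy #2: natural (alphabetical) order.
        Arrays.sort(words, Comparator.naturalOrder());
        System.out.println(Arrays.toString(words)); // [ALGOL, BCPL, Simula]
    }
}
```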
But ultimately it is your project's requirements that will have to guide the final construction of your OOP foundation.
The core concept for me is whether a particular architectural approach is going to have a net benefit or a net loss. If it takes longer to create a whole boatload of abstraction concepts and work out all the kinks in your inheritance hierarchy than it would to just take a more quick-and-dirty approach and fix a few bugs or issues, then the abstraction isn't paying for itself. Ultimately, it's a judgement call.
Also, it depends on what you plan to do: if these classes are intended to be the foundation of another 100,000 lines of OOP class hierarchy definitions, it may be time well spent to thoughtfully refine the abstractions as tightly as possible.
First, let's try to establish a timeline:

- 1967: Simula, generally considered the first object-oriented language
- 1972: C is completed; around the same time, Smalltalk is taking shape at Xerox PARC
- 1979: C with Classes, which would later become C++
- 1983: C++ gets its name; Objective-C appears around the same time
- 1993: Mosaic, the first widely used graphical web browser
- 1995: Java and Delphi
Ritchie's main influences were BCPL and ALGOL (both imperative languages), and C was created at a time when Simula's and Smalltalk's approach to object orientation wasn't yet well known. It was completed around 1972, and C with Classes appeared only 7 years later, with both Dennis Ritchie and Brian Kernighan involved in its inception.
Objective-C appeared 11 years later, and both it and C++ were major and successful efforts to bring object orientation to C. The gap might seem long now, but I don't think it was particularly long at the time; remember, we're talking about an era before the World Wide Web. 1993, when Mosaic (the first widely used graphical browser) appeared, was a turning point in the industry. Java and Delphi, released a couple of years later, had a huge advantage over their predecessors, at least in terms of popularity. The web was also one of the platforms Sun was targeting with their "write once, run anywhere" (WORA) promise, perhaps the most important one at the time, and Java was heavily marketed as the language for the then newly born platform.
Another key factor is that the late 1980s and early 1990s were a time when GUIs started becoming popular, especially in home computing, while at the same time hardware was getting cheaper and cheaper. Object orientation is an extremely convenient paradigm when developing GUIs and graphics-oriented applications in general, and Turbo Pascal, Delphi, Visual Basic and (perhaps to a lesser extent) Java were lauded at the time for the simplicity they brought to GUI development.
Sun's aggressive marketing of Java obviously also played a role. However, I still vividly remember my first interaction with it, and I was definitely not impressed. My first reaction to Java was "hm, nothing more than a resource-hungry interpreted C++, I'll stick to Turbo Pascal, thank you very much" (hey, I was only 17 at the time ;). I don't know how anyone else reacted to Java back then, but for me it was just a fad, and I quickly moved on to Delphi (and Visual Basic, sigh). I only started using Java a few years later, in college, and only because it was a compulsory course.
While it's true that Java, and its flavour of object orientation, became popular extremely quickly, I really don't think the paradigm was unpopular before the mid-1990s; rather, the introduction of the web changed our definition of popularity. In any case, the mid-1990s was a time when software development in general had a spurt of popularity, with the web, the proliferation of GUIs, and cheaper hardware being key factors. Java was simply at the right place at the right time.