In principle, any multi-stage optimization problem can be solved by dynamic programming, if you can come up with the right notion of state. You do not mention which class of problems you are working on, so it is hard to give general guidelines. I'll make the following assumptions here:
- You are looking at deterministic optimization problems (as opposed to stochastic optimization problems).
- You have to choose a finite number of decisions, and in the end you want to maximize a reward that depends on all of your decisions (the argument also works if you are interested in minimizing costs).
- You can decompose the total reward into a sum of individual rewards, ideally in such a way that decision $n$ influences the total reward for the first time only from the $n$-th term onwards.
In order to solve such a problem by dynamic programming, you need to come up with a state $\text{state}_n$ that satisfies the following properties:

$$\text{state}_{n+1} = \text{function}(\text{state}_n, \text{decision}_n)$$

and

$$\text{reward}_n = \text{function}(\text{state}_n, \text{decision}_n)$$
If these conditions are satisfied, then dynamic programming gives an optimal solution. First you define value functions
$$V_n(\text{state}_n) = \max_{\text{decision}_n} \left[\, \text{reward}_n + V_{n+1}(\text{state}_{n+1}) \,\right]$$
and recursively solve for these value functions in a backward manner.
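To make the backward recursion concrete, here is a minimal Python sketch. The toy problem at the bottom (its states, transition, and reward) is purely illustrative and not from your question:

```python
def solve_dp(n_stages, states, decisions, transition, reward):
    """Backward dynamic programming over a finite horizon.

    transition(state, decision) -> next state
    reward(stage, state, decision) -> immediate reward
    """
    # Terminal condition: no reward can be collected after the last stage.
    V = {n_stages: {s: 0.0 for s in states}}
    policy = {}
    for n in range(n_stages - 1, -1, -1):  # solve backwards in time
        V[n], policy[n] = {}, {}
        for s in states:
            # Bellman recursion: V_n(s) = max_d [ reward_n + V_{n+1}(s') ]
            value, best = max(
                (reward(n, s, d) + V[n + 1][transition(s, d)], d)
                for d in decisions
            )
            V[n][s], policy[n][s] = value, best
    return V, policy

# Toy example: at each of 3 stages pick 0 or 1; the state is how many 1s
# were picked so far (capped at 3), and picking 1 has diminishing returns.
V, policy = solve_dp(
    n_stages=3,
    states=range(4),
    decisions=(0, 1),
    transition=lambda s, d: min(s + d, 3),
    reward=lambda n, s, d: d * (3 - s),
)
print(V[0][0])  # optimal total reward from the initial state: 6.0
```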
NOTE: This is the simplest class of problems that can be solved using dynamic programming. For more details, see any standard book on dynamic programming, e.g., Bertsekas.
It depends on the kind of search you want to perform. Each of the algorithms performs particularly well for certain types of search, but you have not stated the context of your searches.
Here are some typical thoughts on search types:
Boyer-Moore: works by pre-analyzing the pattern and comparing from right-to-left. If a mismatch occurs, the initial analysis is used to determine how far the pattern can be shifted w.r.t. the text being searched. This works particularly well for long search patterns. In particular, it can be sub-linear, as you do not need to read every single character of your text.
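To make the right-to-left idea concrete, here is a Python sketch of the bad-character heuristic in its simplified Boyer-Moore-Horspool form (full Boyer-Moore adds a second, "good suffix" shift table on top of this):

```python
def horspool_search(text, pattern):
    """Simplified Boyer-Moore (Horspool): bad-character shifts only."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # For each character in the pattern (except its last position), record
    # how far the pattern may shift when that character of the text is
    # aligned with the pattern's last position.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:  # compare right-to-left
            j -= 1
        if j < 0:
            return i  # full match starting at position i
        # A text character not in the pattern allows a full shift of m:
        # this is how the algorithm skips text without reading it.
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_search("hello world", "world"))  # 6
```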
Knuth-Morris-Pratt: also pre-analyzes the pattern, but tries to re-use whatever was already matched in the initial part of the pattern to avoid having to rematch it. This can work quite well if your alphabet is small (e.g., DNA bases), as you get a higher chance that your search patterns contain reusable subpatterns.
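Here is a Python sketch of that reuse: `fail[i]` records the length of the longest proper prefix of `pattern[:i+1]` that is also a suffix of it, which is exactly the already-matched part that can be kept after a mismatch:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: never re-examines a text character."""
    m = len(pattern)
    if m == 0:
        return 0
    # Build the failure table over the pattern itself.
    fail = [0] * m
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text in a single left-to-right pass.
    k = 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]          # reuse the already-matched prefix
        if c == pattern[k]:
            k += 1
        if k == m:
            return i - m + 1         # first match found
    return -1

print(kmp_search("abxabcabcaby", "abcaby"))  # 6
```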
Aho-Corasick: needs a lot of preprocessing, but does so for a whole set of patterns at once. If you know you will be looking for the same search patterns over and over again, then this is much better than the others, because you need to analyse the patterns only once, not once per search.
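Here is a compact Python sketch of that preprocessing (a trie over all patterns, plus failure links computed by BFS) and the single scan that follows; the function names are illustrative:

```python
from collections import deque

def build_automaton(patterns):
    """Build the Aho-Corasick automaton once, for all patterns."""
    goto, fail, out = [{}], [0], [[]]        # node 0 is the trie root
    for p in patterns:                       # 1) insert each pattern
        node = 0
        for c in p:
            if c not in goto[node]:
                goto.append({}); fail.append(0); out.append([])
                goto[node][c] = len(goto) - 1
            node = goto[node][c]
        out[node].append(p)
    queue = deque(goto[0].values())          # 2) failure links via BFS
    while queue:
        node = queue.popleft()
        for c, nxt in goto[node].items():
            # Follow the parent's failure links until c can be matched.
            f = fail[node]
            while f and c not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(c, 0)
            out[nxt] += out[fail[nxt]]       # inherit matches ending here
            queue.append(nxt)
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan the text once, reporting (position, pattern) for every hit."""
    node, hits = 0, []
    for i, c in enumerate(text):
        while node and c not in goto[node]:
            node = fail[node]
        node = goto[node].get(c, 0)
        for p in out[node]:
            hits.append((i - len(p) + 1, p))
    return hits

print(search("ushers", *build_automaton(["he", "she", "his", "hers"])))
# [(1, 'she'), (2, 'he'), (2, 'hers')]
```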
Hence, as usual in CS, there is no definitive answer as to which is best overall. It is rather a matter of choosing the right tool for the job at hand.
Another note on your worst-case reasoning: do consider the kinds of searches required to create that worst case, and think thoroughly about whether these are really relevant in your case. For example, the $O(mn)$ worst-case complexity of the Boyer-Moore algorithm stems from a search pattern and a text that each use only one character (like finding `aaa` in `aaaaaaaaaaaaaaaaaaaaa`). Do you really need to be fast for searches like that?
I assume you're asking how to measure an algorithm's time complexity with respect to its input size (how the running time grows as $n$ grows). One way is to model the algorithm as a recurrence equation and then solve it. Common techniques are the master theorem, substitution, and recursion trees.
The binary search algorithm can be seen as a recurrence: one comparison, then the problem size is divided in half. So $T(n) = T(n/2) + 1$. Solving this with the master theorem shows that the running time is $\Theta(\log n)$.
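As an illustration, here is the iterative form of binary search; each loop iteration does constant work and halves the remaining range, which is exactly the recurrence above:

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:            # each iteration is the "+1" in T(n) = T(n/2) + 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1       # discard the lower half
        else:
            hi = mid - 1       # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```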
For a complete overview of this kind of material, I suggest working through these two courses:
http://aduni.org/courses/discrete and http://aduni.org/courses/algorithms