I fail to see the need for dynamic programming. The following algorithm is O(n^2) in both time and space complexity (could be improved), but gets an exact answer--a significant improvement if you consider that dynamic programming is commonly used to solve problems with exponential complexity.
To start with, place every block on a graph that plots weight vs. height:
Next, create a directed graph based on the constraints of the problem (connections go from top-right to bottom-left, by necessity):
Next, place all nodes with no incoming links into a queue (marked as green--Merry Christmas :) )
A simple breadth-first traversal (seeded with that queue, so nodes are processed in topological order) lets you find the longest path through the graph. The longest path represents the tallest tower possible. No worries about loops, as cycles are not possible given the constraints of the problem.
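The traversal above can be sketched as follows. This is a minimal illustration, not the full solution: it assumes the stacking DAG has already been built as an edge list, and it measures the longest path in edge count (weighting by block height instead is a one-line change). The names `nodes`, `edges`, and `longest_path_dag` are my own.

```python
from collections import deque

def longest_path_dag(nodes, edges):
    """Longest path (in edges) through a DAG, processing nodes in
    Kahn's order, seeded with the zero-in-degree ('green') nodes."""
    indegree = {u: 0 for u in nodes}
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1

    # Queue of nodes with no incoming links -- the candidate tower bases.
    queue = deque(u for u in nodes if indegree[u] == 0)
    dist = {u: 0 for u in nodes}  # longest path ending at each node
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            dist[v] = max(dist[v], dist[u] + 1)
            indegree[v] -= 1
            if indegree[v] == 0:   # all predecessors settled
                queue.append(v)
    return max(dist.values())
```

Each node is finalized only after all of its predecessors, which is exactly why the zero-in-degree seeding makes the traversal correct.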
We can provide evidence of this algorithm's correctness by proving our initial step:
The base of the optimal tower will be represented by a node with no incoming links. Proof by contradiction: assume the base node of an optimal tower has an incoming link. The incoming link implies that that node (and by necessity, the entire tower) could be placed on the node at the source of that link. The resulting tower would be taller than the optimal tower--a contradiction, thus the original assumption must have been wrong.
Let me know if I missed a key point of the problem (or an edge case that is not solved by this method).
Start by considering the slices placed in a row, where you can pick from either of the two ends. Supposing it's your turn to choose, it's clear that pizzaAmount(slices)
is
- If there's no pizza left result is 0
- If there's only one slice result is that slice
- If there are at least two slices then the result is:
(using Python syntax)
max(slices[0] + sum(slices[1:]) - pizzaAmount(slices[1:]),
slices[-1] + sum(slices[:-1]) - pizzaAmount(slices[:-1]))
in other words you should consider both alternatives, and after taking your slice you will get all the remaining pizza except the result of the recursive call (because your friend will use the same strategy).
You can implement this with DP (or memoization) because the array is fixed and you can just use the first and last slice index as parameters.
To solve the original full problem you just need to try every slice as the starting slice and choose the one that maximizes the result.
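Putting the pieces together, here is a minimal memoized sketch of both steps. The function names `pizza_amount` and `best_first_slice` are my own, and `best_first_slice` assumes the full problem is circular (removing the starting slice leaves a linear row), per the "try every starting slice" step; adjust if your variant differs.

```python
from functools import lru_cache

def pizza_amount(slices):
    """Most pizza the player to move can get from a row of slices,
    taking from either end, with both players playing optimally."""
    slices = tuple(slices)

    @lru_cache(maxsize=None)
    def best(i, j):                # slices[i:j] remain
        if i >= j:
            return 0               # no pizza left
        if j - i == 1:
            return slices[i]       # only one slice
        total = sum(slices[i:j])
        # Take an end, then get everything left minus the opponent's
        # optimal result on the rest (same strategy, roles swapped).
        return max(
            slices[i] + (total - slices[i]) - best(i + 1, j),
            slices[j - 1] + (total - slices[j - 1]) - best(i, j - 1),
        )

    return best(0, len(slices))

def best_first_slice(circle):
    """Full problem: try every slice as the opening pick."""
    total = sum(circle)
    best_val = 0
    for k in range(len(circle)):
        rest = circle[k + 1:] + circle[:k]   # remaining arc, now linear
        best_val = max(best_val,
                       circle[k] + (total - circle[k]) - pizza_amount(rest))
    return best_val
```

For example, on slices [1, 2, 3] the mover takes 3, the friend takes 2, and the mover finishes with 1, for a total of 4.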
In principle, any multi-stage optimization problem is solvable using dynamic programming, if you can come up with the right notion of the state. You do not mention which class of problems you are working on, so it is hard to give general guidelines. I'll make the following assumptions here:
In order to solve such a problem by dynamic programming, you need to come up with a state state_n that satisfies the following properties:
and
If these conditions are satisfied, then dynamic programming gives an optimal solution. First you define value functions
and recursively solve for these value functions in a backward manner.
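As a concrete illustration, here is a minimal sketch of that backward recursion for a finite-horizon problem with additive stage costs. Everything here is a placeholder for whatever your problem defines: `f` is the transition function, `g` the stage cost, `g_T` the terminal cost, and the state and action sets are assumed finite.

```python
def backward_induction(T, states, actions, f, g, g_T):
    """Backward solve V_n(s) = min_a [ g(n, s, a) + V_{n+1}(f(n, s, a)) ],
    starting from the terminal values V_T(s) = g_T(s)."""
    V = {s: g_T(s) for s in states}          # V_T
    policies = []
    for n in range(T - 1, -1, -1):           # stages T-1, ..., 0
        new_V, pi = {}, {}
        for s in states:
            best_a = min(actions,
                         key=lambda a: g(n, s, a) + V[f(n, s, a)])
            pi[s] = best_a
            new_V[s] = g(n, s, best_a) + V[f(n, s, best_a)]
        V = new_V
        policies.append(pi)
    policies.reverse()                       # policies[n] maps state -> action
    return V, policies                       # V is V_0
```

The returned value function V is the optimal cost-to-go from stage 0, and `policies[n]` gives the optimal action at stage n.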
NOTE: This is the simplest class of problems that can be solved using dynamic programming. For more details, see any standard book on dynamic programming, e.g., Bertsekas.