Dynamic Programming
Dynamic programming is a method of solving a problem by saving partial results to avoid recomputation. This can make a solution fast enough for a problem that was previously taking too long (runtime limit exceeded).
A great place to start is with the Wikipedia Dynamic Programming article.
In my words, to solve a problem with dynamic programming, two conditions must be met:
- You must discover a recursive solution
- You must find a way to store (cache) results to avoid redundant computation
Typically DP comes from a recursive solution (such as a depth-first search) where the recursion tree, instead of spreading out indefinitely, folds back on itself -- some of the branches converge. Think chess for a minute: four different first-two-move sequences can produce the exact same resulting board.
So after you develop the recursive solution, you then look at how to save partial results to avoid recomputation.
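To make that two-step process concrete, here is a minimal sketch using the Fibonacci sequence: the naive recursion recomputes the same sub-problems exponentially often, while the memoized version saves each partial result in a dictionary so it is computed only once.

```python
def fib_naive(n):
    # Step 1: the recursive solution. Correct, but it recomputes
    # fib(k) over and over -- exponential time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, cache=None):
    # Step 2: the same recursion, but each partial result is cached,
    # so every fib(k) is computed exactly once -- linear time.
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

print(fib_memo(80))  # 23416728348467685, instant; fib_naive(80) would not finish
```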
Typically, DP problems process some sort of tree, graph, or matrix recursively. Once the value for a portion of the path is calculated, it is saved so that future visits to that point can simply reuse the value instead of recomputing it.
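As a sketch of the matrix case (my own toy example, not from any particular problem set): counting the right/down paths through a grid revisits the same cells many times unless each cell's answer is cached.

```python
def count_paths(rows, cols, cache=None):
    # Number of paths from the top-left to the bottom-right of a
    # rows x cols grid, moving only right or down. Each sub-grid is
    # a sub-problem; its answer is saved on the first visit.
    if cache is None:
        cache = {}
    if rows == 1 or cols == 1:
        return 1
    if (rows, cols) not in cache:
        cache[(rows, cols)] = (count_paths(rows - 1, cols, cache)
                               + count_paths(rows, cols - 1, cache))
    return cache[(rows, cols)]

print(count_paths(18, 18))  # 2333606220, found in milliseconds
```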
The term memoization shows up a lot -- it is basically a magic word for caching the partial results.
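In Python, for instance, the standard library will do the memoization bookkeeping for you; a quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every partial result is kept
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(200))           # instant, even though the naive recursion never finishes
print(fib.cache_info())   # hits/misses show exactly how much work was pruned
```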
The term sub-problem also shows up -- basically this is the idea that you can define a solution to a piece (sub) of the puzzle.
Another term that shows up is optimal substructure. This means that the problem can be solved optimally by solving its sub-problems optimally (which is exactly what the recursion does).
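The maximum subarray problem (in the list below) is a nice illustration of optimal substructure -- a sketch of Kadane's algorithm: the best subarray ending at index i is built from the best subarray ending at i - 1.

```python
def max_subarray(nums):
    # Optimal substructure: the best sum ending at index i depends
    # only on the best sum ending at i - 1, so two running values
    # solve the whole problem in one pass (Kadane's algorithm).
    best_ending_here = best_overall = nums[0]
    for x in nums[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best_overall = max(best_overall, best_ending_here)
    return best_overall

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```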
Summary: develop a recursive solution, then figure out how to save partial results to avoid recomputation (prune the work).
Here are a few common algorithms that fall into the dynamic programming camp:
- Dijkstra's algorithm
- Fibonacci number/sequence
- Maximum subarray problem
- Floyd–Warshall algorithm
- Prim's algorithm
- Flood Fill
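As one worked example from that list, here is a compact sketch of Floyd–Warshall: the distance matrix is the cache of partial results, and each pass reuses the best paths already found rather than recomputing them.

```python
INF = float("inf")

def floyd_warshall(dist):
    # dist[i][j] starts as the direct edge weight (INF if no edge,
    # 0 on the diagonal). Pass k allows vertex k as an intermediate
    # stop; dist is updated in place, so every later pass reuses the
    # partial results instead of re-exploring paths.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0, 3, INF],
         [INF, 0, 1],
         [4, INF, 0]]
print(floyd_warshall(graph))  # [[0, 3, 4], [5, 0, 1], [4, 7, 0]]
```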
Here are some links that might interest you: