You're thinking of memoization, which is related to but distinct from DP. Memoization is "top down", DP is "bottom up". DP generally has better space complexity (memory requirements). For example, for a function that computes the Fibonacci sequence, the space complexity with memoization is O(n), vs O(1) with DP.
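A minimal sketch of the O(1)-space version (assuming the fib(1) = 0 convention and that you only want the nth value, not the whole sequence):

    def fib_constant_space(n):
        # Keep only the two most recent values; nothing here grows
        # with n, so the memory footprint stays O(1).
        prev, curr = 0, 1
        for _ in range(n - 1):
            prev, curr = curr, prev + curr
        return prev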
Both top-down and bottom-up approaches can be used with dynamic programming. DP is about how you approach solving a problem by breaking it into subproblems (hence the term "optimal substructure"). Whether you use that structure for a top-down (often memoized) solution or a bottom-up solution is a choice for the implementor, one that depends on how you're using your solver and its solutions.
If you need to solve one problem, bottom-up is often the most efficient. But if you need to solve many related problems (say, trialing many different initial conditions), then you may want the memoized solution.
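For instance, a sketch using Python's functools.lru_cache, where the cache persists across calls so repeated, related queries reuse earlier work:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Top-down, memoized: each distinct n is computed once,
        # then served from the shared cache on later calls.
        if n <= 2:
            return n - 1  # fib(1) = 0, fib(2) = 1
        return fib(n - 1) + fib(n - 2)

    # Many related queries share one cache:
    for n in (10, 20, 15):
        fib(n)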
Fibonacci is a bad example because it relies on the linearity of `prev + current` to achieve O(1) memory complexity. A lot of divide-and-conquer schemes exploit the symmetry and linearity of their internals, but if the operation is nonlinear then it requires memoization/caching.
On that note, I really struggle to understand what's special about "bottom up" versus "top down" DP, or what distinguishes it from any other algorithm.
Bottom-up solutions to DP problems allow you, if the structure permits, to throw away parts of the full table, which is more memory efficient. Building the table bottom-up also avoids the repeated verification step of memoized, top-down solutions: because entries are filled in dependency order, when you reach an entry that isn't filled yet you know its related subproblems have already been computed and cached.
Someone posted a Project Euler problem about finding the maximal sum along any path (binary choice at each step) down a pyramid. The bottom-up solution only ever keeps a "cache" of one row's worth of elements (counting rows from 1): the 100th row has 100 elements, and when you move up to the 99th row you only need 99 elements by the end of calculating it, and so on. This is more memory efficient than the roughly n*(n+1)/2 elements you'd need doing a top-down, recursive, memoized version. Though in that case it's not awful on modern computers, it is a consideration for complex and large problems.
The bottom-up version also lends itself to not needing the entire pyramid in memory, only one row plus the cache at a time, so you have |row| + |row| + 1 items in memory at any given time (plus some temporary or index variables). Again, it can be very memory efficient. Technically, you don't even need the entire current row in memory, only one element of it, reading each new element as you step across the row, which reduces the memory overhead further.
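A sketch of that bottom-up pass (not the original poster's code; the tiny pyramid is just for illustration):

    def max_path_sum(pyramid):
        # Fold from the bottom row upward: cache[i] holds the best
        # sum achievable from position i of the current row down.
        # The cache never exceeds one row in length.
        cache = list(pyramid[-1])
        for row in reversed(pyramid[:-1]):
            cache = [v + max(cache[i], cache[i + 1])
                     for i, v in enumerate(row)]
        return cache[0]

    # max_path_sum([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]) == 23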
With regard to building a full table, I realize Fib is a poor example in general, but it is illustrative and fits in a comment. Say you memoize the naive version; then you have to include this check either before the recursive calls or at the start of each call:
    if n in FIB:  # either return if at the top of the function, or retrieve rather than make the recursive call
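In context, a memoized top-down version might look like this sketch, with that check paid on every call:

    FIB = {1: 0, 2: 1}

    def fib_memo(n):
        if n in FIB:  # the per-call verification step
            return FIB[n]
        FIB[n] = fib_memo(n - 1) + fib_memo(n - 2)
        return FIB[n]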
This is unneeded if you build the table bottom-up:
    def fib_bottom_up(n):
        # Build the table in order; every entry's subproblems are
        # already present, so no membership check is needed.
        FIB = {1: 0, 2: 1}
        i = 3
        while i <= n:
            FIB[i] = FIB[i - 1] + FIB[i - 2]
            i += 1
        return FIB
Bottom-up construction of the table skips those checks, which can add up for larger tables or if you're building many different tables. Consider a more complex optimization problem where different initial parameters result in different tables being generated (by the same method): the bottom-up approach will be more efficient there than the top-down approach.
Of course, a benefit of top-down is when an input only needs some subset of the table. If the generated table would be sparse, but can't easily be constructed in a sparse manner bottom-up, then top-down can be better.
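As a hypothetical illustration (the recurrence is made up for this comment), consider f(n) = max(n, f(n // 2) + f(n // 3) + f(n // 4)). Top-down memoization only computes the subproblems actually reachable from the initial n:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best(n):
        # Only entries reachable from the starting n are computed
        # and cached, so the "table" stays sparse.
        if n == 0:
            return 0
        return max(n, best(n // 2) + best(n // 3) + best(n // 4))

    # best(10**9) touches a comparatively small set of entries,
    # far fewer than the 10**9 a dense bottom-up table would need.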
https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_...