====== LRU Cache ======
Memoization is a common optimization technique where repeatedly computed values are cached for quick lookup. While memoization can be achieved by a relatively simple modification to many recursive formulations, Python 3's ''functools'' module provides a decorator, ''lru_cache'', that adds memoization to a function without touching its body.
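
For comparison (a sketch of my own, not from this page), the "relatively simple modification" usually amounts to carrying a dictionary of already-computed results alongside the recursion; ''lru_cache'' automates exactly this bookkeeping.

<file python memo_by_hand.py>
# Hand-rolled memoization (illustrative; file name and driver are assumptions).
_memo = {}

def fib(n):
    if n in _memo:                 # already computed: constant-time lookup
        return _memo[n]
    result = n if n < 2 else fib(n-1) + fib(n-2)
    _memo[n] = result              # remember the value for future calls
    return result

print(fib(300))
</file>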
===== The Fibonacci Sequence =====
The [[https://en.wikipedia.org/wiki/Fibonacci_number|Fibonacci sequence]] is a classic example of a recursive computation that benefits from memoization, and it is used for the benchmarks below.

The naive implementation is exponential in //n//. The second example, using the LRU cache, shows little growth in running time up to moderately large values of //n//.
<file python fibonacci.py>
def fib(n):
    # Naive recursion: each call recomputes the entire subtree below it,
    # so the running time grows exponentially with n.
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

print(fib(30))   # driver for timing; the chosen n is illustrative
</file>
<file python fibonacci_lru.py>
from functools import lru_cache

@lru_cache(maxsize=None)       # unbounded cache: every computed value is kept
def fib(n):
    if n < 2:                  # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n-1) + fib(n-2)

print(fib(300))
</file>
==== Benchmarks ====
The naive recursive implementation fails quickly, as expected, taking nearly a minute at //n// = 40, while the LRU-cached version stays at roughly .02s across the whole range tested.

^ %%n%% ^ fibonacci.py((Python 3.5.2)) ^ fibonacci_lru.py ^
^ 10 | .016s | .0224s |
^ 20 | .0216s | |
^ 30 | .4664s | |
^ 40 | - | .0248s |
^ 50 | - | .024s |
^ ... | ... | ... |
^ 100 | - | .0192s |
^ 200 | - | .024s |
^ 300((300 was the chosen cutoff because at 400 we encounter a "maximum recursion depth exceeded" error.)) | - | |
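
The page does not say how these timings were taken (they look like per-script wall-clock times). A minimal sketch of one way to reproduce the cached column in-process with ''time.perf_counter'' is below; the harness is mine, not the original method. Warming the cache in increasing order of //n// also sidesteps the recursion-depth limit mentioned in the footnote, since no single call then recurses very deep.

<file python benchmark_lru.py>
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

for n in (10, 20, 30, 40, 50, 100, 200, 300, 400):
    fib.cache_clear()                  # start each measurement with a cold cache
    start = time.perf_counter()
    for i in range(n + 1):             # build the cache bottom-up: recursion stays shallow
        fib(i)
    elapsed = time.perf_counter() - start
    print("n={:>4}  {:.4f}s".format(n, elapsed))
</file>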