diff --git a/pythonetc/README.md b/pythonetc/README.md
index 977ad04..b2eb723 100644
--- a/pythonetc/README.md
+++ b/pythonetc/README.md
@@ -70,7 +70,9 @@ More:
1. ./json-default.md
1. ./ipython.md
1. ./array.md
-1. ./re_compile.md
+1. ./re-compile.md
+1. ./lru-cache.md
+1. ./functools-cache.md
Out of order:
diff --git a/pythonetc/functools-cache.md b/pythonetc/functools-cache.md
new file mode 100644
index 0000000..63c1630
--- /dev/null
+++ b/pythonetc/functools-cache.md
@@ -0,0 +1,65 @@
+The decorator `functools.lru_cache` is named after its underlying cache replacement policy: when the cache size limit is reached, [Least Recently Used](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_.28LRU.29) records are evicted first:
+
+```python
+from functools import lru_cache
+
+@lru_cache(maxsize=2)
+def say(phrase):
+ print(phrase)
+
+say('1')
+# 1
+
+say('2')
+# 2
+
+say('1')
+
+# pushes the least recently used record ('2') out of the cache
+say('3')
+# 3
+
+# '1' is still cached since it was used recently
+say('1')
+
+# but '2' was removed from cache
+say('2')
+# 2
+```
+
+To avoid the limit, you can pass `maxsize=None`:
+
+```python
+@lru_cache(maxsize=None)
+def fib(n):
+ if n <= 2:
+ return 1
+ return fib(n-1) + fib(n-2)
+
+fib(30)
+# 832040
+
+fib.cache_info()
+# CacheInfo(hits=27, misses=30, maxsize=None, currsize=30)
+```
+
+Python 3.9 introduced `functools.cache`, which is the same as `lru_cache(maxsize=None)` but a little bit faster because it doesn't have all that LRU-related logic inside:
+
+```python
+from functools import cache
+
+@cache
+def fib_cache(n):
+ if n <= 2:
+ return 1
+    return fib_cache(n-1) + fib_cache(n-2)
+
+fib_cache(30)
+# 832040
+
+%timeit fib(30)
+# 63 ns ± 0.574 ns per loop
+
+%timeit fib_cache(30)
+# 61.8 ns ± 0.409 ns per loop
+```
diff --git a/pythonetc/lru-cache.md b/pythonetc/lru-cache.md
new file mode 100644
index 0000000..3647772
--- /dev/null
+++ b/pythonetc/lru-cache.md
@@ -0,0 +1,74 @@
+The decorator [functools.lru_cache](https://docs.python.org/3/library/functools.html#functools.lru_cache) caches a function's result based on the given arguments:
+
+```python
+from functools import lru_cache
+@lru_cache(maxsize=32)
+def say(phrase):
+ print(phrase)
+ return len(phrase)
+
+say('hello')
+# hello
+# 5
+
+say('pythonetc')
+# pythonetc
+# 9
+
+# the function is not called, the result comes from the cache
+say('hello')
+# 5
+```
+
+The only limitation is that all arguments must be [hashable](https://t.me/pythonetc/157):
+
+```python
+say({})
+# TypeError: unhashable type: 'dict'
+```
+
+The decorator is useful for recursive algorithms and costly operations:
+
+```python
+@lru_cache(maxsize=32)
+def fib(n):
+ if n <= 2:
+ return 1
+ return fib(n-1) + fib(n-2)
+
+fib(30)
+# 832040
+```
+
+Also, the decorator provides a few helpful methods:
+
+```python
+fib.cache_info()
+# CacheInfo(hits=27, misses=30, maxsize=32, currsize=30)
+
+fib.cache_clear()
+fib.cache_info()
+# CacheInfo(hits=0, misses=0, maxsize=32, currsize=0)
+
+# Introduced in Python 3.9:
+fib.cache_parameters()
+# {'maxsize': 32, 'typed': False}
+```
+
+And the last thing for today: you may be surprised by how fast `lru_cache` is:
+
+```python
+def nop():
+ return None
+
+@lru_cache(maxsize=1)
+def nop_cached():
+ return None
+
+%timeit nop()
+# 49 ns ± 0.348 ns per loop
+
+# the cached call is faster than the plain one!
+%timeit nop_cached()
+# 39.3 ns ± 0.118 ns per loop
+```