diff --git a/pythonetc/README.md b/pythonetc/README.md
index 5274f60..7e871d8 100644
--- a/pythonetc/README.md
+++ b/pythonetc/README.md
@@ -66,15 +66,16 @@ More:
1. ./index.md (19 November 2020, 18:00)
1. ./qualname.md (24 November 2020, 18:00)
1. ./digits.md (26 November 2020, 18:00)
-1. ./emoji.md (1 December 2020, 18:00)
-1. ./json-default.md
-1. ./ipython.md
-1. ./array.md
-1. ./re-compile.md
-1. ./lru-cache.md
-1. ./functools-cache.md
-1. ./tau.md
-1. ./str-append.md
+1. ./emoji.md (01 December 2020, 18:00)
+1. ./json-default.md (03 December 2020, 18:00)
+1. ./ipython.md (08 December 2020, 18:00)
+1. ./array.md (10 December 2020, 18:00)
+1. ./lru-cache.md (15 December 2020, 18:00)
+1. ./functools-cache.md (17 December 2020, 18:00)
+1. ./re-compile.md (22 December 2020, 18:00)
+1. ./tau.md (24 December 2020, 18:00)
+1. ./str-append.md (29 December 2020, 18:00)
+1. ./new-year.md (31 December 2020, 18:00)
1. ./str-concat.md
1. ./bytearray.md
diff --git a/pythonetc/array.md b/pythonetc/array.md
index 4f6c0db..b178216 100644
--- a/pythonetc/array.md
+++ b/pythonetc/array.md
@@ -1,22 +1,22 @@
-The module [array](https://t.me/pythonetc/124) is helpful if you want to be memory efficient or interoperate with C. However, working with array can be actually slower than with list:
+The module [array](https://t.me/pythonetc/124) is helpful if you want to be memory efficient or interoperate with C. However, working with an array can be slower than with a list:
```python
-In [1]: import random
-In [2]: import array
-In [3]: lst = [random.randint(0, 1000) for _ in range(100000)]
-In [4]: arr = array.array('i', lst)
+import random
+import array
+lst = [random.randint(0, 1000) for _ in range(100000)]
+arr = array.array('i', lst)
-In [5]: %timeit for i in lst: pass
-1.05 ms ± 1.61 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
+%timeit for i in lst: pass
+# 1.05 ms ± 1.61 µs per loop
-In [6]: %timeit for i in arr: pass
-2.63 ms ± 60.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
+%timeit for i in arr: pass
+# 2.63 ms ± 60.2 µs per loop
-In [7]: %timeit for i in range(len(lst)): lst[i]
-5.42 ms ± 7.56 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
+%timeit for i in range(len(lst)): lst[i]
+# 5.42 ms ± 7.56 µs per loop
-In [8]: %timeit for i in range(len(arr)): arr[i]
-7.8 ms ± 449 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
+%timeit for i in range(len(arr)): arr[i]
+# 7.8 ms ± 449 µs per loop
```
The reason is that `int` in Python is a [boxed object](https://en.wikipedia.org/wiki/Object_type#Boxing): wrapping a raw integer value into a Python `int` takes some time.
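The boxing can be observed directly: a list stores references to ready-made `int` objects, while an array stores raw C integers and has to create a fresh `int` on every access (a minimal sketch; the identity check relies on CPython behavior):

```python
import array

lst = [1000, 1000]
arr = array.array('i', [1000, 1000])

# The list holds references to existing Python objects,
# so indexing returns the very same object every time:
print(lst[0] is lst[0])  # True

# The array holds raw C ints; each access boxes the value
# into a brand-new `int` object (1000 is outside CPython's
# small-int cache, so no shared object exists):
print(arr[0] is arr[0])  # False
```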
diff --git a/pythonetc/functools-cache.md b/pythonetc/functools-cache.md
index 63c1630..af6202f 100644
--- a/pythonetc/functools-cache.md
+++ b/pythonetc/functools-cache.md
@@ -1,4 +1,4 @@
-The decorator `functools.lru_cache` named so because of underlying cache replacement policy. When the cache size limit is reached [Least Recently Used](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_.28LRU.29) records removed first:
+The decorator `functools.lru_cache` is so named because of the underlying cache replacement policy. When the cache size limit is reached, [Least Recently Used](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_.28LRU.29) records are removed first:
```python
from functools import lru_cache
@@ -43,7 +43,7 @@ fib.cache_info()
# CacheInfo(hits=27, misses=30, maxsize=None, currsize=30)
```
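The eviction order is easy to observe with a tiny cache (a sketch; the `square` function and the `calls` log are made up for illustration):

```python
from functools import lru_cache

calls = []  # log of actual (non-cached) invocations

@lru_cache(maxsize=2)
def square(x):
    calls.append(x)
    return x * x

square(1)  # miss, cache holds: 1
square(2)  # miss, cache holds: 1, 2
square(3)  # miss; 1 is least recently used and gets evicted
square(2)  # hit, still cached
square(1)  # miss again: it was evicted earlier
print(calls)  # [1, 2, 3, 1]
```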
-Python 3.9 introduced `functools.cache` which is the same as `lru_cache(maxsize=None)` but a little bit faster because it doesn't have all that LRU-related logix inside:
+Python 3.9 introduced `functools.cache`, which is the same as `lru_cache(maxsize=None)` but a little bit faster because it doesn't have all that LRU-related logic inside:
```python
from functools import cache
diff --git a/pythonetc/ipython.md b/pythonetc/ipython.md
index 132e948..489e23f 100644
--- a/pythonetc/ipython.md
+++ b/pythonetc/ipython.md
@@ -1,6 +1,6 @@
[IPython](https://ipython.org/) is an alternative interactive shell for Python. It has syntax highlighting, powerful introspection and autocomplete, searchable cross-session history, and much more. Run `%quickref` in IPython to get a quick reference on useful commands and shortcuts. Some of our favorite ones:
-+ `obj?` - print a short object info, including signature and docstring.
++ `obj?` - print brief object info, including its signature and docstring.
+ `obj??` - same as above but also shows the object source code if available.
+ `!cd my_project/` - execute a shell command.
+ `%timeit list(range(1000))` - run a statement many times and show the execution time statistics.
diff --git a/pythonetc/new-year.md b/pythonetc/new-year.md
new file mode 100644
index 0000000..718d477
--- /dev/null
+++ b/pythonetc/new-year.md
@@ -0,0 +1,25 @@
+```python
+from base64 import b64decode
+from random import choice
+
+CELLS = '~' * 12 + '¢•*@&.;,"'
+
+def tree(max_width):
+ yield '/⁂\\'.center(max_width)
+
+ for width in range(3, max_width - 1, 2):
+ row = '/'
+ for _ in range(width):
+ row += choice(CELLS)
+ row += '\\'
+ yield row.center(max_width)
+
+ yield "'| |'".center(max_width)
+ yield " | | ".center(max_width)
+ yield '-' * max_width
+ title = b'SGFwcHkgTmV3IFllYXIsIEBweXRob25ldGMh'
+ yield b64decode(title).decode().center(max_width)
+
+for row in tree(40):
+ print(row)
+```
diff --git a/pythonetc/str-append.md b/pythonetc/str-append.md
index a1d4ed0..7f5de8c 100644
--- a/pythonetc/str-append.md
+++ b/pythonetc/str-append.md
@@ -1,4 +1,4 @@
-What is the fastest way to build a string from many substrings in a loop? In other words, how to concatenate fast when we don't know in advance how much strings we have? There are many discussions about it, and the common advice is that strings are immutable, so it's better to use a list and then `str.join` it. Let's not trust anyone and just check it.
+What is the fastest way to build a string from many substrings in a loop? In other words, how do we concatenate quickly when we don't know in advance how many strings we have? There are many discussions about it, and the common advice is that since strings are immutable, it's better to collect the pieces in a list and then `str.join` it. Let's not trust anyone and just check it.
The straightforward solution:
@@ -39,7 +39,7 @@ A bit faster. What if we use list comprehensions instead?
Wow, this is 1.6x faster than what we had before. Can you make it faster?
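For reference, the two contenders discussed above can be sketched like this (function names are made up; the real measurements should be done with `%timeit` as in the surrounding posts):

```python
def build_concat(n):
    # the straightforward solution: repeated `+=` in a loop
    s = ''
    for i in range(n):
        s += str(i)
    return s

def build_join(n):
    # the commonly advised one: list comprehension, then str.join
    return ''.join([str(i) for i in range(n)])

print(build_concat(1000) == build_join(1000))  # True
```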
-And there should be disclamer:
+A few disclaimers are in order:
1. Avoid [premature optimization](http://wiki.c2.com/?PrematureOptimization), value readability over performance when using a bit slower operation is tolerable.
diff --git a/pythonetc/tau.md b/pythonetc/tau.md
index 1cff56b..b2618ec 100644
--- a/pythonetc/tau.md
+++ b/pythonetc/tau.md
@@ -1,10 +1,10 @@
-Issue with beautiful number [#12345](https://bugs.python.org/issue12345) proposed to add the following constant into stdlib:
+An issue with the beautiful number [#12345](https://bugs.python.org/issue12345) proposed adding the following constant to the stdlib:
```python
tau = 2*math.pi
```
-It was a contraversal proposal since apparently it's not hard to recreate this constant on your own which will be more explicit, since more people are familiar with π rather than τ. However, the proposal was accepted and tau landed in `math` module in Python 3.6 ([PEP-628](https://www.python.org/dev/peps/pep-0628/)):
+It was a controversial proposal: it's not hard to define this constant on your own, which is arguably more explicit, since more people are familiar with π than with τ. However, the proposal was accepted, and tau landed in the `math` module in Python 3.6 ([PEP-628](https://www.python.org/dev/peps/pep-0628/)):
```python
import math
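# The constant can be checked directly (a continuation sketch;
# assumes Python 3.6+, where `math.tau` exists):
print(math.tau == 2 * math.pi)  # True
# a full turn is simply τ radians:
print(math.cos(math.tau))       # 1.0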