Migrate Go and Python notes to a new address

    
      
diff --git a/notes-go/README.md b/notes-go/README.md
index 2f5036c..6c23ca7 100644
--- a/notes-go/README.md
+++ b/notes-go/README.md
@@ -1,3 +1,3 @@
 # Go
 
-Notes about golang: tools, libraries, best practice.
+All the posts from this section now have a new address: 
diff --git a/notes-go/gostring.md b/notes-go/gostring.md
index 3ed82cb..3454bd1 100644
--- a/notes-go/gostring.md
+++ b/notes-go/gostring.md
@@ -1,137 +1,3 @@
 # How to use GoString in Go
 
-The package [fmt](https://pkg.go.dev/fmt) defines the [GoStringer](https://pkg.go.dev/fmt#GoStringer) interface, and I think it doesn't get the recognition it deserves. According to the documentation:
-
-> GoStringer is implemented by any value that has a `GoString` method, which defines the Go syntax for that value. The `GoString` method is used to print values passed as an operand to a `%#v` format.
-
-That means that you can implement a `GoString() string` method on any of your types, and it will be called when an object of that type is formatted using `%#v`. The purpose of this method is to return a representation of the object in Go syntax.
-
-## Example
-
-The [errors.New](https://pkg.go.dev/errors#New) function returns an `error`. But since `error` is just an interface, it in fact returns a private struct that satisfies the interface. And indeed, if we print the result with the `%#v` format, we'll see this struct, including all private fields:
-
-```go
-func main() {
-    e := errors.New("oh no")
-    fmt.Printf("%#v", e)
-    // Output: &errors.errorString{s:"oh no"}
-}
-```
-
-And this is the relevant source code of the package:
-
-```go
-func New(text string) error {
-    return &errorString{text}
-}
-
-type errorString struct{ s string }
-
-func (e *errorString) Error() string {
-    return e.s
-}
-```
-
-We can make it better. Let's copy-paste this source code and add one more method to the struct:
-
-```go
-func (e *errorString) GoString() string {
-    return fmt.Sprintf(`errors.New(%#v)`, e.s)
-}
-```
-
-And if we print it now, we'll see a nice and clean output of our new method:
-
-```go
-func main() {
-    e := New("oh no")
-    fmt.Printf("%#v", e)
-    // Output: errors.New("oh no")
-}
-```
-
-## Why
-
-The idea isn't new. For instance, Python has a [repr](https://docs.python.org/3/library/functions.html#repr) function whose output can be customized by adding a [`__repr__`](https://docs.python.org/3/reference/datamodel.html#object.__repr__) method to a class. The only difference is that the Python stdlib actively uses this method to make the output friendly. For example:
-
-```python
-import datetime
-d = datetime.date(1977, 12, 15)
-print(repr(d))
-# Output: datetime.date(1977, 12, 15)
-```
-
-It gives several benefits:
-
-+ It hides internal implementation from the user.
-+ It looks cleaner.
-+ It allows the user to copy-paste the output and (assuming all required imports are in place) get the object created. It can be helpful, for instance, to hardcode values for tests.
-
-## Go built-ins
-
-Built-in types give good enough output:
-
-```go
-m := map[string]int{"hello": 42}
-fmt.Printf("%#v", m)
-// map[string]int{"hello":42}
-
-b := []byte("hello")
-fmt.Printf("%#v", b)
-// []byte{0x68, 0x65, 0x6c, 0x6c, 0x6f}
-```
-
-And if you want a bit nicer output, have a look at [dd](https://github.com/Code-Hex/dd) package which is designed exactly for printing structs and built-in types in a nice Go syntax:
-
-```go
-fmt.Println(dd.Dump(m))
-// map[string]int{
-//   "hello": 42,
-// }
-```
-
-## Go stdlib
-
-The stdlib does use it in a few places, but most of the time it doesn't. We've already seen the output of `errors.New`. Let's see some more examples.
-
-✅ `time.Time`:
-
-```go
-v := time.Unix(0, 0)
-fmt.Printf("%#v", v)
-// time.Date(1970, time.January, 1, 1, 0, 0, 0, time.Local)
-```
-
-❌ `fmt.Errorf`:
-
-```go
-e := errors.New("damn")
-v := fmt.Errorf("oh no: %w", e)
-fmt.Printf("%#v", v)
-// &fmt.wrapError{msg:"oh no: damn", err:(*errors.errorString)(0xc000010250)}
-```
-
-❌ `sync.WaitGroup`:
-
-```go
-v := sync.WaitGroup{}
-fmt.Printf("%#v", v)
-// sync.WaitGroup{noCopy:sync.noCopy{}, state1:0x0, state2:0x0}
-```
-
-❌ `list.New`:
-
-```go
-v := list.New()
-fmt.Printf("%#v", v)
-// &list.List{root:list.Element{
-// next:(*list.Element)(0xc00007e150),
-// prev:(*list.Element)(0xc00007e150),
-// list:(*list.List)(nil),
-// Value:interface {}(nil)},
-// len:0}
-```
-
-## PSA
-
-I want you to know that if the stdlib doesn't do something, it doesn't mean you shouldn't. `GoString` isn't worth bothering with in your internal projects, but if you develop an open-source package, please spend a few seconds of your life and make this representation of each of your types a little bit more useful.
+This blog post now has a new address: 
diff --git a/notes-go/json.md b/notes-go/json.md
index d2928af..7ceb5d8 100644
--- a/notes-go/json.md
+++ b/notes-go/json.md
@@ -1,71 +1,3 @@
 # Work with JSON from Go and command line
 
-## Go standard library
-
-In Go, creating JSON is quite simple: just pass any data to `json.Marshal`, and you get the bytes with JSON. But parsing JSON is painful, especially when you want only one field that is located deep in the source JSON. You have to create maps and structs for all the places where the fields you need are located. See a full example on [Go by Example](https://gobyexample.com/json).
-
-## Make it better
-
-+ [gjson](https://github.com/tidwall/gjson) -- allows you to get any data from JSON without difficult conversions and type casting. For example, here is how you can get `lastName` from all objects in the `programmers` field: `gjson.Get(json, "programmers.#.lastName").Array()`. Beautiful!
-+ [sjson](https://github.com/tidwall/sjson) -- same thing for setting up values in the JSON data: `sjson.Set(json, "name.last", "Anderson")`.
-
-## Command line
-
-+ [jj](https://github.com/tidwall/jj) -- a CLI around `gjson` and `sjson` for getting and editing values in a JSON stream.
-+ [jd](https://github.com/tidwall/jd) -- same as `jj`, but interactive. A really useful tool for constructing paths for `gjson`.
-+ [jq](https://stedolan.github.io/jq/tutorial/) -- like `jj`, but with beautifying, syntax highlighting, simple installation from repos, and an alternative syntax. It is a really powerful tool with regexps, functions, many arguments, variables, and a Turing-complete language.
-
-You can beautify the output of any command that writes JSON logs to stdout just by piping it to `jq`:
-
-```shell
-$ go run tmp.go | jq
-{
-  "level": "info",
-  "message": "hello world!"
-}
-{
-  "level": "info",
-  "message": "hello again",
-  "testInt": 0
-}
-```
-
-However, `jq` fails if the input contains non-JSON lines. I don't know how to fix it. I've found an [issue](https://github.com/stedolan/jq/issues/682) where the authors recommend using the `--seq` flag for it, but it doesn't work for me. In this case, you can use [bat](https://github.com/sharkdp/bat) -- a clone of [cat](https://bit.ly/2NMm67N) with syntax highlighting, line numbering, pagination, and git support.
-
-```shell
-go run tmp.go | bat -l jsonnet
-───────┬──────────────────────────────────────────────────────
-       │ STDIN
-───────┼──────────────────────────────────────────────────────
-   1   │ broke
-   2   │ {"level":"info","message":"hello world!"}
-   3   │ broke again
-   4   │ {"level":"info","message":"hello again","testInt":0}
-```
-
-## Appendix
-
-Code that I used for tests:
-
-```go
-package main
-
-import (
-	"fmt"
-	"os"
-	"time"
-
-	"github.com/francoispqt/onelog"
-)
-
-func main() {
-	fmt.Println("broke")
-	logger := onelog.New(os.Stdout, onelog.ALL)
-	logger.Info("hello world!")
-	fmt.Println("broke again")
-	for i := 0; true; i++ {
-		time.Sleep(2 * time.Second)
-		logger.InfoWith("hello again").Int("testInt", i).Write()
-	}
-}
-```
+This blog post now has a new address: 
diff --git a/notes-go/lint-n-format.md b/notes-go/lint-n-format.md
index 1bbde86..8ea1bce 100644
--- a/notes-go/lint-n-format.md
+++ b/notes-go/lint-n-format.md
@@ -1,79 +1,3 @@
 # Go linters and formatters
 
-## Linters
-
-The first linter you face coming into Go is [go vet](https://golang.org/cmd/vet/). It is a built-in linter targeted mostly at finding bugs rather than code style.
-
-Another official linter from the Go core team is [golint](https://github.com/golang/lint). This one is targeted not at finding bugs but mostly at style issues from the official [code review guide](https://github.com/golang/go/wiki/CodeReviewComments) and [Effective Go](https://golang.org/doc/effective_go.html).
-
-If you want to catch more bugs and write more effective code, consider running more linters. Go has plenty of them. Of course, it would be hard to download, install, and run all of them. Luckily, we have the amazing [golangci-lint](https://golangci-lint.run/). It is a wrapper providing a single unified way to run and configure [more than 40 linters](https://golangci-lint.run/usage/linters/). Keep in mind that by default it runs only a few of them, so it's good to have an explicit configuration listing all the linters you want to use in the project. And if you'd like to know where to start, below are some of the most notable linters.
-
-A few of the biggest linters:
-
-+ [staticcheck](https://staticcheck.io/docs/) has a huge collection of checks targeted at finding bugs, improving code readability and performance, and simplifying code.
-+ [go-critic](https://go-critic.github.io/) also has many different checks of all kinds: bugs, performance, style issues. It is positioned as "the most opinionated linter", so you'll probably want to disable a few checks, but only a few. Don't ask which is better, staticcheck or go-critic; just use both.
-+ [gosec](https://github.com/securego/gosec) is targeted exclusively at finding security issues.
-
-A few more specific but helpful linters:
-
-+ [errcheck](https://github.com/kisielk/errcheck) finds errors that you forgot to check. [Always check all errors](https://github.com/golang/go/wiki/CodeReviewComments#handle-errors) and do something meaningful, don't let them pass unnoticed.
-+ [ineffassign](https://github.com/gordonklaus/ineffassign) finds assignments that have no effect. In most cases, it happens when you assign an error to a previously created `err` variable but forget to check it.
-
-Useful linters without golangci-lint integration (yet?):
-
-+ [revive](https://github.com/mgechev/revive) is a stricter and faster alternative to golint with a lot of rules. Most of the rules are about code style and consistency, and some are opinionated. No need to worry: the linter allows you to configure or disable any check.
-+ [sqlvet](https://github.com/houqp/sqlvet) lints SQL queries in Go code for syntax errors and unsafe constructions.
-+ [semgrep-go](https://github.com/dgryski/semgrep-go) finds simple bugs.
-
-What should you use? Everything you can! If you have an existing project, enable all the linters that are easy to integrate, and then slowly, one by one, try and enable everything that looks reasonable and helpful. Just give it a try! Also, be mindful of your coworkers: help them fix code that a linter complains about, and be ready to disable a check if it doesn't work well for your codebase, especially if it's only about code style.
-
-Further reading:
-
-+ [awesome-go-linters](https://github.com/golangci/awesome-go-linters)
-+ [golangci-lint supported linters](https://golangci-lint.run/usage/linters/)
-
-## Formatters
-
-Your basic toolkit:
-
-+ One of the most famous Go features is the built-in code formatter [gofmt](https://golang.org/cmd/gofmt/). Gofmt is your friend; use gofmt. There is no specification of what exactly gofmt does because the code evolves and changes rapidly, fixing formatting bugs and corner cases.
-+ [goimports](https://pkg.go.dev/golang.org/x/tools/cmd/goimports) is another must-have code formatter. It automatically adds missing imports and removes unused ones. I am so used to it that I don't remember when I last added an import manually.
-+ [goreturns](https://github.com/sqs/goreturns) fills in `return` statements with zero values to match the function return type. It helps save a few keystrokes. However, be careful using it: the project doesn't seem to have been in active development for a long while.
-+ [gofumpt](https://github.com/mvdan/gofumpt) is a stricter fork of `gofmt` with more rules. It is fully compatible with `gofmt` and helpful.
-
-For historical reasons, the Go extension for VSCode supports specifying only one code formatter at a time. So, every next-level tool calls all the previous tools under the hood:
-
-+ `goimports` calls `gofmt`.
-+ `goreturns` calls `goimports`.
-+ `gofumpt` provides `gofumports` which is `goimports` calling `gofumpt` instead of `gofmt`.
-
-So, I use `gofumports` as my code formatter in VSCode, which includes `gofmt`, `gofumpt`, and `goimports`.
-
-A few smaller code formatters that can come in handy:
-
-+ [golines](https://github.com/segmentio/golines) formats long lines of code. You probably won't be happy with the result and will want to reformat it manually. However, it's still better than a piece of code hiding beyond your screen boundaries. There is an issue "[consider breaking long lines](https://github.com/mvdan/gofumpt/issues/2)" in gofumpt, so there is a chance that soon gofumpt will take care of this as well.
-+ [keyify](https://github.com/dominikh/go-tools/tree/master/cmd/keyify) turns unkeyed struct literals (`T{1, 2, 3}`) into keyed ones (`T{A: 1, B: 2, C: 3}`). The description says it all. Always use keyed struct literals because the order is hard to remember, can change, and so on.
-+ [unconvert](https://github.com/mdempsky/unconvert) removes unnecessary type conversions. It's not that important, but it makes the code a bit cleaner.
-
-See [awesome-go-code-formatters](https://github.com/life4/awesome-go-code-formatters) for more tools.
-
-## Custom rules
-
-If you have a custom rule you'd like to validate or reformat in your project, there are a few linters and tools that can be helpful:
-
-+ [gomodguard](https://github.com/ryancurrah/gomodguard) allows forbidding usage of particular modules or domains.
-+ [ruleguard](https://github.com/quasilyte/go-ruleguard) is not a linter itself but a framework for quickly writing simple rules. It has a [custom DSL](https://github.com/quasilyte/go-ruleguard/blob/master/docs/gorules.md) that can be used to lint code as well as to rewrite specific constructions. It [can be integrated](https://quasilyte.dev/blog/post/ruleguard/#using-from-the-golangci-lint) with golangci-lint via go-critic.
-
-## Integrations
-
-+ [Golangci-lint has integrations with everything](https://golangci-lint.run/usage/integrations/).
-+ Gofmt is the standard and is integrated into every IDE.
-+ [Gofumpt has integration with VSCode and a guide for GoLand](https://github.com/mvdan/gofumpt#installation).
-+ Other tools can be integrated into git pre-commit hooks.
-
-Frameworks for pre-commit hooks:
-
-+ [pre-commit](https://github.com/pre-commit/pre-commit) (Python)
-+ [lefthook](https://github.com/Arkweid/lefthook) (Go)
-+ [husky](https://github.com/typicode/husky) and [lint-staged](https://github.com/okonet/lint-staged) (JS)
-+ [overcommit](https://github.com/sds/overcommit) (Ruby)
+This blog post now has a new address: 
diff --git a/notes-go/monads.md b/notes-go/monads.md
index 5e9c08a..7b09c9b 100644
--- a/notes-go/monads.md
+++ b/notes-go/monads.md
@@ -1,568 +1,3 @@
 # In search of better error handling for Go
 
-This article is an exploration of how to improve error handling in Go. I address here some of the issues while others are left with open questions and some ideas. And, spoiler alert, I do love error handling in Go as it is now. I don't want to replace it with anything like exceptions, but I want to make it slightly better.
-
-I'm going to limit all solutions to what we already have in the language. I'm not talking about any language changes or making a new programming language out of Go. Everything we do here must be possible to shape into a ready-to-use pure Go package.
-
-## How error handling in Go works
-
-If you already know Go, skip to the next section.
-
-In Go, you can return multiple values from a function:
-
-```go
-func split(s string) (string, string) {
-    return "left", "right"
-}
-```
-
-It looks like something other languages call "tuple," but in Go, there is no such type you can operate on. Instead, these are two separate values returned.
-
-You cannot use it as a type anywhere but in the return type:
-
-```go
-// syntax error: expected ')', found ','
-func split(s (string, string))
-```
-
-And you cannot assign it to a single variable:
-
-```go
-// error: cannot initialize 1 variables with 2 values
-a := split("")
-```
-
-You must either assign it to separate variables:
-
-```go
-a, b := split("")
-```
-
-Or pass it into a function that accepts 2 arguments:
-
-```go
-func example(s1 string, s2 string) {}
-
-func main() {
-    example(split(""))
-}
-```
-
-This all is important because that's how error handling in Go works. The convention is to return the error as the last result value from the functions that may fail:
-
-```go
-func connect() (*Connection, error) {
-    return nil, errors.New("cannot connect")
-}
-```
-
-Note that you always need to return something as the first value, even if there is an error. The convention is to return the default value for the type: nil for pointers, 0 for integers, an empty string for strings, etc.
-
-In functions that call a function returning an error, you should explicitly check for an error and propagate it to the caller:
-
-```go
-func createUser() (*User, error) {
-    conn, err := connect()
-    if err != nil {
-        return nil, err
-    }
-    ...
-}
-```
-
-And at the very top or when applicable, you may handle the error somehow. For example, log it or show it to the user:
-
-```go
-func main() {
-    user, err := createUser()
-    if err != nil {
-        fmt.Println(err)
-        os.Exit(1)
-    }
-}
-```
-
-With plain propagation like this, it might be hard to find where the error occurred (imagine that we call `connect` in 10 different places), so the best practice (available since Go 1.13) is to wrap each error before returning it:
-
-```go
-if err != nil {
-    return nil, fmt.Errorf("connect to database: %w", err)
-}
-```
-
-You can think of it as building a stack trace manually.
-
-Another option for error handling is to raise them using `panic`. The panic will include the stack trace, and you can provide it with an error message to be shown. However, Go doesn't have `try/catch`, so handling such errors is hard. You may `defer` a function to be executed when leaving the current function scope. This inner function will be called even if a panic occurs, and it can call the `recover` function to stop the panic, get the panic value, and act on it somehow.
-
-Using `panic` and `recover` is convoluted and dangerous. The `panic` is usually used for errors that shouldn't occur, and `recover` is reserved for the entry point level code, like the web framework, so you won't see it often.
-
-## Why it's cool
-
-The Go error handling gets a lot of criticism for being verbose and tedious to write. However, it often pays back:
-
-1. It's **user-friendly**. When you write a CLI application for a general audience, in case of failure, you want to show a friendly and clean error message instead of a big scary traceback that makes sense only for people who know exactly how the tool works. And in Go, by design, each possible error is manually crafted to be human-readable and friendly.
-1. It's **explicit**. In languages with exceptions, the function execution can be interrupted at any point, and you have to always keep it in mind. In Go, the function returns only when you explicitly write `return` (if you don't count `panic`). That makes it always apparent at first glance which lines of code and in which scenarios might be skipped.
-
-Both exceptions and explicit error handling have their advantages and disadvantages, and programming languages usually pick one of the two, depending on their goals and application. Go is designed to be simple and explicit, even if often verbose, so here we have it. Erlang and Elixir have both types of errors, which allows for more advanced error handling and for picking what fits a specific task the best, but that has a higher learning curve and requires making a conscious choice of what should be used in each particular situation.
-
-## What's the problem
-
-I think verbosity and repetitiveness are often worthy sacrifices for the benefits described above, and I think this fits the Go design goals quite well. No, the main problem I see is that **it's possible and easy not to handle errors**. While exceptions always propagate and explode if not handled explicitly, unhandled errors in Go are simply discarded. Let's look at a few examples:
-
-```go
-createUser()
-```
-
-Here we called a function that may return an error, but we didn't check for it. It's also possible that when we were writing this code, `createUser` didn't return an error. Then we updated it to return one, but we forgot to check all places where it is called and add error handling explicitly.
-
-There is a linter to catch such scenarios called [errcheck](https://github.com/kisielk/errcheck), and you should certainly use it. However, linters aren't code verifiers or type checkers, and there are scenarios when they might miss something.
-
-Also, errcheck allows you to explicitly discard errors, like so:
-
-```go
-user, _ := createUser()
-```
-
-Or like this when the function returns only an `error` without a value:
-
-```go
-_ = something()
-```
-
-You may see it in the following scenarios:
-
-1. The author **assumes that the error may never happen**. In this scenario, it would be much better to `panic` if the assumption is wrong, but explicitly checking for the error and panicking is, as with all error handling in Go, tedious and verbose, and people don't want to do that for errors that "won't ever happen anyway."
-1. The author **doesn't know what to do with the error**. For example, if we cannot write a message to the log file. In that scenario, we can't log the error because the error is about the logger not working (and we'd get into an infinite loop). And we can't panic because a (probably temporary) issue with the logger (which isn't business-critical) isn't worth breaking everything.
-1. The author **copy-pasted an example from the documentation**. Documentation authors often skip error handling to keep examples simple, and people often copy-paste these examples but forget to adjust them.
-
-So, this is what we'll try to fix. Let's try to find a way to force people to check for errors before they can work with the function return value.
-
-## Fixing unintentionally silent errors
-
-The [regexp.Compile](https://pkg.go.dev/regexp#Compile) function compiles the given regular expression and returns a pair `(*Regexp, error)`:
-
-```go
-var rex, _ = regexp.Compile(`[0-9]+`)
-```
-
-Most often, regexes are compiled at the module level at the start of the application so that when it comes to actually using the regex, it is fast. That means we don't have a place to propagate the error to. And we don't want to ignore the error, or it will explode with a nil pointer error when we try to use the regex. So, the only option left is to panic. It works quite well here: since we compile a hardcoded regex and do that at the start of the application, we know for sure it won't panic once we've tested it and shipped it to users or production.
-
-And since this is such a common scenario, the `regexp` module provides [regexp.MustCompile](https://pkg.go.dev/regexp#MustCompile) function that does the same but panics on error instead of returning it. Here is how it looks inside:
-
-```go
-func MustCompile(str string) *Regexp {
-    regexp, err := Compile(str)
-    if err != nil {
-        panic(`regexp: Compile(` + quote(str) + `): ` + err.Error())
-    }
-    return regexp
-}
-```
-
-Writing such a wrapper for every single function in the project doesn't scale well. But, thanks to generics, we can make a generic version of the wrapper that works with any function:
-
-```go
-func Must[T any](val T, err error) T {
-    if err != nil {
-        panic(err)
-    }
-    return val
-}
-```
-
-And that's how we can use it:
-
-```go
-var rex = Must(regexp.Compile(`[0-9]+`))
-```
-
-This is one of the error-handling functions I provide in [genesis](https://github.com/life4/genesis) (the library with generic functions for Go): [lambdas.Must](https://pkg.go.dev/github.com/life4/genesis/lambdas#Must).
-
-That solves the problem of handling errors that must never occur but which we don't want to silently discard. Now, let's see how we can ensure that regular errors are properly handled.
-
-## Meet monads
-
-"Monad" is just a fancy word for "container" or "wrapper". For example, a linked list is a monad that wraps values inside and provides functions like `map` to interact with these values.
-
-The monad we're interested in today is called `Result` in Rust and `Try` in Scala. We'll use the Rust naming here. This monad wraps either the function result or an error. In Go, we can use the power of interfaces for that:
-
-```go
-type Result[T any] interface {
-    IsErr() bool
-    Unwrap() T
-    ErrUnwrap() error
-}
-```
-
-The successful result will be represented by a private struct `ok` constructed with the `Ok` function:
-
-```go
-type ok[T any] struct{ val T }
-
-func Ok[T any](val T) Result[T] {
-    return ok[T]{val}
-}
-
-func (r ok[T]) IsErr() bool      { return false }
-func (r ok[T]) Unwrap() T        { return r.val }
-func (r ok[T]) ErrUnwrap() error { panic("expected error") }
-```
-
-```go
-type err[T any] struct{ val error }
-
-func Err[T any](val error) Result[T] {
-    return err[T]{val}
-}
-
-func (r err[T]) IsErr() bool      { return true }
-func (r err[T]) Unwrap() T        { panic(r.val) }
-func (r err[T]) ErrUnwrap() error { return r.val }
-```
-
-Now, let's take the code from the intro and rewrite it.
-
-Classic Go:
-
-```go
-func connect() (*Connection, error) {
-    return nil, errors.New("cannot connect")
-}
-
-func createUser() (*User, error) {
-    conn, err := connect()
-    if err != nil {
-        return nil, err
-    }
-    return &User{conn}, nil
-}
-```
-
-And now with monads:
-
-```go
-func createUser() Result[User] {
-    res := connect()
-    if res.IsErr() {
-        return Err[User](res.ErrUnwrap())
-    }
-    conn := res.Unwrap()
-    return Ok[User](User{conn})
-}
-```
-
-There are a few things that can be slightly improved by adding a few more methods:
-
-1. A generic `ErrAs` function may be added to convert the error to a result of another type (and panic if it's not an error). So, `Err[User](res.ErrUnwrap())` could be replaced by `ErrAs[User](res)`. (It has to be a standalone function because Go methods can't have their own type parameters.)
-1. The `Errorf` method may be added to format the wrapped error with `fmt.Errorf` (and do nothing if it's not an error).
-
-But these are details.
-
-The good thing about the monadic approach is that it's impossible to use the returned value if an error occurs: the `Unwrap` method will panic if you forget to check for errors first. The bad thing is that it's still possible to forget to check for errors; the only difference is that it will panic instead of going unnoticed. And we don't want our code to panic. And if the error doesn't occur often enough and is not covered by tests (which is almost always the case), that panic is hard to notice until it hits production.
-
-And also, it's even more verbose.
-
-Can we do better than that?
-
-## Try
-
-Let's talk about [the most disliked](https://github.com/golang/go/issues?q=is%3Aissue+sort%3Areactions--1-desc+is%3Aclosed) Go proposal: [A built-in Go error check function, `try`](https://github.com/golang/go/issues/32437). That's exactly how error handling works in Rust. There is a `Result` monad and a `?` postfix operator (before that, Rust also used to have a `try` macro) that propagates the error. With the proposal accepted, our code would look like this:
-
-```go
-func createUser() (*User, error) {
-    conn := try(connect())
-    return &User{conn}, nil
-}
-```
-
-It is short, it ensures that the error is not ignored, and it ensures that the value is not used if there is an error. So why did the proposal get so much negativity?
-
-1. **It was solving the wrong problem**. At the time the proposal was made (Go 1.12), the language didn't have error wrapping. And this proposal is what motivated the Go team to add error wrapping (`fmt.Errorf` with `%w`) in the very next release.
-1. **It wasn't well-communicated**. The proposal was introduced out of the blue by the Go team together with a proposal for contract-based generics, and many people perceived it as an already-made decision. There were many blog posts and comments in the community that Go is a Google language and that whatever we all think doesn't matter. Rejecting both proposals was the best decision by the Go team to not let the community fall apart.
-1. **It didn't address error wrapping**. It suggested using `defer` to wrap errors. But this point isn't hard to fix: we could just let `try` accept a format string for the error as its second argument.
-1. **It's one more way to interrupt the control flow**. There are a few keywords (`break`, `continue`, `return`, and the infamous `goto`) and one function (`panic`) that might interrupt the regular function flow, and adding one more function in the mix would make reading code harder.
-
-These are certainly valid points, but considering that this is how error handling is designed in Rust and that people there love it, I'd say the proposal doesn't deserve all the hate it gets, and having it properly presented now would go differently.
-
-Regardless, can we add something like this ourselves? Well, not exactly. The only way to interrupt a function's control flow from another function is with `panic`. But if we want to turn the panic back into a returned error, we need to `defer` a function that will `recover` from the panic:
-
-```go
-func createUser() (u *User, err error) {
-    defer func() { err = DontPanic() }()
-    ...
-}
-```
-
-And now we have the same problem we had before: if you forget to add this line in a function, you'll get a panic instead of an error.
-
-## Type guards
-
-Let's look again at the first example we have of handling errors with monads:
-
-```go
-res := connect()
-if res.IsErr() {
-    return Err[User](res.ErrUnwrap())
-}
-conn := res.Unwrap()
-```
-
-The problem here is that if we look at the code, we know that inside the `if` block, `res` has the type `err`. And since we always return from this block and there are only 2 types that implement `Result`, after this check `res` can only be `ok`. Can we explain that to the type checker? Then we could define safe-to-use methods that are available on the refined types but not on `Result`:
-
-```go
-func (r ok[T]) Val() T      { return r.val }
-func (r err[T]) Val() error { return r.val }
-```
-
-In Python, we could use [typing.TypeGuard](https://docs.python.org/3/library/typing.html#typing.TypeGuard) to refine the type. So, our code would look something like this:
-
-```python
-class Result(Protocol):
-    def is_err(self) -> TypeGuard[Err]:
-        pass
-```
-
-But in Go, the only way to refine a type is to explicitly use [type switches](https://go.dev/tour/methods/16):
-
-```go
-var conn Connection
-switch res := connect().(type) {
-case ok[Connection]:
-    conn = res.Val()
-case err[Connection]:
-    return Err[User](res.Val())
-}
-```
-
-That's quite similar to what you could do in Rust:
-
-```rust
-let conn = match connect() {
-    Ok(val) => val,
-    Err(err) => return Err(err),
-};
-```
-
-With this approach, we are guaranteed never to get a panic, and we can only unwrap the value after checking the type of the container. However, there is no exhaustiveness check for the `switch` statement in Go. The compiler won't tell us anything if we don't check for `ok` or for `err`. It also won't tell us if we forget to assign the unwrapped value to `conn`. In all these scenarios, we end up with `conn` holding the default value, which is the same result we get when we ignore the error in the classic approach, except now it's much more verbose.
-
-## Piping
-
-In Haskell, there are also no exceptions. But also, there is no `return`. To deal with errors, you only have monads. But these monads are powerful enough to deal with any kind of error in a very concise way. How does Haskell do that?
-
-First, meet the monad `Maybe`. It can be either `Just` containing a value or `Nothing`, which is equivalent to `nil` in Go (or `None`, or `null`, or something like this in other languages). Here is how it is defined:
-
-```haskell
-data Maybe a = Just a | Nothing
-```
-
-Now, let's make a few functions:
-
-```haskell
-data User a = User a deriving Show
-
-connect = Nothing
-user_from_conn conn = Just(User(conn))
-validate_user user = Just(user)
-
-create_user = connect >>= user_from_conn >>= validate_user
-```
-
-The operator `>>=` is where the magic happens. First, it evaluates the value on the left. If it is `Just a`, it extracts the value `a` from it and calls with it the function on the right. If the value on the left is `Nothing`, the operator doesn't call the right function, and `Nothing` is simply returned.
-
-If you call `create_user` on the code above, `connect` will return `Nothing`, so `user_from_conn` and `validate_user` aren't even called, and the function result will be `Nothing`.
-
-Here is a better example that shows how it works:
-
-```haskell
--- returns Just the given value divided by 2 if the value is even,
--- and Nothing otherwise
-half x = if even x then Just (x `div` 2) else Nothing
-
-Just 3 >>= half     -- returns `Nothing`
-Just 4 >>= half     -- returns `Just 2`
-Nothing >>= half    -- returns `Nothing` without even calling `half`
-```
-
-The operator `>>=` is so important that it's part of Haskell's logo. And since using it is so common, Haskell also provides convenient syntactic sugar for piping together multiple function calls:
-
-```haskell
-create_user = do
-    conn <- connect
-    user <- user_from_conn conn
-    validate_user user
-```
-
-If you want to dive deeper, the blog post [Functors, Applicatives, And Monads In Pictures](https://www.adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html) has more nice examples of defining and using monads in Haskell.
-
-Elixir doesn't use the word "monad", but it has a very similar syntax for doing about the same thing:
-
-```elixir
-with {:ok, conn} <- connect(),
-     # This line is executed only if the previous line matched
-     {:ok, user} <- user_from_conn(conn),
-     {:ok, user} <- validate_user(user)
-do
-  # This line is executed only if all lines above matched
-  user
-else
-  # This line is executed if any of the `with` matches failed
-  err -> err
-end
-```
-
-Can we do something similar in Go?
-
-```go
-Pipe(
-    connect,
-    createUser,
-    validateUser,
-)
-```
-
-Well, kinda. There is no type-safe way to implement this function. We could write something like this:
-
-```go
-func Pipe[T any](funcs ...func(T) Result[T]) Result[T] {
-    ...
-}
-```
-
-But that will require all functions to accept and return the same type. It won't even work with our example where the first function returns `Connection`, and the next one returns `User`. To solve it, we could make a separate function for each number of possible arguments:
-
-```go
-func Pipe2[T1, T2, T3 any](
-    f1 func(T1) Result[T2],
-    f2 func(T2) Result[T3],
-) Result[T3] {
-    return Err[T3](errors.New(""))
-}
-```
-
-We could live with that; it's already something. But then there is a problem: all the functions must accept exactly one argument. What if we want to pass an additional argument to one of them? We could wrap it in a new anonymous function just for this purpose, but that's, again, very verbose.
-
-## Flat map
-
-There are a few methods that the monad has to work with the wrapped value:
-
-+ `FMap` (aka `flatMap` or `and_then`) applies the given function to the wrapped value and returns the `Result` produced by that function.
-+ `Map` applies the given function to the wrapped value and returns its result wrapped in `Ok`.
-
-And it's quite easy to implement them:
-
-```go
-type Result[T any] interface {
-    // ...
-    FMap(func(T) Result[T]) Result[T]
-    Map(func(T) T) Result[T]
-}
-
-func (r ok[T])  FMap(f func(T) Result[T]) Result[T] { return f(r.val) }
-func (r ok[T])  Map(f func(T) T)          Result[T] { return Ok(f(r.val)) }
-func (r err[T]) FMap(func(T) Result[T])   Result[T] { return r }
-func (r err[T]) Map(func(T) T)            Result[T] { return r }
-```
-
-And that's how we could use it:
-
-```go
-// THIS CODE DOES NOT COMPILE
-func createUser() Result[User] {
-    return connect().Map(
-        func(conn Connection) User {
-            return User{}
-        },
-    )
-}
-```
-
-This code doesn't compile. The reason is that the method signatures require the function passed into `Map` or `FMap` to return the same (wrapped) type as the current monad. That would work if we accepted and returned `Connection`, but since we return `User`, the compilation fails.
-
-The current implementation of generics [does not permit parametrized methods](https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md#No-parameterized-methods). You can track the progress of the feature in the proposal: [allow type parameters in methods](https://github.com/golang/go/issues/49085). But for now, all we can do is make `Map` and `FMap` functions:
-
-```go
-func Map[T, R any](r Result[T], f func(T) R) Result[R] {
-    //Implementation is left as an exercise for the reader
-}
-
-func FMap[T, R any](r Result[T], f func(T) Result[R]) Result[R] {
-    //Implementation is left as an exercise for the reader
-}
-```
-
-And that's how we can use it:
-
-```go
-func createUser() Result[User] {
-    return Map(
-        connect(),
-        func(conn Connection) User {
-            return User{conn}
-        },
-    )
-}
-```
-
-That works quite well, especially if each step of the algorithm is a function. You'd need to keep assigning each result to a variable, though, to avoid crazy nesting. For example:
-
-```go
-key := openssl.GenerateRSAKey(2048)
-pubKey := FMap(key, MarshalPKIXPublicKeyPEM)
-return FMap(pubKey, Armor)
-```
-
-But in practice, that's a rare situation. You'll often face situations when you need to call a method instead of a function, pass additional parameters, or do something with 2 or more results. In all such cases, you'll need to wrap the operation into an anonymous function that provides the signature expected by `FMap`. And there are 2 problems with it:
-
-First, **bad performance**. Each closure call needs its own stack frame, and now each line of code defines an anonymous function. I don't know how good the compiler is at inlining this stuff or how much exactly it affects performance, but it's worth keeping in mind.
-
-And second, **verbose syntax**. That's where we get to the most disliked open proposal (this blog post walks a dangerous path of controversy): [Lightweight anonymous function syntax](https://github.com/golang/go/issues/21498). Without it, each call to `FMap` is a chore to type and a mess to read.
-
-Compare the same example in (semi-)functional languages and in Go:
-
-Rust:
-
-```rust
-|a, b| { a + b }
-```
-
-Haskell:
-
-```haskell
-(+)
-```
-
-Elixir:
-
-```elixir
-&(&1 + &2)
-```
-
-Go:
-
-```go
-func(a int, b int) int { return a + b }
-```
-
-This is one of those small things that seem unimportant but are crucial for a language to feel functional.
-
-## The best solution
-
-If I could shape Go to my will, I think the best solution would be to have a `Try` method on the `Result` monad that behaves pretty much like the rejected `try` proposal. However, I don't see a way to do that with the means currently available in the language, and I don't see it possible to get a proposal for it merged in the language. So, this blog post currently has more questions than answers.
-
-There are many projects that simply copy monads from other languages (the most famous one being [mo](https://github.com/samber/mo)), but I don't think it works well in the current state of Go because of all the reasons described above. However, things might change if any of these proposals (or their variation) is accepted:
-
-+ [Add sum types / discriminated unions](https://github.com/golang/go/issues/19412)
-+ [Add typed enum support](https://github.com/golang/go/issues/19814)
-+ [Allow type parameters in methods](https://github.com/golang/go/issues/49085)
-+ [Lightweight anonymous function syntax](https://github.com/golang/go/issues/21498)
-+ [A built-in Go error check function](https://github.com/golang/go/issues/32437)
-
-Until then, I don't see user-implemented monads as a good fit for Go.
-
-## What can you already use
-
-There are a few small things from this blog post that you can already bring to production to make your code safer:
-
-+ The [errcheck](https://github.com/kisielk/errcheck) linter is a must-have for any Go project. If you don't already use it, please do. The [golangci-lint](https://golangci-lint.run/) aggregator supports it out of the box and enables it by default.
-+ The `Must` function. You can copy-paste it from [genesis](https://github.com/life4/genesis/blob/v1.2.0/lambdas/errors.go). I hope one day, something similar will be added to the stdlib. Use it instead of discarding an error in every place where an error can "never" occur. Because when it does occur, you'd better know about it.
-+ Always include error handling in all code examples and documentation. That way, when people copy-paste it into their project, they don't forget to handle errors properly.
+This blog post now has a new address: 
diff --git a/notes-go/time.md b/notes-go/time.md
index 5eac9bc..d37e11c 100644
--- a/notes-go/time.md
+++ b/notes-go/time.md
@@ -1,42 +1,3 @@
 # How to work with date and time in Go
 
-## Standard library
-
-OK, here is how you parse a date or time string into a `time.Time` object and format it back:
-
-```go
-t, err := time.Parse(format, timeString)
-t.Format(format)
-```
-
-And this format is the strangest thing in Go. Here is an example of a format:
-
-```
-Mon, 02 Jan 2006 15:04:05 -0700
-```
-
-My first thought was: "Wow, smart Go can take an example of a time string as the format". No. If you pass "2007" instead of "2006", your program will fail at runtime. The format has to use exactly the reference values shown above.
-
-In most cases, you don't have to write these formats yourself because Go provides constants for common time standards, such as `UnixDate`, `RFC822`, and `RFC3339`.
-
-## Parsers
-
-+ [dateparse](https://github.com/araddon/dateparse) -- parses a date or time in an unknown format. Understands many formats, from US and Chinese notations to UNIX timestamps.
-+ [when](https://github.com/olebedev/when) -- a natural language date and time parser. Has rules for English and Russian.
-
-## Formatters
-
-Let's see how the date `2006-01-2` is expressed in different format syntaxes.
-
-+ [jodaTime](https://github.com/vjeantet/jodaTime) -- parse or format time with [Joda syntax](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html). For example, `YYYY-MM-d`.
-+ [strftime](https://github.com/awoodbeck/strftime) -- format time with [C99 syntax](https://en.cppreference.com/w/c/chrono/strftime). For example, `%Y-%m-%d`.
-+ [tuesday](https://github.com/osteele/tuesday) -- format time with [Ruby syntax](https://ruby-doc.org/core-2.4.1/Time.html#method-i-strftime). For example, `%Y-%m-%-d`.
-
-And bonus:
-
-+ [durafmt](https://github.com/hako/durafmt) -- format duration to string like "2 weeks 18 hours 22 minutes 3 seconds".
-
-## Helpers
-
-+ [now](https://github.com/jinzhu/now) -- a set of functions to calculate one time based on another. For example, `now.New(t).BeginningOfMonth()` returns the first second of the month containing `t`.
-+ [timeutil](https://github.com/leekchan/timeutil) -- provides a `timedelta` like Python's and some operations on time.
+This blog post now has a new address: 
diff --git a/notes-python/README.md b/notes-python/README.md
index 069bb41..c01b839 100644
--- a/notes-python/README.md
+++ b/notes-python/README.md
@@ -1,3 +1,3 @@
 # Python
 
-Notes about Python: tools, libraries, best practice.
+All the posts from this section now have a new address: 
diff --git a/notes-python/colon.md b/notes-python/colon.md
index e4b0b9d..23954e3 100644
--- a/notes-python/colon.md
+++ b/notes-python/colon.md
@@ -1,95 +1,3 @@
 # Why does Python have a colon?
 
-Python has a colon (`:`) after all statements that start a new block: `if`, `for`, `while`, `def`, `class`, `with`, `else`. For example:
-
-```python
-if a == 1:
-    b = 2
-```
-
-However, the colon looks redundant. Both a machine and a human can see that a new block starts from the indentation alone, and you can't miss that block anyway. The example above could look like this:
-
-```python
-if a == 1   # SyntaxError
-    b = 2
-```
-
-So, why do we need it?
-
-## Lambda
-
-Python has had this colon since the very first release, v0.9.0, in February 1991. The standard library at that point was more a proof of concept of what you could do in Python than something practical. Since then, only a few modules have survived: [calendar](https://docs.python.org/3/library/calendar.html), [dis](https://docs.python.org/3/library/dis.html), [fnmatch](https://docs.python.org/3/library/fnmatch.html), [glob](https://docs.python.org/3/library/glob.html), `path` (now it's [os.path](https://docs.python.org/3/library/os.path.html)), [shutil](https://docs.python.org/3/library/shutil.html), and `wrandom` (now it's [random](https://docs.python.org/3/library/random.html)). And even they, of course, have changed a lot.
-
-The most interesting and useless module was `lambda.py`. It contained implementations of some lambda calculus functions. Lambda calculus is an incredibly fun exercise in which you implement a whole functional programming language using only lambdas. If you're not familiar with the concept, I recommend David Beazley's workshop [Lambda Calculus](https://youtu.be/5C6sv7-eTKg). But let's move on.
-
-Python didn't have lambda expressions until version 1.0.0, released in January 1994. So, how could it do lambda calculus without lambdas? Let's see. Here is the implementation of the [Church numeral](https://en.wikipedia.org/wiki/Church_encoding) `3` from that module:
-
-```python
-def Thrice(f, x): return f(f(f(x)))
-```
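
As a quick sanity check in modern Python (where `lambda` now exists), `Thrice` really does behave like the Church numeral 3, applying a function three times:

```python
def Thrice(f, x): return f(f(f(x)))

# Applying "add one" three times to zero gives 3,
# which is exactly what the Church numeral 3 encodes.
print(Thrice(lambda n: n + 1, 0))  # 3
```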
-
-Right, that's just a function, but to look more like a lambda, it's written in one line. It's possible to parse such a case because indentation isn't the only marker of a code block: there is also the colon. This works in every Python version, starting from the first release, and that release used the feature a lot. Let's see a few more examples from the standard library (0.9.0).
-
-A function with an inlined loop:
-
-```python
-def norm(a, n, p):
-  a = poly.modulo(a, p)
-  a = a[:]
-  for i in range(len(a)): a[i] = mod(a[i], n)
-  a = poly.normalize(a)
-  return a
-```
-
-A fun hack to get `None` without having a literal for it:
-
-```python
-# Name a constant that may once appear in the language...
-def return_nil(): return
-nil = return_nil()
-```
-
-A long chain of `if`s from the built-in text adventure:
-
-```python
-def decide(here, cmd):
-  key, args = cmd[0], cmd[1:]
-  if not args:
-    if key = N: return here.north()
-    if key = S: return here.south()
-```
-
-An important note: all these examples are from the earliest Python release, when nobody talked about code style yet. [PEP-8](https://www.python.org/dev/peps/pep-0008/), introduced ten years later, recommends avoiding such inlining:
-
-> Compound statements (multiple statements on the same line) are generally discouraged.
-> While sometimes it's okay to put an if/for/while with a small body on the same line, never do this for multi-clause statements. Also avoid folding such long lines!
-
-I'd like to be stricter here: never use it. It always makes code harder to read. Python is a bad language for saving lines of code or even characters; you can save much more space by using Perl, for example.
-
-## ABC
-
-Python's syntax was heavily influenced by the [ABC](https://en.wikipedia.org/wiki/ABC_programming_language) language. And the colon is one of many things that were copied from ABC into Python. However, ABC had no such use for the colon and didn't support inlining blocks. So, the real motivation behind the colon was not inlining but something else. Here is an example of ABC syntax from Wikipedia:
-
-```
-HOW TO RETURN words document:
-   PUT {} IN collection
-   FOR line IN document:
-      FOR word IN split line:
-         IF word not.in collection:
-            INSERT word IN collection
-   RETURN collection
-```
-
-Let’s look at the papers published by the ABC authors while the language was still only an idea, before even the first implementation (which was named `B0`). I think “Designing a beginners’ programming language” is the most interesting and essential one, but the following holds for all their papers from that period: none of the code examples had a colon. For example:
-
-```
-FOR I OVER ROW
-  FOR J OVER COL
-    PUT 0 IN A(I, J)
-```
-
-[Lambert Meertens](https://en.wikipedia.org/wiki/Lambert_Meertens), one of the designers of ABC, gave a short talk at [CWI Lectures 2019](https://www.cwi.nl/events/2019/lectures-2019/cwi-lectures-2019) about ABC and its influence on Python. After the talk, I asked him when and why they came up with the idea of the colon. And he told me a story.
-
-It was late evening; they were standing around a blackboard, a little bit drunk, thinking about the syntax of the future language. They wrote down implementations of bubble sort in a few different syntaxes, and the question was how to choose the best one. One way is to run tests in your head, which they did, but that was not enough for a language "designed for beginners"; they also needed a user test. So, they asked for help from a woman who worked there. I'm not sure what her specialty was, but she definitely wasn't a developer. She had a long look at the blackboard and said that she didn't understand any of the implementations. They explained it a bit, and then she said: "Oooh, now I've got it. That `if` line is not only about the line itself but also covers the code after it". They said "yes", and as a solution they added a colon after the `if` and `for` lines. The colon is used in natural language for explanation and enumeration, even in this sentence. And it made that moment clear for her.
-
-So, the colon makes it possible to write a block in one line, and it was added to ABC (and then to Python) a long time ago, for one non-developer, to make the language read more like natural language.
+This blog post now has a new address: 
diff --git a/notes-python/contracts.md b/notes-python/contracts.md
index 2c1030a..3fee5e7 100644
--- a/notes-python/contracts.md
+++ b/notes-python/contracts.md
@@ -1,165 +1,3 @@
 # Contract-Driven Development
 
-Let's take an incredibly simple function as an example and pretend that it implements incredibly complicated logic.
-
-```python
-def cat(left, right):
-    """Concatenate two given strings.
-    """
-    return left + right
-```
-
-## Tests
-
-How can we be sure this code works? No, it's not obvious. Remember the rules of the game: we pretend the implementation is incredibly complicated. So, we can't say whether it works until we've tested it.
-
-```python
-def test_cat():
-    result = cat(left='abc', right='def')
-    assert result == 'abcdef'
-
-```
-
-Now, run [pytest](https://docs.pytest.org/en/latest/):
-
-```bash
-pytest cat.py
-```
-
-It passes. So, our code works. Right?
-
-## Table tests
-
-Wait, but what about corner cases? What if one string is empty? What if both strings are empty? What if each string has only one character? We need to check more values, and this is where [table-driven tests](https://dave.cheney.net/2019/05/07/prefer-table-driven-tests) save our time. In pytest, we can use [@pytest.mark.parametrize](https://docs.pytest.org/en/latest/parametrize.html#pytest-mark-parametrize) to build such tables.
-
-```python
-import pytest
-
-@pytest.mark.parametrize('left, right, expected', [
-    ('a', 'b', 'ab'),
-    ('', '', ''),
-    ('', 'b', 'b'),
-    ('a', '', 'a'),
-    ('text', 'check', 'textcheck'),
-])
-def test_cat(left, right, expected):
-    result = cat(left=left, right=right)
-    assert result == expected
-```
-
-## Properties
-
-Table tests can grow enormously long, and for every test case, we have to calculate the expected result by hand. For complicated code, that's a lot of work. Can we do better, thinking and writing less? Yes: instead of the _expected result_, we can talk about the _expected properties of the result_. The big difference is that the result differs for different input values, but the properties always stay the same. The coolest thing is that in most cases you already know the result's properties: they are the business requirements, and your code is no more than an implementation of those requirements.
-
-So, what are the properties of our function?
-
-1. The result string starts with the first given string.
-1. The result string ends with the second given string.
-1. The length of the result string equals the sum of the lengths of the given strings.
-
-Now, we can check these properties for the result instead of checking particular values.
-
-```python
-@pytest.mark.parametrize('left, right', [
-    ('a', 'b'),
-    ('', ''),
-    ('', 'b'),
-    ('a', ''),
-    ('text', 'check'),
-])
-def test_cat(left, right):
-    result = cat(left=left, right=right)
-    assert result.startswith(left)
-    assert result.endswith(right)
-    assert len(result) == len(left) + len(right)
-```
-
-## Hypothesis
-
-We've tested a few corner cases but not all of them. What about Unicode strings? What if one string is Unicode, but the other one isn't? What about spaces? What if there is a string termination symbol somewhere? What if both strings contain only digits (the place where JS always surprises)? It's hard to come up with examples for every case where something can go wrong. In theory, you can't even say the code works until you've checked **all** possible values (which is impossible even for our simple function). So, instead of trying to figure out all the possible nasty values ourselves, we can ask the machine to do it. This is where [property-based testing](https://dev.to/jdsteinhauser/intro-to-property-based-testing-2cj8) comes in. In Python, we have a great tool, [hypothesis](https://hypothesis.readthedocs.io/en/latest/), that can generate test examples for us:
-
-```python
-import hypothesis
-from hypothesis import strategies
-
-@hypothesis.given(left=strategies.text(), right=strategies.text())
-def test_cat(left, right):
-    result = cat(left=left, right=right)
-    assert result.startswith(left)
-    assert result.endswith(right)
-    assert len(result) == len(left) + len(right)
-```
-
-## Type annotations
-
-Another cool thing in Python we have to mention before moving on is [type annotations](https://dev.to/dstarner/using-pythons-type-annotations-4cfe):
-
-```python
-def cat(left: str, right: str) -> str:
-    return left + right
-```
-
-Type annotations aren't perfect and can get complicated. What matters most, though, is that humans and machines now know much more about your code. You can run [mypy](https://github.com/python/mypy) to check that you haven't made type errors. And it's not only about catching type errors. Now we can use [hypothesis-auto](https://timothycrosley.github.io/hypothesis-auto/), a wrapper around hypothesis. It infers the parameter types and explains the names and types of the parameters to Hypothesis. So, instead of writing `hypothesis.given(left=strategies.text(), right=strategies.text())`, we can just say `hypothesis_auto.auto_pytest(cat)`.
-
-```python
-import hypothesis_auto
-
-@hypothesis_auto.auto_pytest(cat)
-def test_cat(test_case):
-    result = test_case()
-    left = test_case.parameters.kwargs['left']
-    right = test_case.parameters.kwargs['right']
-    assert result.startswith(left)
-    assert result.endswith(right)
-    assert len(result) == len(left) + len(right)
-```
-
-It looks longer because the parameters are now hidden behind the long name `test_case.parameters.kwargs`, but the most important thing is that we no longer describe the function's inputs at all; the machine does everything. The test isn't about any particular values of the function anymore but only about the function's properties.
-
-## Contracts
-
-Can we make it even simpler? Not really. The implementation can produce some values, and the machine can infer some properties of the result, but someone still has to say which properties are expected and which are not. There is, however, something else we can improve. At this stage, we have type annotations, and, to be honest, they are just a kind of property. The annotations say "the result is a text", and our test properties clarify the result's length, prefix, and suffix. The difference is that type annotations are part of the function itself, which gives some benefits:
-
-1. The machine can check them statically, without actually running the code.
-1. A human can see the types (think "possible value sets") of the arguments and the result.
-
-The [deal](https://github.com/life4/deal) package can do the same for our properties.
-
-```python
-import deal
-
-@deal.ensure(lambda left, right, result: result.startswith(left))
-@deal.ensure(lambda left, right, result: result.endswith(right))
-@deal.ensure(lambda left, right, result: len(result) == len(left) + len(right))
-def cat(left: str, right: str) -> str:
-    return left + right
-```
-
-Now, these aren't just properties but [contracts](https://en.wikipedia.org/wiki/Design_by_contract). They can be checked at runtime, they simplify tests, and they tell humans about the function's behavior. And the tests for this implementation are incredibly trivial:
-
-```python
-@pytest.mark.parametrize('case', deal.cases(cat))
-def test_cat(case):
-    case()
-```
-
-You can read more about writing tests for contracted code in the [deal documentation](https://deal.readthedocs.io/testing.html).
-
-## Contracts for machines
-
-The most exciting thing is that deal can check contracts statically, like mypy checks annotations. However, contracts can be arbitrary code, while types are [standardized and limited](https://docs.python.org/3/library/typing.html). Although the machine can't check everything (yet), it can catch some trivial cases. For example:
-
-```python
-@deal.post(lambda result: 0 <= result <= 1)
-def sin(x):
-    return 2
-```
-
-And when we run the [deal linter](https://deal.readthedocs.io/linter.html) on this code, we see a contract violation error:
-
-```bash
-❯ flake8 --show-source sin.py
-sin.py:6:5: DEAL011: post contract error
-    return 2
-    ^
-```
+This blog post now has a new address: 
diff --git a/notes-python/counter.md b/notes-python/counter.md
index 27377f6..c8bab66 100644
--- a/notes-python/counter.md
+++ b/notes-python/counter.md
@@ -1,216 +1,3 @@
 # Everything about Counter
 
-I think `collections.Counter` is the most magical and powerful container in Python. It is a smart and beautiful [multiset](https://en.wikipedia.org/wiki/Multiset) implementation. Let's have a look at what `Counter` can do.
-
-## Basic usage
-
-The basic use of Counter is to count elements (for example, words) in a sequence and get the N most popular:
-
-```python
-from collections import Counter
-
-words = 'to be or not to be'.split()
-
-c = Counter(words)
-# Counter({'to': 2, 'be': 2, 'or': 1, 'not': 1})
-
-c.most_common()
-# [('to', 2), ('be', 2), ('or', 1), ('not', 1)]
-
-c.most_common(3)
-# [('to', 2), ('be', 2), ('or', 1)]
-```
-
-Ok, now let's dive into Counter features.
-
-## Init
-
-`Counter` is a subclass of `dict`, so you can initialize it from a sequence, as in the "Basic usage" section, or in any way you'd initialize a `dict`:
-
-```python
-Counter()
-Counter('gallahad')
-Counter({'a': 4, 'b': 2})
-Counter(a=4, b=2)
-Counter({1: 2, 3: 4})
-```
-
-You can use any `int` as a value. Yes, zero and negative values too.
-
-## Manage values
-
-You can get, set, and delete values from a Counter:
-
-```python
-c = Counter(first=2, second=3)
-c['junk'] = 4
-c  # Counter({'first': 2, 'second': 3, 'junk': 4})
-c['junk']  # 4
-del c['junk']
-c  # Counter({'first': 2, 'second': 3})
-```
-
-## Default value
-
-If you try to get a missing value, Counter returns 0 instead of raising a `KeyError`:
-
-```python
-c['missing']
-# 0
-```
-
-It allows you to work with a `Counter` as with a `defaultdict(int)`.
-
-Use `in` if you want to check that a Counter contains a key:
-
-```python
-'missing' in c
-# False
-
-'first' in c
-# True
-```
-
-## Dict
-
-Counter has all the `dict` methods:
-
-```python
-list(c.items())
-# [('first', 2), ('second', 3)]
-
-list(c.keys())
-# ['first', 'second']
-
-list(c.values())
-# [2, 3]
-```
-
-The `.update()` method is smarter than `dict`'s and accepts anything you can pass to the constructor. Also, it adds counts together instead of replacing them:
-
-```python
-c = Counter({'first': 1})
-c.update(Counter({'second': 2}))
-c.update({'third': 3})
-c.update(fourth=5)
-c.update(fourth=-1)
-c.update(['fifth'] * 6)
-c
-# Counter({'first': 1, 'second': 2, 'third': 3, 'fourth': 4, 'fifth': 6})
-```
-
-## Arithmetic operations
-
-```python
-c1 = Counter(first=1, common=2)
-c2 = Counter(common=3, second=4)
-
-c1 + c2
-# Counter({'first': 1, 'common': 5, 'second': 4})
-
-c1 - c2
-# Counter({'first': 1})
-
-c2 - c1
-# Counter({'common': 1, 'second': 4})
-```
-
-As you can see, arithmetic operations drop non-positive values:
-
-```python
-Counter(a=-2) - Counter(a=-1)
-# Counter()
-
-Counter(a=-2) + Counter(a=-1)
-# Counter()
-```
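
By the way, if you need to keep negative counts, the `subtract` method (unlike the `-` operator) preserves zeros and negatives:

```python
from collections import Counter

c = Counter(a=2, b=1)
# In-place subtraction that does NOT drop non-positive counts.
c.subtract(Counter(a=3, b=1))
c
# Counter({'b': 0, 'a': -1})
```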
-
-## Set operations
-
-Counter supports set operations:
-
-```python
-c1 = Counter(first=1, common=2)
-c2 = Counter(common=3, second=4)
-
-c1 & c2
-# Counter({'common': 2})
-
-c2 & c1
-# Counter({'common': 2})
-
-c1 | c2
-# Counter({'first': 1, 'common': 3, 'second': 4})
-
-c2 | c1
-# Counter({'common': 3, 'second': 4, 'first': 1})
-```
-
-Intersection (`&`) takes the minimum of the two counts, and union (`|`) takes the maximum. These operations also drop non-positive values:
-
-```python
-Counter(a=-1) | Counter(b=-2)
-# Counter()
-```
-
-## A little bit more about non-positive values
-
-From source code:
-
-```python
-# Outputs guaranteed to only include positive counts.
-# To strip negative and zero counts, add-in an empty counter:
-c += Counter()
-```
-
-You can use this trick to drop negative values or flip their sign:
-
-```python
-+Counter(a=-1)
-# Counter()
-
-+Counter(a=1)
-# Counter({'a': 1})
-
--Counter(a=-1)
-# Counter({'a': 1})
-
--Counter(a=1)
-# Counter()
-```
-
-## Get elements
-
-Some ways to get elements from Counter:
-
-```python
-c = Counter(first=1, second=2, third=3)
-
-list(c.items())
-# [('first', 1), ('second', 2), ('third', 3)]
-
-# iterator over elements repeating each as many times as its count
-list(c.elements())
-# ['first', 'second', 'second', 'third', 'third', 'third']
-
-c.most_common()
-# [('third', 3), ('second', 2), ('first', 1)]
-
-c.most_common(2)
-# [('third', 3), ('second', 2)]
-```
-
-## Conclusion
-
-`Counter`:
-
-* Is a dictionary with a default value,
-* Supports set and arithmetic operations,
-* Counts elements in a sequence fast, thanks to a C implementation of the counting helper,
-* Can return the top N (or all) elements sorted by count,
-* Can merge 2 or more Counters,
-* Drops negative values.
-
-That's beautiful, isn't it?
+This blog post now has a new address: 
diff --git a/notes-python/iterators.md b/notes-python/iterators.md
index a3dbd4f..5bd290d 100644
--- a/notes-python/iterators.md
+++ b/notes-python/iterators.md
@@ -1,157 +1,3 @@
 # About iterators and iterables
 
-## How to create your own iterator or iterable?
-
-In Python, we have two complementary terms: iterator and iterable.
-
-An **iterable** is an object that has an `__iter__` method returning an iterator, or that defines a `__getitem__` method accepting sequential indexes starting from zero (and raising an `IndexError` when the index is no longer valid). So, you get an iterator from an iterable object.
-
-An **iterator** is an object with a `__next__` method. An iterator's own `__iter__` simply returns `self`, so every iterator is also iterable.
-
-
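For illustration, here is a minimal hand-written pair; the `Countdown`/`CountdownIterator` names are invented for this sketch:

```python
class CountdownIterator:
    """An iterator: has __next__; its __iter__ returns self."""
    def __init__(self, current):
        self.current = current

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1


class Countdown:
    """An iterable: its __iter__ returns a fresh iterator every time."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        return CountdownIterator(self.start)


print(list(Countdown(3)))
# [3, 2, 1]
```

Because `Countdown.__iter__` returns a new iterator each time, the same `Countdown` object can be iterated more than once.
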
-## How to get an iterator from an iterable?
-
-You can get an iterator from any iterable via the `iter` function:
-
-```python
-In [1]: i = iter([1, 2])
-
-In [2]: i
-Out[2]: <list_iterator at 0x...>
-```
-
-You can iterate it manually via `next` function:
-
-```python
-In [3]: next(i)
-Out[3]: 1
-
-In [4]: next(i)
-Out[4]: 2
-
-In [5]: next(i)
-StopIteration:
-```
-
-Many functions, such as `map`, `functools.reduce`, `itertools.product`, etc., return an iterator:
-
-```python
-In [14]: m = map(str, range(3))
-
-In [15]: next(m)
-Out[15]: '0'
-
-In [16]: m
-Out[16]: <map at 0x...>
-```
-
-
-## How to get an iterable from an iterator?
-
-You can convert an iterator into any iterable you want:
-
-```python
-In [23]: list(iter([1, 2, 3]))
-Out[23]: [1, 2, 3]
-
-In [24]: tuple(iter([1, 2, 3]))
-Out[24]: (1, 2, 3)
-
-In [25]: set(iter([1, 2, 3]))
-Out[25]: {1, 2, 3}
-```
-
-
-## What about `range`?
-
-`range` is not an iterator. It is an iterable:
-
-```python
-In [17]: r = range(10)
-
-In [18]: next(r)
-TypeError: 'range' object is not an iterator
-
-In [19]: i = iter(r)
-
-In [20]: next(i)
-Out[20]: 0
-```
-
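You can verify both claims with the ABCs from `collections.abc`:

```python
from collections.abc import Iterable, Iterator

r = range(10)
print(isinstance(r, Iterable), isinstance(r, Iterator))
# True False
print(isinstance(iter(r), Iterator))
# True
```
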
-## Why do we need iterators?
-
-Firstly, iterators save your memory:
-
-```python
-In [10]: import sys
-
-In [11]: sys.getsizeof(iter(range(10)))
-Out[11]: 48
-
-In [12]: sys.getsizeof(iter(range(1000)))
-Out[12]: 48
-
-In [13]: sys.getsizeof(list(range(1000)))
-Out[13]: 9112
-
-```
-
-Also, sometimes we don't need all the elements from an iterable. For example, `in` stops the iteration when the element is found, `any` stops on the first `True`, and `all` stops on the first `False`.
-
-```python
-In [6]: %timeit 10 * 4 in [i for i in range(10 * 10)]
-2.81 µs ± 11.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
-
-In [7]: %timeit 10 * 4 in (i for i in range(10 * 10))
-2.07 µs ± 13.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
-```
-
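A quick way to see the short-circuiting is to record which elements actually get checked (the `is_even` helper and `seen` list are made up for this sketch):

```python
seen = []

def is_even(n):
    seen.append(n)
    return n % 2 == 0

# `any` stops as soon as is_even returns True: only 1 and 2 are checked
assert any(is_even(n) for n in [1, 2, 3, 4])
print(seen)
# [1, 2]
```
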
-## Why do we need iterables?
-
-An iterable, as opposed to an iterator, can implement some useful methods for better performance. For example, the `__contains__` method for the `in` operator:
-
-```python
-In [26]: %timeit 10 ** 8 in range(10 ** 10)
-623 ns ± 0.89 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
-
-In [27]: %timeit 10 ** 8 in iter(range(10 ** 10))
-5.22 s ± 5.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
-```
-
-## Why is this important?
-
-You can iterate through iterator only once:
-
-```python
-In [14]: r = range(5)
-
-In [15]: list(r)
-Out[15]: [0, 1, 2, 3, 4]
-
-In [16]: list(r)
-Out[16]: [0, 1, 2, 3, 4]
-
-In [17]: i = iter(range(5))
-
-In [18]: list(i)
-Out[18]: [0, 1, 2, 3, 4]
-
-In [19]: list(i)
-Out[19]: []
-```
-
-Iterables can waste your memory:
-
-```python
-In [21]: list(range(10 ** 10))
-MemoryError:
-```
-
-## Further reading
-
-1. Stack Overflow: [What exactly are iterator, iterable, and iteration?](https://stackoverflow.com/questions/9884132/what-exactly-are-iterator-iterable-and-iteration)
-1. Python etc: [How to make iterator and iterable](https://t.me/pythonetc/149)
-1. Documentation: [About iterators](https://docs.python.org/3/tutorial/classes.html#iterators) and [generators](https://docs.python.org/3/tutorial/classes.html#generators)
-1. Documentation: [iterator API](https://docs.python.org/dev/library/stdtypes.html#iterator-types)
-1. Documentation: [Iterators HOWTO](https://docs.python.org/dev/howto/functional.html#iterators)
-1. ITGram: [WTF Python](https://t.me/itgram_channel/32)
+This blog post now has a new address: 
diff --git a/notes-python/lint-n-format.md b/notes-python/lint-n-format.md
index 36b7ebe..dd18f42 100644
--- a/notes-python/lint-n-format.md
+++ b/notes-python/lint-n-format.md
@@ -1,49 +1,3 @@
 # Python linters and formatters
 
-## Linters
-
-+ [pyflakes](https://github.com/PyCQA/pyflakes) -- checks only obvious bugs and never code style. There are no opinionated checks. Pyflakes should be enabled in any project, and all errors must be fixed.
-+ [pycodestyle](https://github.com/PyCQA/pycodestyle) -- the most important code style checker. It controls compatibility with [PEP-8](https://www.python.org/dev/peps/pep-0008/), the de-facto standard for how Python code should look. Initially, the tool was called pep8 but was [renamed after Guido's request](https://github.com/PyCQA/pycodestyle/issues/466).
-+ [flake8](https://gitlab.com/pycqa/flake8) is the most famous Python linter. First of all, it is a framework allowing you to write custom linters that can be configured and run in a unified way. Pyflakes and pycodestyle are default dependencies of Flake8. If you just install and run Flake8 in a clean environment, you'll see their checks.
-+ [flakehell](https://github.com/life4/flakehell) is a wrapper around flake8. It provides additional commands and features, nicer output, and more control over plugins and checks.
-+ [PyLint](https://github.com/PyCQA/pylint) is an alternative linter with many checks that are missed in flake8 plugins. Some of them are opinionated and can be difficult to satisfy. However, most of the checks are useful. FlakeHell supports PyLint as a plugin. The important thing to remember is that PyLint is slow because of heavy inference of types and values.
-+ [wemake-python-styleguide](https://github.com/wemake-services/wemake-python-styleguide) is a flake8/flakehell plugin, providing a lot of checks. It is fast, strict, and helpful, finds many bugs, style issues, enforces consistency. It is very opinionated, so you probably want to disable some (most of the) checks.
-
-See [awesome-flake8-extensions](https://github.com/DmytroLitvinov/awesome-flake8-extensions) for more plugins.
-
-## Type annotations
-
-Python is a dynamically typed language. It makes development (and especially rapid prototyping) fast and easy but can complicate maintenance and lead to unexpected bugs in production. The solution is to add type annotations that can be later statically analyzed. In short, it is great, and you should give it a try. I won't dive deep into this topic here. Instead, there are a few helpful links to get started with type annotations:
-
-+ [typing module documentation](https://docs.python.org/3/library/typing.html)
-+ [Mypy documentation](https://mypy.readthedocs.io/en/stable/index.html)
-+ [Type hints cheat sheet](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html)
-+ [awesome-python-typing](https://github.com/typeddjango/awesome-python-typing)
-
-## Formatters
-
-Python has quite a few code formatters with different code styles and philosophies behind them.
-
-There are 3 all-in-one code formatters, all of which are supported by VSCode out of the box:
-
-+ [autopep8](https://github.com/hhatto/autopep8) is the oldest and the least opinionated Python code formatter. It formats the code to follow [PEP-8](https://www.python.org/dev/peps/pep-0008/) and nothing else. Under the hood, it uses the mentioned above [pycodestyle](https://github.com/PyCQA/pycodestyle). So, if the project passes pycodestyle (or flake8) checks, you can safely use autopep8.
-+ [black](https://github.com/python/black) is an "uncompromising" and opinionated code formatter. The code style is close to PEP-8 (there are a few exceptions), but it also has an opinion about pretty much everything. It has some issues that make it a bad choice for an experienced team. However, it can be a good choice for an inexperienced team, an open-source project, or for quickly formatting old and dirty code. See [Don't use Black in your team](https://articles.orsinium.dev/p/notes-python/black/) for more information.
-+ [yapf](https://github.com/google/yapf) is a code formatter from Google. Like black, it reformats everything. The main difference is that every small detail in yapf is configurable. It makes sense to use yapf for a project with a code style that is different from PEP-8. However, if you have a choice, prefer using PEP-8 for all projects.
-
-A few small but helpful formatters:
-
-+ [isort](https://github.com/PyCQA/isort) groups and sorts imports. Usually, the imports section in Python is quite messy, and isort brings an order here. It is a powerful tool and every stylistic decision there can be configured. Use isort.
-+ [add-trailing-comma](https://github.com/asottile/add-trailing-comma) adds trailing commas to multiline function calls, function signatures, and literals. Also, it fixes indentation for closing braces.
-+ [autoflake](https://github.com/myint/autoflake) removes unused imports and variables. It helps clean up messy code.
-+ [docformatter](https://github.com/myint/docformatter) formats docstrings according to [PEP-257](https://www.python.org/dev/peps/pep-0257/).
-+ [pyupgrade](https://github.com/asottile/pyupgrade) changes the code to use newer Python features. It will replace old comprehensions style, old formatting via `%`, drop Unicode and long literals, simplify `super` calls, and much more.
-+ [unify](https://github.com/myint/unify) formats string literals to use one style of quotes (single or double).
-
-See [awesome-python-code-formatters](https://github.com/life4/awesome-python-code-formatters) for more tools.
-
-## Integrations
-
-+ Flake8 is famous and has an integration (or a hacky way to integrate, described somewhere) with everything.
-+ [FlakeHell can pretend to be flake8 for integrations](https://flakehell.readthedocs.io/ide.html)
-+ VSCode Python extension supports flake8, pylint, isort, autopep8, black, and yapf out of the box. Just select what you want to use.
-+ To integrate something else, like a less popular code formatter, use [pre-commit](https://github.com/pre-commit/pre-commit).
+This blog post now has a new address: 
diff --git a/notes-python/logging.md b/notes-python/logging.md
index cbfc7e6..3fb7507 100644
--- a/notes-python/logging.md
+++ b/notes-python/logging.md
@@ -1,132 +1,3 @@
 # Beautiful logging in Python
 
-Hi everyone. I'm Gram. Today I wanna talk to you about the bad, the ugly, and the good of Python logging.
-
-First, I want to say a few words about logging structure.
-
-+ Loggers expose the interface that application code directly uses. A logger defines a set of handlers.
-+ Handlers send the log records to the appropriate destination. A handler has a list of filters and one formatter.
-+ Filters filter log records.
-+ Formatters specify the layout of log records in the final output.
-
-Ok, how to configure it?
-
-The bad. Just call some functions and methods, exactly like in the official documentation. Never do it. Never.
-
-```python
-import logging
-
-logger = logging.getLogger('spam_application')
-logger.setLevel(logging.DEBUG)
-
-fh = logging.FileHandler('spam.log')
-fh.setLevel(logging.DEBUG)
-
-ch = logging.StreamHandler()
-ch.setLevel(logging.ERROR)
-
-formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-
-fh.setFormatter(formatter)
-ch.setFormatter(formatter)
-
-logger.addHandler(fh)
-logger.addHandler(ch)
-```
-
-The ugly. Logging can also be configured via an INI config.
-
-```ini
-[loggers]
-keys=root,simpleExample
-
-[handlers]
-keys=consoleHandler
-
-[formatters]
-keys=simpleFormatter
-
-[logger_simpleExample]
-level=DEBUG
-handlers=consoleHandler
-qualname=simpleExample
-propagate=0
-
-[handler_consoleHandler]
-class=StreamHandler
-level=DEBUG
-formatter=simpleFormatter
-args=(sys.stdout,)
-
-[formatter_simpleFormatter]
-format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
-
-```
-
-The good. Configure it via a dict config. It's readable and a little bit extendable. It's exactly how Django does it.
-
-```python
-LOGGING = {
-    'version': 1,
-    'disable_existing_loggers': True,
-    'formatters': {
-        'simple': {
-            'format': '%(levelname)s %(message)s'
-        },
-    },
-    'filters': {
-        'special': {
-            '()': 'project.logging.SpecialFilter',
-            'foo': 'bar',
-        }
-    },
-    'handlers': {
-        'console':{
-            'level':'DEBUG',
-            'class':'logging.StreamHandler',
-            'formatter': 'simple'
-        },
-    },
-    'loggers': {
-        'myproject.custom': {
-            'handlers': ['console'],
-            'level': 'INFO',
-            'filters': ['special']
-        }
-    }
-}
-
-```
-
-The perfect. Store this dict in a TOML file. It's readable, extendable, standardized.
-
-```toml
-version = 1
-disable_existing_loggers = false
-
-[formatters.simple]
-format = '%(levelname)s %(message)s'
-
-[formatters.json]
-format = '%(levelname)s %(name)s %(module)s %(lineno)s %(message)s'
-class = 'pythonjsonlogger.jsonlogger.JsonFormatter'
-
-[filters.level]
-"()" = "logging_helpers.LevelFilter"
-
-[handlers.stdout]
-level = "DEBUG"
-class = "logging.StreamHandler"
-stream = "ext://sys.stdout"
-formatter = "simple"
-filters = ["level"]
-
-[handlers.json]
-level = "DEBUG"
-class = "logging.StreamHandler"
-stream = "ext://sys.stdout"
-formatter = "json"
-
-[loggers.project]
-handlers = ["stdout"]
-level = "DEBUG"
-```
-
-And that's how you can use it. Thank you.
+This blog post now has a new address: 
diff --git a/notes-python/packaging.md b/notes-python/packaging.md
index 31133bb..3cabc12 100644
--- a/notes-python/packaging.md
+++ b/notes-python/packaging.md
@@ -1,150 +1,3 @@
 # Python packaging for your team
 
-I love [decoupling](https://bit.ly/2m07ZOj). It makes maintaining a project easier. We have 2 main ways to do it:
-
-1. [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules). This is a good concept but sometimes very confusing. Also, you must commit an update to the parent project for every submodule change.
-2. Packaging. I think this solution is better because you already use many packages in your project. You can easily package your project and explain this concept to any junior.
-
-This article is about creating a [python package](https://packaging.python.org/) without pain.
-
-
-## Setuptools
-
-Most python packages you are using contain a [setup.py](https://packaging.python.org/tutorials/packaging-projects/#creating-setup-py) file in their root directory. This file describes the package name, version, requirements (required third-party packages), package content, and some optional information. Just call `setuptools.setup(...)` with this info as kwargs. That's enough for package distribution. If you have a setup.py, you can already distribute the package. For example, upload it to [pypi.org](https://pypi.org/).
-
-
-## Pip and virtualenv
-
-[Pip](https://pip.pypa.io/en/stable/) -- the de-facto standard approach for installing python packages in your system. Simple and well known.
-
-By default, pip installs all packages system-wide for all users and requires root privileges to do so. [Don't sudo pip](https://pages.charlesreid1.com/dont-sudo-pip/). Use virtualenv to install packages into isolated environments. Besides the security troubles, different projects may require incompatible versions of the same package.
-
-Also, I recommend using [pipsi](https://github.com/mitsuhiko/pipsi) for global entry points like [isort](https://github.com/timothycrosley/isort). Yeah, pipsi uses virtualenv.
-
-
-
-## Editable packages
-
-Sometimes you want to get the actual package version directly from another of your repositories. This is very useful for non-distributable projects. Setuptools doesn't support it, but you can do it via pip:
-
-```bash
-pip install -e git+git@bitbucket.org:...git@master#egg=package_name
-```
-
-And you can pin this and any other requirements into [requirements.txt](https://caremad.io/posts/2013/07/setup-vs-requirement/):
-
-```
--e git+git@bitbucket.org:...git@master#egg=package_name
-...
-deal
-Django>=1.11
-...
-```
-
-Also, pip supports a constraints.txt with the same syntax for pinning versions of optional dependencies:
-
-```
-djangorestframework>=3.5
-```
-
-To install these dependencies just pass them into pip:
-
-```bash
-pip install -r requirements.txt -c constraints.txt
-```
-
-Requirements.txt is very useful when you don't want to create a setup.py for your internal projects.
-
-
-## Pip-tools
-
-In most commercial projects you have at least 2 environments:
-
-1. Development. Here you get the latest available package versions, and develop and test the project with them.
-2. Production. Here you must be able to recreate the same environment in which you tested the code. These requirements must not be updated until you have tested your code against the new versions. Also, other developers can get the same environment as you, because it's already tested, which saves their time.
-
-[Pip-tools](https://github.com/jazzband/pip-tools) provides some tools for this workflow.
-
-
-## Pipfile and pipenv
-
-Pip developers decided to [improve requirements.txt](https://github.com/pypa/pip/issues/1795) by grouping dependencies and enabling native support for version locking. As a result, they created the [Pipfile specification](https://github.com/pypa/pipfile) and [pipenv](https://docs.pipenv.org/) -- a tool for working with it. Pipenv can lock versions in `Pipfile.lock`, manage virtual environments, install and resolve dependencies. So cool, but for distributable packages, you must duplicate the main dependencies into setup.py.
-
-
-## Poetry
-
-[Poetry](https://github.com/sdispater/poetry) -- a beautiful alternative to setuptools and pip. You can just place all the package info and all the requirements into [pyproject.toml](https://poetry.eustace.io/docs/pyproject/). That's all. Beautiful. But poetry has some problems:
-
-1. It's not compatible with setuptools. As a result, your users can't install your project without poetry. Everybody has setuptools, but many users don't know about poetry. You can use it for your internal projects, but poetry can't install dependencies from a file or repository without generating `pyproject.toml`. Yeah, if you fork and improve some project, you must make an [sdist](https://docs.python.org/3/distutils/sourcedist.html) for every change and bump the version for all projects that depend on it. Or manually convert the project's `setup.py` to `pyproject.toml`.
-2. Poetry [doesn't create a virtual environment](https://poetry.eustace.io/docs/basic-usage/#poetry-and-virtualenvs) if you are already inside a virtualenv. So, poetry doesn't create an environment if you install poetry via pipsi. Pipenv, as opposed to poetry, always creates a virtual environment for a project and [can choose the right python version](https://docs.pipenv.org/advanced/#automatic-python-installation).
-3. Poetry uses [version specifiers](https://poetry.eustace.io/docs/versions/#version-constraints) incompatible with [PEP-440](https://www.python.org/dev/peps/pep-0440/#version-specifiers). This makes me sad.
-
-For backward compatibility, you can generate setup.py and requirements.txt from pyproject.toml via [poetry-setup](https://github.com/orsinium/poetry-setup).
-
-
-## Flit
-
-If pyproject.toml is so cool, why does only poetry use it? It's not the only one: [Flit](https://github.com/takluyver/flit) supports pyproject.toml too. This is a very simple tool with only 4 commands:
-
-1. **init** -- interactively create pyproject.toml.
-2. **build** -- make sdist or wheel.
-3. **publish** -- upload package into PyPI (or another repository).
-4. **install** -- install a local package into the current environment.
-
-That's all. And that's enough in common cases. Flit uses pip for package installation. And flit is listed in the [alternatives by PyPA](https://packaging.python.org/key_projects/?#flit). As with poetry, you need to manage virtual environments with other tools. But this package has a significant disadvantage: flit can't lock dependencies.
-
-
-## Let's make the best packaging for your team!
-
-All the solutions above have some problems. Let's fix them!
-
-
-### Poetry based packaging
-
-1. Always create a virtual environment for each project. I recommend using [pew](https://github.com/berdario/pew) or [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) for a better experience.
-2. Use [pyenv](https://github.com/pyenv/pyenv) or [pythonz](https://github.com/saghul/pythonz) for managing python versions. Also, I recommend trying [pypy](https://pypy.org/) for some of your small and well-tested projects. It's really fast.
-3. Sometimes a setuptools-based project needs to depend on your package. Use [poetry-setup](https://github.com/orsinium/poetry-setup) for compatibility with it.
-
-
-### Pipfile or requirements.txt based packaging
-
-As we remember, with pipenv we need to duplicate all the requirements in the old format for setup.py. Let's improve it! I've created the [install-requires](https://github.com/orsinium/install-requires) project that can help you convert requirements between formats. But which format should you choose?
-
-1. `requirements.txt`. This is the most popular requirements format for projects. Anyone can use it as they want.
-2. `Pipfile.lock`. Pipenv has a better requirements lock than pip-tools, and you should use it for better security. But if you create a package from a project, you plan to use this package in other projects. So, if a project depends on more than one package with locked requirements, pipenv can't resolve these dependencies. For example, one package locks `Django==1.9`, but another uses `Django==1.11`. Do not use it for distributable packages: PyPA recommends [placing unlocked versions into install_requires](https://packaging.python.org/discussions/install-requires-vs-requirements/).
-3. `Pipfile`. This is our choice. A modern format with some problems, but also with many features. And, most importantly, simple and comfortable. I recommend using it for internal python packages in your company.
-
-The install-requires repository contains an [example](https://github.com/orsinium/install-requires/blob/master/example/setup.py) of how you can convert requirements from Pipfile to a setup.py-compatible format on the fly.
-
-
-## Is setuptools dead?
-
-Many developers (me too) love poetry because it uses a beautiful format for describing project metadata as a setup.py alternative. But setuptools allows you to [use setup.cfg instead of setup.py](https://setuptools.readthedocs.io/en/latest/setuptools.html#configuring-setup-using-setup-cfg-files), and it's also beautiful. Furthermore, [isort](https://github.com/timothycrosley/isort) and [flake8](http://flake8.pycqa.org/en/latest/) support setup.cfg too.
-
-Setuptools supports requirements from VCS, file or archive via [dependency_links parameter](https://setuptools.readthedocs.io/en/latest/setuptools.html#dependencies-that-aren-t-in-pypi). And requirements grouping via [extras_require](https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies).
-
-So, what's wrong with setuptools? I think this tool has some problems:
-
-1. No native virtualenv or python version management. Poetry can't do it either. But we have pew, virtualenvwrapper, pyenv, pythonz, and many other useful tools. This is the UNIX way.
-2. No dependency locking. Poetry, pipenv, and pip (`pip freeze`) can lock dependencies from their own files. Setuptools can't. This is because setuptools is for packages, not projects.
-3. Setup.cfg is good, but [pyproject.toml is better](https://github.com/pypa/packaging-problems/issues/29#issuecomment-375845650). Setuptools [will support pyproject.toml](https://github.com/pypa/setuptools/issues/1160) and deprecate setup.py and setup.cfg. Also, [pip will support it too](https://pip.pypa.io/en/stable/reference/pip/?highlight=pyproject.toml%20#build-system-interface). And it's cool!
-
-
-## Further reading
-
-1. For pros and cons see issues for projects from article:
-    1. [poetry](https://github.com/sdispater/poetry/issues?q=is%3Aopen+is%3Aissue+label%3Aenhancement)
-    1. [pipenv](https://github.com/pypa/pipenv/issues?q=is%3Aopen+is%3Aissue+label%3Aenhancement)
-    1. [setuptools](https://github.com/pypa/setuptools/issues?q=is%3Aopen+is%3Aissue+label%3Aenhancement)
-1. [How to install other Python version](https://realpython.com/installing-python/) (sometimes you don't need pyenv).
-1. [Installing packages using pip and virtualenv](https://packaging.python.org/guides/installing-using-pip-and-virtualenv/).
-1. [Beautiful example for setuptools configuring via setup.cfg](https://github.com/4383/sampleproject/blob/c503301e4b381790e5a9125c3dd636921052e8e1/setup.cfg).
-1. [Setuptools documentation](https://setuptools.readthedocs.io/en/latest/setuptools.html).
-1. [install_requires vs requirements files](https://packaging.python.org/discussions/install-requires-vs-requirements/).
-
-
-## Further reading (RUS)
-
-1. [About poetry](https://t.me/itgram_channel/152).
-1. [About PyPy and python dev environment](https://t.me/itgram_channel/97).
-1. [About toml format](https://t.me/itgram_channel/113).
+This blog post now has a new address: 
diff --git a/notes-python/reddit.md b/notes-python/reddit.md
index 9b30946..274b7eb 100644
--- a/notes-python/reddit.md
+++ b/notes-python/reddit.md
@@ -1,164 +1,3 @@
 # Analyzing Reddit posts
 
-## Dataset
-
-First of all, we need a dataset. We could use the Reddit API, but it limits how many posts you can retrieve. Luckily, you can find a dump of everything from Reddit at [files.pushshift.io/reddit](https://files.pushshift.io/reddit/). Let's download a few datasets:
-
-```bash
-wget https://files.pushshift.io/reddit/submissions/RS_2020-02.zst
-wget https://files.pushshift.io/reddit/submissions/RS_2020-03.zst
-```
-
-Next, we need to read the data and select only the subreddits and columns we're interested in. Each dataset is huge even compressed (over 5 GB), and uncompressed it takes up to 20 times more. So, instead, we will read every line one by one, decide if we need it, and only then process it. We can do it using the [zstandard](https://pypi.org/project/zstandard/) library (and [tqdm](https://tqdm.github.io/) to see how it is going).
-
-```python
-from datetime import datetime
-import json
-import io
-
-import zstandard
-from tqdm import tqdm
-
-paths = [
-    '/home/gram/Downloads/RS_2020-02.zst',
-    '/home/gram/Downloads/RS_2020-03.zst',
-]
-subreddits = {'python', 'datascience'}
-posts = []
-
-for path in paths:
-    with open(path, 'rb') as fh:
-        dctx = zstandard.ZstdDecompressor()
-        stream_reader = dctx.stream_reader(fh)
-        text_stream = io.TextIOWrapper(stream_reader, encoding='utf-8')
-        for line in tqdm(text_stream):
-            post = json.loads(line)
-            if post['subreddit'].lower() not in subreddits:
-                continue
-            posts.append((
-                datetime.fromtimestamp(post['created_utc']),
-                post['domain'],
-                post['num_comments'],
-                post['id'],
-                post['score'],
-                post['subreddit'],
-                post['title'],
-            ))
-```
-
-In the real world, you'd better use a [NamedTuple](https://docs.python.org/3/library/typing.html#typing.NamedTuple) to store the filtered records. However, it's ok to sacrifice readability for simplicity in one-time scripts like this.
-
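A sketch of what that could look like; the `Post` class and the sample values are invented here, and the fields mirror the tuple above:

```python
from datetime import datetime
from typing import NamedTuple

class Post(NamedTuple):
    created: datetime
    domain: str
    comments: int
    id: str
    score: int
    subreddit: str
    title: str

post = Post(datetime(2020, 2, 1), 'self.python', 3, 'abc123', 42, 'python', 'hello')
print(post.score, post.subreddit)
# 42 python
```
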
-On my machine, it took about half an hour to complete. So, take a break.
-
-## Pandas
-
-Let's convert the filtered data into a [pandas](https://pandas.pydata.org/) data frame:
-
-```python
-import pandas
-df = pandas.DataFrame(posts, columns=['created', 'domain', 'comments', 'id', 'score', 'subreddit', 'title'])
-df.head()
-```
-
-At this point, we can save the data frame, so later we can get back to work without the need to filter data again:
-
-```python
-# dump
-df.to_pickle('filtered.bin')
-# load
-df = pandas.read_pickle('filtered.bin')
-```
-
-## Numbers
-
-Let's see some numbers. Feel free to play with the data as you like. For example, this is the percentage of posts with a rating above a threshold:
-
-```python
-threshold = 5
-subreddit = 'python'
-
-(df[df.subreddit.str.lower() == subreddit.lower()].score > threshold).mean()
-```
-
-## Table
-
-Now, we'll make a new dataset where the total number of posts and the number of survived posts (those with a rating above 5) are calculated for every hour:
-
-```python
-# filter the subreddit
-df2 = df[df.subreddit.str.lower() == subreddit.lower()]
-# leave only the hour and the flag if the post is survived
-df2 = pandas.DataFrame(dict(
-    hour=df2.created.apply(lambda x: x.hour),
-    survived=df2.score > threshold,
-))
-# group by hour, find how many survived and how many in total posts in every hour
-df2 = df2.groupby(['hour'], as_index=False)
-df2 = pandas.DataFrame(dict(
-    hour=range(24),
-    survived=df2.survived.sum().survived,
-    total=df2.count().survived,
-))
-```
-
-## Charts
-
-Now, let's draw charts. This is what you need:
-
-+ [Jupyter Lab](https://jupyterlab.readthedocs.io/en/stable/) to easier display and debug the charts.
-+ [plotnine](https://plotnine.readthedocs.io/en/stable/) to draw.
-
-Chart for total and survived posts:
-
-```python
-import plotnine as gg
-(
-    gg.ggplot(df2)
-    + gg.theme_light()
-    + gg.geom_col(gg.aes(x='hour', y='total', fill='"#3498db"'))
-    + gg.geom_col(gg.aes(x='hour', y='survived', fill='"#c0392b"'))
-    # make a custom legend
-    + gg.scale_fill_manual(
-        name=f'rating >{threshold}',
-        guide='legend',
-        values=['#3498db', '#c0392b'],
-        labels=['no', 'yes'],
-    )
-    + gg.xlab('hour (UTC)')
-    + gg.ylab('posts')
-    + gg.ggtitle(f'Posts in /r/{subreddit} per hour\nand how many got rating above {threshold}')
-)
-```
-
-Chart for ratio:
-
-```python
-(
-    gg.ggplot(df2)
-    + gg.theme_light()
-    + gg.geom_col(gg.aes(x='hour', y='survived / total * 100'), fill="#c0392b")
-    + gg.geom_text(
-        gg.aes(x='hour', y=1, label='survived / total * 100'),
-        va='bottom', ha='center', angle=90, format_string='{:.0f}%', color='white',
-    )
-    # scale the chart by oy to be always 0-100%
-    # so charts for different subreddits can be visually compared
-    + gg.ylim(0, 100)
-    + gg.xlab('hour (UTC)')
-    + gg.ylab(f'% of posts with rating >{threshold}')
-    + gg.ggtitle(f'Posts in /r/{subreddit} with rating >{threshold} per hour')
-)
-```
-
-## Results
-
-Here is what I've got for some subreddits.
-
-![r/datascience](./assets/datascience-total.png)
-![r/datascience](./assets/datascience-ratio.png)
-
-![r/python](./assets/python-total.png)
-![r/python](./assets/python-ratio.png)
-
-![r/golang](./assets/golang-total.png)
-![r/golang](./assets/golang-ratio.png)
+This blog post now has a new address: 
diff --git a/notes-python/round.md b/notes-python/round.md
index 2a8a53f..d572e24 100644
--- a/notes-python/round.md
+++ b/notes-python/round.md
@@ -1,87 +1,3 @@
 # Everything about round()
 
-(This post was originally published in the [@pythonetc](https://t.me/pythonetc/325) telegram channel)
-
-The `round` function rounds a number to a given precision in decimal digits.
-
-```python
->>> round(1.2)
-1
->>> round(1.8)
-2
->>> round(1.228, 1)
-1.2
-```
-
-You can also use a negative precision:
-
-```python
->>> round(413.77, -1)
-410.0
->>> round(413.77, -2)
-400.0
-```
-
-`round` returns a value of the input number's type:
-
-```python
->>> type(round(2, 1))
-<class 'int'>
-
->>> type(round(2.0, 1))
-<class 'float'>
-
->>> type(round(Decimal(2), 1))
-<class 'decimal.Decimal'>
-
->>> type(round(Fraction(2), 1))
-<class 'fractions.Fraction'>
-```
-
-For your own classes, you can define how `round` works via the `__round__` method:
-
-```python
->>> class Number(int):
-...   def __round__(self, p=-1000):
-...     return p
-...
->>> round(Number(2))
--1000
->>> round(Number(2), -2)
--2
-```
-
-Values are rounded to the closest multiple of `10 ** (-precision)`. For example, for `precision=1` the value will be rounded to a multiple of 0.1 (`round(0.63, 1)` returns 0.6). If two multiples are equally close, rounding is done toward the even choice:
-
-```python
->>> round(0.5)
-0
->>> round(1.5)
-2
-```
-
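A few more halfway cases make the pattern visible:

```python
# round-half-to-even ("banker's rounding"): ties go to the nearest even number
print([round(x) for x in (0.5, 1.5, 2.5, 3.5)])
# [0, 2, 2, 4]
```
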
-Sometimes rounding of floats can be a little bit surprising:
-
-```python
->>> round(2.85, 1)
-2.9
-```
-
-This is because most decimal fractions [can't be represented exactly as a float](https://docs.python.org/3.7/tutorial/floatingpoint.html):
-
-```python
->>> format(2.85, '.64f')
-'2.8500000000000000888178419700125232338905334472656250000000000000'
-```
-
-If you want to round half up you can use `decimal.Decimal`:
-
-```python
->>> from decimal import Decimal, ROUND_HALF_UP
->>> Decimal(1.5).quantize(0, ROUND_HALF_UP)
-Decimal('2')
->>> Decimal(2.85).quantize(Decimal('1.0'), ROUND_HALF_UP)
-Decimal('2.9')
->>> Decimal(2.84).quantize(Decimal('1.0'), ROUND_HALF_UP)
-Decimal('2.8')
-```
+This blog post now has a new address: