[ { "title": "Node.js is easy: 4 steps to a REST API", "date": "2020-06-28T00:00:00.000Z", "slug": "/posts/nodejs-is-actually-really-easy/index", "content": "# Node.js is easy: 4 steps to a REST API\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/nodejs-is-actually-really-easy/index)_\n\n\n\n![Landing image](https://dotmethod.me/posts/nodejs-is-actually-really-easy/index/landing.png)\n\nJavascript is a great scripting language. It's easy to learn the basics, it's vastly popular, has a huge ecosystem of frameworks, plugins, packages... And it runs everywhere, including server-side.\n\nIn this post I'll show you how easy it is to get started from zero to a REST API.\n\nLet's go do it!\n\n## Step 1: Install Node.js\n\nTakes a minute to do. You go to [nodejs.org](https://nodejs.org/), download and install it. Done!\n\n## Step 2: Initiate project\n\nIn a terminal, navigate to a location of your choice.\n\n```bash\ncd C:/workspace # for Windows\ncd ~/workspace # for Mac & Linux\n```\n\nAnd execute this command with `npm` (the nodejs package manager):\n\n```bash\nnpm init\n```\n\n_Enter the project information as you wish_\n\n## Step 3: Create a js file\n\nYour entire web-app could technically run from as little as one javascript file. So start simple, start from 1 file: `index.js`\n\n```js\nconst express = require(\"express\");\nconst app = express();\nconst port = 3000;\n\napp.get(\"/\", (req, res) => res.send(\"Hello World!\"));\n\napp.listen(port, () =>\n console.log(`Server listening at http://localhost:${port}`)\n);\n```\n\nYou'll notice on the first line, the name `express`, as an external dependancy. This is a good time to install it. In the terminal:\n\n```bash\nnpm install express\n```\n\n## Step 4: Run it\n\nAgain, in the terminal, run:\n\n```bash\nnode index.js\n```\n\nNow open a browser and navigate to [http://localhost:3000](http://localhost:3000)\n\nThat's it, you're basically done. You can expand this API by adding new endpoints such as:\n\n```js\napp.get(\"/user\", (req, res) => {\n res.send({ message: \"json body\" });\n});\n```\n\nFrom here you get the point... Expand the service, grow it, add your business logic, etc. There's more to say about it, but this should get you started with Node.js for now.\n" }, { "title": "You'll want to use it all day: Weather in the terminal", "date": "2020-06-30T00:00:00.000Z", "slug": "/posts/weather-in-the-terminal/index", "content": "# You'll want to use it all day: Weather in the terminal\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/weather-in-the-terminal/index)_\n\n\n\n![Screenshot of weather report](https://dotmethod.me/posts/weather-in-the-terminal/index/landing.png)\n\nOne command and you can get the weather in any terminal:\n\n```bash\ncurl wttr.in\n```\n\nHow awesome is that?\n\nThe project and the documentation is available on [GitHub](https://github.com/chubin/wttr.in). In reality you can do a number of things with this API\n\n```bash\ncurl wttr.in\ncurl wttr.in/Copenhagen\ncurl wttr.in/:help\ncurl wttr.in/Copenhagen?format=3\ncurl v2.wttr.in # my favourite\ncurl wttr.in/:help\n```\n\nAll in all, great project, really useful tool, I use it every day. 
Hope you find it useful too.\n" }, { "title": "How to install Go on Linux (and Mac) in 4 easy steps", "date": "2020-07-02T00:00:00.000Z", "slug": "/posts/install-go-on-linux-and-mac-easy/index", "content": "# How to install Go on Linux (and Mac) in 4 easy steps\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/install-go-on-linux-and-mac-easy/index)_\n\n\n\nFirst off, go to the official download page - [https://golang.org/dl/](https://golang.org/dl/). Right-click and copy the link address from the option that matches your operating system (Mac, Linux).\n\n![Screenshot of go download page](https://dotmethod.me/posts/install-go-on-linux-and-mac-easy/index/screenshot1.png)\n\nOpen up a terminal in the `/tmp` directory, then download the link that you just copied:\n\n```bash\ncd /tmp\nwget https://golang.org/dl/go1.14.4.linux-amd64.tar.gz\n```\n\nNext up, un-archive the binaries and move them to the `/usr/local` directory:\n\n```bash\nsudo tar -xvf go1.14.4.linux-amd64.tar.gz\nsudo mv go /usr/local\n```\n\nLastly, open up your `.profile` file (or `.bash_profile`) with your favourite text editor, and copy in the following:\n\n```bash\nexport PATH=$PATH:/usr/local/go/bin\n```\n\n## The end!\n\nCheck that your install worked by opening up a new terminal and trying a basic go command, such as:\n\n```bash\ngo version\n```\n" }, { "title": "Pass: the free, secure password manager", "date": "2020-09-03T00:00:00.000Z", "slug": "/posts/pass-linux-password-manager/index", "content": "# Pass: the free, secure password manager\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/pass-linux-password-manager/index)_\n\n\n\nHow do you manage your passwords? Google's password service? 1Password? LastPass? None of the above?\n\nOh well... all of that is bloat anyway. Here's how you do it in the terminal, easily and securely.\n\n## 1. Install pass\n\nDepending on what OS you're on, install pass with your package manager. I'll assume we're on Ubuntu, but you can do the same on macOS, Arch Linux, Debian, etc.\n\n```bash\nsudo apt-get install pass\n```\n\n## 2. Generate a GPG key\n\nIn order to encrypt your passwords, you need to generate a GPG key.\n\n```bash\ngpg --gen-key\n```\n\nYou don't need to worry about what it is, you just need to know that it's the secret to opening your passwords. If you lose the key, you lose your passwords.\n\nAnother thing to pay attention to: memorize the password that you protected the GPG key with. Once again, if you lose this master password, you lose all the passwords.\n\nTo summarize:\n\n- Keep the **GPG** key **safe**\n- **Remember** your master **password**\n\n...or else you **lose** everything ☠️\n\n## 3. Create a password store\n\nThis is where your passwords will be securely stored.\n\n**Use the same email address** that you used when creating the GPG key in step 2.\n\n```bash\npass init myemail@example.com\n```\n\n## 4. Put a password in the store\n\nLet's say... a password under the name of `facebook`, for your Facebook account:\n\n```bash\npass insert facebook\n```\n\n## 5. Check that you did a good job\n\nTry to see what's inside the password store:\n\n```bash\npass\npass ls # does the same thing\npass list # also does the same thing\n```\n\nCopy out the password:\n\n```bash\npass -c facebook\n```\n\n## You're done!\n\nThat's it. Rinse and repeat, add more passwords, and keep them safe.
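\n\nYou can also organize entries into folders, which pass simply stores as directories on disk. For example:\n\n```bash\npass insert email/gmail\npass insert work/github\npass ls # shows the tree structure\n```\n\n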
You can even generate random secure passwords so that you don't have to remember them:\n\n```bash\npass generate gmail\n```\n\nAnd there are many other things you can do with it. Use the `--help` flag for convenience:\n\n```bash\npass --help\n```\n" }, { "title": "Poetry: Python package manager for pro projects", "date": "2021-02-27T00:00:00.000Z", "slug": "/posts/poetry-python-package-manager/index", "content": "# Poetry: Python package manager for pro projects\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/poetry-python-package-manager/index)_\n\n\n\n![Cover image](https://dotmethod.me/posts/poetry-python-package-manager/index/cover.png)\n\n##### TLDR: Python package management is a mess. Poetry can fix that.\n\nIf you've ever worked with python before, you may have struggled to wrap your head around all the different ways that you can manage python external dependencies.\n\nIt's one of the most annoying things about the python development experience. Out of the box, when you install python, you get `pip`, which you can use to install packages... globally. But what if you run multiple python projects on your machine? What if your different projects have incompatible package versions?\n\nThe python answer to this issue - virtual environments. At this point, you'll find yourself utterly confused - what the fuck are virtual environments? What is venv? What is virtualenv?\n\nThen you get to the topic of package managers - should it be conda? Should it be pipenv?\n\n## Forget it all. Poetry is the way\n\n**Definition:** \"Poetry is a tool for dependency management and packaging in Python. It allows you to declare the libraries your project depends on and it will manage (install/update) them for you.\"\n\nIn short, poetry helps you with:\n\n- A simple package management CLI\n- Simple virtual environment management\n- Isolated packages\n- Repeatable installs and builds\n- An all-in-one experience\n\nSeriously, poetry is the only tool you'll need to manage your python codebases.\n\n## 1. Install\n\nFirst off, poetry needs to be installed. No biggie, it's a one-off job.\n\nSee the installation instructions [here](https://python-poetry.org/docs/#installation)\n\nFor linux or mac, installation can be done via:\n\n```bash\ncurl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -\n```\n\n## 2. Start a new project\n\nCreate a new python project, via the poetry cli:\n\n```bash\n# create new project\npoetry new demo\n\n# or initiate poetry for an existing codebase\npoetry init\n```\n\n## 3. Install a dependency\n\nSo far so good. Now let's install a dependency - any dependency. I chose numpy for this example:\n\n```bash\npoetry add numpy\n```\n\nIf you have a look at the project file structure, you'll notice a new file called `pyproject.toml`:\n\n```toml\n[tool.poetry]\nname = \"demo\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Mihai Nueleanu \"]\n\n[tool.poetry.dependencies]\npython = \"^3.8\"\nnumpy = \"^1.20.1\"\n\n[tool.poetry.dev-dependencies]\n\n[build-system]\nrequires = [\"poetry-core>=1.0.0\"]\nbuild-backend = \"poetry.core.masonry.api\"\n```\n\nThis file is where poetry stores a record of what packages it's supposed to install, alongside a bit of metadata about the project itself.\n\n
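That record is also what makes the project reproducible: when a teammate (or a CI job) clones the codebase later, restoring the whole environment is a single command:\n\n```bash\npoetry install\n```\n\n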
_Note: You'll also notice another file called `poetry.lock`, written in a format that's more jibber-jabber than pyproject.toml. Don't worry about it... it's not meant to be touched; it's only there to help poetry install the right things._\n\n## 4. A quick test drive\n\nNow, bring up your favorite text editor, create a python file, and try to see if the new package is usable. Here's a quick example, in a file called `hello.py`:\n\n```python\nfrom numpy import random\n\nprint(random.randn(5, 5))\n```\n\nYou can then run the script from any terminal:\n\n```bash\n> poetry run python hello.py\n[[ 0.50567771 -1.13465036 1.11417991 0.0550343 0.76588176]\n [-0.2489556 1.43252123 1.70904471 1.28042324 1.512682 ]\n [-0.6974979 0.19129948 -1.01325838 0.83965527 -0.35822316]\n [ 1.97695199 -0.2095286 -0.60275442 0.57499226 -0.99219837]\n [ 0.3240001 -1.01672498 0.4284231 -1.13982977 -1.30249861]]\n```\n\n## 5. In conclusion\n\nPoetry is easy to get started with. And if you incorporate it in your development workflow, it will save you hefty amounts of headache, as well as make things more transparent in your project.\n\nYour projects will be repeatable and consistent across environments, whether you're running inside a docker container, a classic virtual machine, or any other hosting platform.\n\nAnd that's just scratching the surface of all the goodies that come with poetry. Here's a quick glance at some other stuff poetry can do:\n\n```bash\n> poetry --help\n\nAVAILABLE COMMANDS\n about Shows information about Poetry.\n add Adds a new dependency to pyproject.toml.\n build Builds a package, as a tarball and a wheel by default.\n cache Interact with Poetry's cache\n check Checks the validity of the pyproject.toml file.\n config Manages configuration settings.\n debug Debug various elements of Poetry.\n env Interact with Poetry's project environments.\n export Exports the lock file to alternative formats.\n help Display the manual of a command\n init Creates a basic pyproject.toml file in the current directory.\n install Installs the project dependencies.\n lock Locks the project dependencies.\n new Creates a new Python project at <path>.\n publish Publishes a package to a remote repository.\n remove Removes a package from the project dependencies.\n run Runs a command in the appropriate environment.\n search Searches for packages on remote repositories.\n self Interact with Poetry directly.\n shell Spawns a shell within the virtual environment.\n show Shows information about packages.\n update Update the dependencies according to the pyproject.toml file.\n version Shows the version of the project or bumps it when a valid bump rule is provided.\n```\n\nThat's all for now. Get coding!\n" }, { "title": "Nodejs + Typescript + Redis Cache = <3", "date": "2021-03-05T00:00:00.000Z", "slug": "/posts/nodejs-typescript-redis-cache/index", "content": "# Nodejs + Typescript + Redis Cache = <3\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/nodejs-typescript-redis-cache/index)_\n\n\n\n![Cover image](https://dotmethod.me/posts/nodejs-typescript-redis-cache/index/cover.png)\n\nIn this post I want to share a simple redis-based cache layer, which you can put in front of various workloads to save either execution time or compute resources.\n\n## Usage example\n\nHow will we use this cache?\n\nLet's start with an example.
Say we have a \"products\" endpoint which returns the products which should be displayed as recommendations to the user on our fictitious online shop.\n\nHere's the example in practice:\n\n```ts\n// Define the cache\nconst cache = new RedisCache(60);\n\napp.get(\"/products/recommended\", async (req: Request, res: Response) => {\n // Cache by userId as key\n const products = await cache.get(req.userId, () => {\n // Here's the function which refreshes the cache\n return RecommendationModel.find(req.userId)\n )};\n\n res.send(products);\n});\n```\n\nFirstly, we want to define the cache with a \"time to live\" for each value we put in. Secondly, we want to cache each recommendation list, by the user id. And then finally we want to give the cache a way to refresh if the value is not cached yet.\n\nSounds simple? Let's do it!\n\n## The dependancies\n\nFor this exercise, we only want to pull in 1 package - redis. So in a command line, we do:\n\n```bash\nnpm install redis\n```\n\n## The implementation\n\n```ts\nimport { RedisClient, createClient } from \"redis\";\nimport { env } from \"../env\";\n\nexport class RedisCache {\n private readonly cache: RedisClient;\n private ttl: number;\n\n constructor(ttl: number) {\n // [1] define ttl and create redis connection\n this.ttl = ttl;\n this.cache = createClient({\n host: env.REDIS_HOST,\n password: env.REDIS_PASSWORD,\n });\n\n this.cache.on(\"connect\", () => {\n console.log(`Redis connection established`);\n });\n\n this.cache.on(\"error\", (error) => {\n console.error(`Redis error, service degraded: ${error}`);\n });\n }\n\n // [2] generic function, takes `fetcher` argument which is meant to refresh the cache\n async get(key: string, fetcher: () => Promise): Promise {\n // [3] if we're not connected to redis, bypass cache\n if (!this.cache.connected) {\n return await fetcher();\n }\n\n return new Promise((resolve, reject) => {\n this.cache.get(key, async (err, value) => {\n if (err) return reject(err);\n if (value) {\n // [4] if value is found in cache, return it\n return resolve(JSON.parse(value));\n }\n\n // [5] if value is not in cache, fetch it and return it\n const result = await fetcher();\n this.cache.set(\n key,\n JSON.stringify(result),\n \"EX\",\n this.ttl,\n (err, reply) => {\n if (err) return reject(err);\n }\n );\n return resolve(result);\n });\n });\n }\n\n // [6]\n del(key: string) {\n this.cache.del(key);\n }\n\n flush() {\n this.cache.flushall();\n }\n}\n```\n\nAlright, now let's break it down.\n\n[1] First off, notice the class definition `RedisCache`. It has as a constructor argument a ttl (time to live), which is meant for deciding how long the cache should be valid for. Which is quite a convenient setup, for instance if you want different instances of this cache, with different TTL configurations.\n\n[2] Secondly, we define a generic `get` function, which conveniently returns a promise with the same generic type we've put in. Notice also the `fetcher` function which is passed as an argument - this function is the way we can refresh the cache, in case the value is not yet stored, or the previous value has already expired.\n\n[3] In case the redis cache is not connected (for example if the connection is in an error state), we \"fail\" gracefully by simply returning the original `fetcher` function - which essentially means we bypass the cache.\n\n[4] We try to see if the key exists in the cache. 
Alright, now let's break it down.\n\n[1] First off, notice the class definition `RedisCache`. It takes a ttl (time to live) as a constructor argument, which decides how long each cached value stays valid. This is quite a convenient setup, for instance if you want different instances of this cache with different TTL configurations.\n\n[2] Secondly, we define a generic `get` function, which conveniently returns a promise with the same generic type we've put in. Notice also the `fetcher` function which is passed as an argument - this function is the way we can refresh the cache, in case the value is not yet stored, or the previous value has already expired.\n\n[3] In case the redis cache is not connected (for example if the connection is in an error state), we \"fail\" gracefully by simply falling back to the original `fetcher` call - which essentially means we bypass the cache.\n\n[4] We try to see if the key exists in the cache. If it does exist, we return the value.\n\n[5] If the key does not exist in the cache, we first execute the `fetcher` function to fetch the value that we're trying to cache. We then save this value in the cache, and as a last step, we return it.\n\nThat is all. Enjoy!\n" }, { "title": "Favorite Metallica song from albums 1-5", "date": "2021-03-06T00:00:00.000Z", "slug": "/posts/favorite-metallica-song-from-each-album/index", "content": "# Favorite Metallica song from albums 1-5\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/favorite-metallica-song-from-each-album/index)_\n\n\n\nA list of my favorite Metallica songs, one from each album, from albums 1 to 5. As a disclaimer: I'm not saying with definitive conviction that this list is 100% reflective of my favoritism for all eternity. I think that all tracks on these 5 albums have merit, and it could very well happen that I would feel differently on another day.\n\nHowever, this post is about the songs that have really resonated with me, while listening to them over the years. And it's an attempt at describing (very shortly) the stories and the feelings that these songs bring out in me.\n\n#### Kill 'Em All - Motorbreath\n\n\n\nA real banger of a song. Electrifying and utterly overwhelming speed and energy. If it were a steak, it'd be so raw it'd still be running around the field.\n\nThis song is simple and to the point: you do anything you want and you don't let anything or anyone stand in your way. Or you'll blast right through them. You don't give a fuck, you're young and are willing to risk it all.\n\nI still get goosebumps listening to this song. It's a real kick in the guts.\n\n#### Ride the Lightning - The Call of Ktulu\n\n\n\nThe last track of the album, and the most sophisticated piece they'd put out up to that point, in a sea of heavy metal and thrash. This instrumental takes you on a journey, or rather an emotional rollercoaster.\n\nYou start with quiet, ominous peace, diving into unsettling wonderment and then straight into bone-crushing struggle. Then the struggle turns to fight, and then you fight some more. It's relentless, it keeps pushing and pushing, and you push back.\n\nThen comes the plot twist. You have a choice to make - do you have what it takes to make the final strike? You reflect, you look deep down into the abyss below your feet.\n\nYou reach absolution. Closure, the grand finale.\n\n#### Master of Puppets - Orion\n\n\n\nFade in.\n\nYou're on a mission. Calculated, robotic, stepping towards the target. Stumbling, but you're still on your feet. A moment of intensity comes your way and you look it straight in the eye with cool composure, as you blast through unabated. You've become good at this fight, you've perfected it to the point of selfish pleasure.\n\nThen, an interruption. A moment of purity, beauty; a moment of love. Tears roll down in awe of what's come across your path. All is harmony - nature, spirit, human. Keep moving forward and you will flourish, you think.\n\nBut over the edge of the cliff it goes, disintegrating into pieces. You tumble down alongside it. You feel the rage and it's back into the trenches with you. And so it goes... Fade out...\n\n#### Garage Days Re-Revisited - Helpless\n\n\n\nA cover album which is a bit of a throwback to Metallica's roots and inspiration. Helpless, by Diamond Head, is a really tight track in Metallica's adaptation.
It has energy, it has youth, it has ass-kickin'.\n\nMy favorite moment of this track comes right around minute 4:00, where it builds up to an absolute explosion of double bass drums, which keeps the energy going and going, all until the bitter end.\n\n#### ...And Justice For All - To Live Is to Die\n\n\n\nThis song comes with a story. A story of struggle, tyranny and redemption. One of the most touching tracks I've ever listened to, and a really underrated gem.\n\nIt all starts peaceful. Harmonies are singing the tranquillity of existence. Natural order is queen, and human spirit is free.\n\nThen tyranny shows its ugly face. Man is subjugated, broken down, humiliated and run down into the ground to his last breath. And then he's beaten down some more. The tyrant is cold, calm and composed. Tyranny strikes man again and again, with a punch, a snap, and a kick, slapped around a few more times. There's a push back, there's a fight. But regardless, it's all the same, tyranny won't loosen its grip so easily.\n\nTyranny is here to stay. Days pass, months pass, even years pass. All hope has been long forgotten. Or has it?\n\nMan pulls himself up, once again - broken, beaten, scarred. His spirit is still inside, alive and kicking. One last stand, one last push. The resistance is rough, and with a blow to the head, man is disoriented, his heart is singing with hopes of tranquility once again, but his body's broken. Will he have the will to bring himself back again? Perhaps.\n\nMan wipes the blood off his lips and looks up at the enemy of his existence. How much has he endured? How much has he suffered under tyranny? Man is determined, full of rage, and full of hope.\n\n\"When a man lies, he murders some part of the world. These are the pale deaths which men miscall their lives. All this I cannot bear to witness any longer. Cannot the Kingdom of Salvation take me home?\"\n\nThe fight moves on and revenge is boiling in the man's blood. Nothing can stop him, he's got nothing left, he's got nothing to lose. And the man pummels at the tyrant. The spectacle is gruesome. Man strikes away harder and harder, smacking tyranny out of his land and out of his soul. It goes on and on, revenge has finally come, and man can't stop the punching. Deranged laughter comes out in brutal joy as he strikes away at the tyrant. He's prevailed, but it's not enough. It's a brutal show.\n\n...\n\nTranquility has come over the land once again and the sun is rising now, after many years of darkness. Man is not the same. The fight has taken him across into the darkest corners of his soul, and has brought out the worst in his humanity. But his people are now safe and free. He may be broken and his soul may be in pieces, but he's guarded the future of his own kind. He's far from good, but he is at peace.\n\nFade out. Harmonies. Ominous harmonies.\n" }, { "title": "Minimalist man pages with TLDR", "date": "2021-03-10T00:00:00.000Z", "slug": "/posts/linux-tldr/index", "content": "# Minimalist man pages with TLDR\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/linux-tldr/index)_\n\n\n\n## Installation\n\n```shell\nnpm install -g tldr\n```\n\n
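The npm client keeps a local cache of all the pages; if a lookup ever comes up empty or outdated, refresh the cache first:\n\n```shell\ntldr --update\n```\n\n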
## Example\n\nHow does `tar` work again? I always forget...\n\n```shell\ntldr tar\n```\n\n**Result:**\n\n```shell\n tar\n\n Archiving utility.\n Often combined with a compression method, such as gzip or bzip2.\n More information: https://www.gnu.org/software/tar.\n\n - [c]reate an archive from [f]iles:\n tar cf target.tar file1 file2 file3\n\n - [c]reate a g[z]ipped archive from [f]iles:\n tar czf target.tar.gz file1 file2 file3\n\n - [c]reate a g[z]ipped archive from a directory using relative paths:\n tar czf target.tar.gz --directory=path/to/directory .\n\n - E[x]tract a (compressed) archive [f]ile into the current directory:\n tar xf source.tar[.gz|.bz2|.xz]\n\n - E[x]tract a (compressed) archive [f]ile into the target directory:\n tar xf source.tar[.gz|.bz2|.xz] --directory=directory\n\n - [c]reate a compressed archive from [f]iles, using [a]rchive suffix to determine the compression program:\n tar caf target.tar.xz file1 file2 file3\n\n - Lis[t] the contents of a tar [f]ile [v]erbosely:\n tar tvf source.tar\n\n - E[x]tract [f]iles matching a pattern:\n tar xf source.tar --wildcards \"*.html\"\n```\n\nLanding page: [https://tldr.sh/](https://tldr.sh)\nGitHub page: [https://github.com/tldr-pages/tldr](https://github.com/tldr-pages/tldr)\n\nThat's it, keeping it minimalist.\n" }, { "title": "Port forward via ssh", "date": "2021-03-14T00:00:00.000Z", "slug": "/posts/ssh-port-forward/index", "content": "# Port forward via ssh\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/ssh-port-forward/index)_\n\n\n\n![Cover image](https://dotmethod.me/posts/ssh-port-forward/index/cover.png)\n\nYou might have found yourself in this situation before. You have a remote server, on which you've installed some software/services which you can manage via a web application.\n\nCould be a server management tool, a remote Pi-hole installation, or Syncthing...\n\nNo matter how secure it claims to be, this app does **NOT** need to be online (on the internet) for you to use it. How about if you could just access this remote service **only** from your local machine? And all without having to manage some Apache server, whitelisting IPs, or using Basic Authentication to shield the webserver?\n\nWell, there's a way to do all of this via SSH port forwarding (tunnelling), which is very convenient, since you're likely already using ssh to administer the machine in some way or another. It's very simple; here's how you do it:\n\n```bash\n# the example:\nssh -L 8080:localhost:8080 mn@example.com\n\n# the recipe:\n# [1] - local port\n# [2] - host you want to connect to once you're on the remote\n# [3] - remote port\n# [4] - remote address (ssh address)\nssh -L [1]:[2]:[3] [4]\n```\n\nEssentially what's happening is that you're connecting to a remote host via ssh, and tunnelling local traffic to/from an arbitrary host on a specified port.\n\n
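And if you don't need an interactive shell on the remote at all, OpenSSH's `-N` flag keeps the session to just the forwarding:\n\n```bash\n# forward the port only, without running a remote command\nssh -N -L 8080:localhost:8080 mn@example.com\n```\n\n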
That's all. Enjoy!\n" }, { "title": "How can I use my gpg key with other devices?", "date": "2021-03-16T00:00:00.000Z", "slug": "/posts/pass-password-manager-share-gpg-key/index", "content": "# How can I use my gpg key with other devices?\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/pass-password-manager-share-gpg-key/index)_\n\n\n\n![Cover image](https://dotmethod.me/posts/pass-password-manager-share-gpg-key/index/cover.jpg)\n\nSo, let's say you've set up pass (the password manager) on your computer. But then, what if you want the same password manager on your phone? How about if you have a second computer that you want to share the passwords with?\n\nIn this post I won't go into the specifics of using pass on other platforms, **but** I will share a quick and simple example of how you could share your GPG key between devices.\n\n## 1. List your keys\n\nFirst off, list the keys on the local device:\n\n```shell\ngpg -k\n```\n\n**Example result:**\n\n```shell\n/home/myuser/.gnupg/pubring.kbx\n---------------------------\npub rsa3072 2020-07-11 [SC] [expires: 2022-07-11]\n GPG_KEY_ID_WHICH_SHOULD_BE_PRETTY_LONG\nuid [ultimate] Mihai Nueleanu \nsub rsa3072 2020-11-16 [E] [expires: 2022-11-16]\n```\n\n## 2. Export the keys\n\nWith the key ID from the previous step, run the following export commands:\n\n```shell\ngpg --export-secret-key -a GPG_KEY_ID_WHICH_SHOULD_BE_PRETTY_LONG > private_key.asc\ngpg --export -a GPG_KEY_ID_WHICH_SHOULD_BE_PRETTY_LONG > public_key.asc\n```\n\n## 3. Move your keys to the new device\n\nUse a USB stick, or some similar **local** means to copy the keys from one device to the other. Now, warning time:\n\n**DO NOT SHARE THESE KEYS OVER THE INTERNET!!**\n**IT WOULD BE A REALLY BAD IDEA TO SEND THESE KEYS OVER THE INTERNET, BECAUSE YOU RISK BEING COMPROMISED**\n\n## 4. Import the key\n\nOn the secondary device, copy over the key and make sure not to leave copies of it hanging on the USB stick (or similar).\n\nRun the import command:\n\n```\ngpg --import private_key.asc\n```\n\n
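On some setups you'll also want to mark the imported key as trusted on the new device, so that tools like pass stop warning about it. Interactively, that looks roughly like this:\n\n```\ngpg --edit-key myemail@example.com\n# at the gpg> prompt: type trust, then 5 (ultimate), then save\n```\n\n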
That should do it. Enjoy!\n" }, { "title": "GitHub Actions & K8S: build and deploy", "date": "2021-03-26T00:00:00.000Z", "slug": "/posts/github-actions-kubernetes-build-and-deploy/index", "content": "# GitHub Actions & K8S: build and deploy\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/github-actions-kubernetes-build-and-deploy/index)_\n\n\n\nHow can you use GitHub Actions to build your code into containers and ship them to Kubernetes?\n\n![cover](https://dotmethod.me/posts/github-actions-kubernetes-build-and-deploy/index/cover.png)\n\nHere's one of my favorite pipelines, using GitHub Actions for tagging and building, and Docker Hub for hosting container images.\n\n{% raw %}\n\n```yaml\nname: Auto deployment\non:\n  push:\n    branches:\n      - master\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        node-version: [12.x]\n    steps:\n      - name: Checkout Git repo\n        uses: actions/checkout@master\n\n      # Version bump\n      - name: Automated Version Bump\n        id: versionBump\n        uses: TriPSs/conventional-changelog-action@v3\n        with:\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n          tag-prefix: \"\"\n          skip-on-empty: \"false\"\n          skip-version-file: true\n      - name: Automated GitHub Release\n        uses: actions/create-release@v1\n        if: ${{ steps.versionBump.outputs.skipped == 'false' }}\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        with:\n          tag_name: ${{ steps.versionBump.outputs.tag }}\n          release_name: ${{ steps.versionBump.outputs.tag }}\n          body: ${{ steps.versionBump.outputs.clean_changelog }}\n\n      # Docker build\n      - name: Set up Docker Buildx\n        id: buildx\n        uses: docker/setup-buildx-action@master\n      - name: Cache Docker layers\n        uses: actions/cache@v2\n        with:\n          path: /tmp/.buildx-cache\n          key: ${{ runner.os }}-buildx-${{ github.sha }}\n          restore-keys: |\n            ${{ runner.os }}-buildx-\n      - name: Login to Docker Hub\n        uses: docker/login-action@v1\n        with:\n          username: ${{ secrets.DOCKER_HUB_USERNAME }}\n          password: ${{ secrets.DOCKER_HUB_PASSWORD }}\n      - name: Build and push\n        id: docker_build\n        uses: docker/build-push-action@v2\n        with:\n          context: ./\n          file: ./Dockerfile\n          builder: ${{ steps.buildx.outputs.name }}\n          push: true\n          # replace with your own Docker Hub repository\n          tags: my-dockerhub-user/my-app:release-${{ steps.versionBump.outputs.tag }}\n          cache-from: type=local,src=/tmp/.buildx-cache\n          cache-to: type=local,dest=/tmp/.buildx-cache\n\n      # Deploy to kubernetes repo\n      - name: Install SSH Key\n        uses: shimataro/ssh-key-action@v2\n        with:\n          key: ${{ secrets.KUBERNETES_SSH_KEY_PRIV }}\n          known_hosts: |\n            github.com ssh-rsa (...public ssh key...)\n      - name: update repo\n        run: |\n          # VARIABLES\n          TAG=\"${{ steps.versionBump.outputs.tag }}\"\n          URL=\"git@github.com:GithubOrganization/kubernetes.git\"\n\n          # SETUP\n          git config --global user.email \"robot@example.com\"\n          git config --global user.name \"Robot\"\n          git clone $URL\n          cd kubernetes\n\n          # CHANGES\n          # replace my-app with the folder of the deployment you're updating\n          sed -i \"s/release-.*$/release-$TAG/\" ./my-app/deployment.yaml\n\n          # PUSH\n          git remote set-url origin $URL\n          git add .\n          git commit -m \"Release version $TAG\"\n          git push\n```\n\n{% endraw %}\n" }, { "title": "Async Python: fire and forget method", "date": "2021-03-27T00:00:00.000Z", "slug": "/posts/python-async-fire-and-forget/index", "content": "# Async Python: fire and forget method\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/python-async-fire-and-forget/index)_\n\n\n\n## The decorator method\n\n```python\nimport asyncio\nfrom functools import partial, wraps\n\n\ndef fire_and_forget(f):\n    @wraps(f)\n    def wrapped(*args, **kwargs):\n        loop = asyncio.get_event_loop()\n        if callable(f):\n            # submit the call to the loop's default thread-pool executor and return immediately\n            # (partial is needed because run_in_executor doesn't forward keyword arguments)\n            return loop.run_in_executor(None, partial(f, *args, **kwargs))\n        else:\n            raise TypeError('Task must be a callable')\n\n    return wrapped\n```\n\n## An example:\n\nUse the method above as a decorator for regular (synchronous) functions. Note that `run_in_executor` runs the callable on a thread pool, so the decorated function itself should not be declared `async`:\n\n```python\nfrom time import sleep\n\n@fire_and_forget\ndef hello_world():\n    sleep(5)\n    print(\"Successful\")\n```\n" }, { "title": "Ubuntu desktop - my install checklist", "date": "2021-04-11T00:00:00.000Z", "slug": "/posts/ubuntu-desktop-install-checklist/index", "content": "# Ubuntu desktop - my install checklist\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/ubuntu-desktop-install-checklist/index)_\n\n\n\nGiven a brand new ubuntu installation, here's my first half-hour on the new system:\n\n- Install system updates\n- Install git\n- Install VSCode\n- Install [Brave browser](https://brave.com/linux/)\n- Install zsh & [ohmyzsh](https://ohmyz.sh/#install)\n- Add a new ssh key to GitHub account\n- Clone my dotfiles\n- Install [tilda](https://github.com/lanoxx/tilda)\n- Install [pass](/posts/pass-linux-password-manager/)\n- Clone my password store\n- Install kubectl\n- Install solaar\n- Install [key-mapper](https://github.com/sezanzeb/key-mapper.git)\n- Install python & nodejs" }, { "title": "Simple Plausible Analytics on Kubernetes", "date": "2021-04-18T00:00:00.000Z", "slug": "/posts/plausible-analytics-kubernetes/index", "content": "# Simple Plausible Analytics on Kubernetes\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/plausible-analytics-kubernetes/index)_\n\n\n\n![cover](https://dotmethod.me/posts/plausible-analytics-kubernetes/index/cover.png)\n\nHere's a simple, self-hosted configuration for Plausible Analytics, meant for deploying to Kubernetes.\n\n**Note:** All the code and instructions have been uploaded to github: https://github.com/dotmethodme/plausible-kubernetes\n\n## How to\n1. Go to https://github.com/dotmethodme/plausible-kubernetes and clone the repository. All yaml files are inside the folder named `base`\n2. Open `postgress.yaml`. Edit the `POSTGRES_PASSWORD` field and set a randomly generated password for the postgres database\n3. Open `ingress.yaml`. Insert your chosen domain name into the host fields\n4. Open `secret.yaml`. Configure the marked fields.\n5. Apply the configuration with `kubectl apply -f ./base`\n\n
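Once applied, give it a minute and sanity-check that everything came up (add `-n <namespace>` if the manifests place things in a dedicated namespace):\n\n```shell\nkubectl get pods\nkubectl get ingress\n```\n\n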
## Read more:\n- [Full instructions and yaml files](https://github.com/dotmethodme/plausible-kubernetes)\n- [The official plausible self-hosted docs](https://plausible.io/docs/self-hosting)\n- [The official configuration options for plausible](https://plausible.io/docs/self-hosting-configuration)" }, { "title": "Kubernetes generate user account and config", "date": "2021-04-19T00:00:00.000Z", "slug": "/posts/kubernetes-generate-config-and-manage-access/index", "content": "# Kubernetes generate user account and config\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/kubernetes-generate-config-and-manage-access/index)_\n\n\n\n![Cover](https://dotmethod.me/posts/kubernetes-generate-config-and-manage-access/index/cover.png)\n\nHow do you generate a kubernetes user account? How do you get access to your kubernetes cluster? How can you generate a kubernetes config file?\n\nIntuitively, this should be pretty simple. In practice, however, the process is quite convoluted.\n\nLuckily, I have automated it for myself. Below is the script responsible for issuing cluster access, together with a kube config file, from start to finish.\n\n**Prerequisites:**\n- [openssl](https://github.com/openssl/openssl)\n- [kubectl](https://kubernetes.io/docs/tasks/tools/)\n\n## How to use it:\n- paste the script into a `.sh` file (e.g. `config-generate.sh`)\n- replace occurrences of `myuser` with the name of your user account (can be anything)\n- the script will finish by filling in the new config details in your kube config file: `~/.kube/config`\n\n## The Script\n\n```shell\n# This script is responsible for issuing a cluster access\n# config file, which can afterwards be used by users or\n# service integrations (such as github actions)\n\n# Generate a key and a certificate signing request\n# Hint: The CN field is important\nopenssl genrsa -out myuser.key 2048\nopenssl req -new -key myuser.key \\\n  -subj \"/C=DK/ST=DK/O=''/CN=myuser\" \\\n  -out myuser.csr\n\n# Extract the certificate signing request\nREQ=$(cat myuser.csr | base64 | tr -d \"\\n\")\n\n# Create a Kubernetes CSR object and approve it\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n  name: myuser\nspec:\n  request: $REQ\n  signerName: kubernetes.io/kube-apiserver-client\n  usages:\n    - client auth\nEOF\nkubectl certificate approve myuser\n\n# Extract the signed certificate\nkubectl get csr myuser -o jsonpath='{.status.certificate}' | base64 -d > myuser.crt\n\n# Create the user role (with the appropriate access levels)\n# and bind the user to the role\nkubectl create role myuser --verb=\"*\" --namespace pr-env \\\n  --resource=pod \\\n  --resource=service \\\n  --resource=configmap \\\n  --resource=secret \\\n  --resource=ingress \\\n  --resource=daemonset \\\n  --resource=replicaset \\\n  --resource=deployment \\\n  --resource=job\nkubectl create rolebinding myuser-binding --role=myuser --user=myuser --namespace pr-env\n\n# Cleanup the Kubernetes CSR\nkubectl delete csr myuser\n\n# Extract config locally, into your config file\n# Location: ~/.kube/config\nkubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true\nkubectl config set-context myuser --cluster=kubernetes --user=myuser\nkubectl config use-context myuser\n```\n\n## The test\nAs soon as you have generated the new context and it has been activated locally, run a test command against the namespace the role covers, such as:\n\n```shell\nkubectl get pods --namespace pr-env\nkubectl get services --namespace pr-env\n```\n\n**Note:** the script is written in a very bare-bones and simple way, so that it's easy to understand and modify for your own purposes.
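\n\nSince the role is scoped to the `pr-env` namespace, you can also check the new user's permissions explicitly with `kubectl auth can-i`:\n\n```shell\nkubectl auth can-i get pods --namespace pr-env\nkubectl auth can-i get nodes # should say no\n```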
\n\n### Read more\n- [The official kubernetes documentation on certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user) " }, { "title": "Getting started with K8S: How to get a cluster", "date": "2021-04-24T00:00:00.000Z", "slug": "/posts/how-to-get-a-kubernetes-cluster/index", "content": "# Getting started with K8S: How to get a cluster\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/how-to-get-a-kubernetes-cluster/index)_\n\n\n\n![Cover](https://dotmethod.me/posts/how-to-get-a-kubernetes-cluster/index/cover.jpg)\n\n
\n_Photo by Pierre Bamin on Unsplash_\n
\n\n\n\"How do I get started with Kubernetes? Is it difficult? I've read it's difficult. I skimmed through some articles and I was overwhelmed.\"\n\nAlright. Alright... Fair enough. I was also confused at first. But here's what it boils down to: think of it like any other large open-source project - you have options. Many, many options; maybe too many. The same way you have thousands of linux distributions, kubernetes also has many distributions. And some of them are 100% accessible to anyone with a bit of technical know-how. \n\n\n## K8S Distros worth knowing about\n- [microk8s](https://microk8s.io/) - made by Canonical (the Ubuntu people)\n- **[k3s](https://k3s.io/) - made by the people who made Rancher**\n\nAnd honestly, that's enough. These two distros are easy to get started with, they have nice documentation, and they won't overwhelm while getting started. I choose K3S as my favorite, although it's a close call.\n\n## K3S Installation\n\nHow do you get going with K3S? Well, it's pretty simple. You'll first need a server (VPS) with ssh access; I don't recommend trying this directly on your machine (although you totally could). \n\nThe installation is really simple, and the command is at the top of on their [landing page](https://k3s.io/):\n\n```shell\ncurl -sfL https://get.k3s.io | sh -\n```\n\nYou then wait for a few seconds, for the \"cluster\" to get up and running. Monitor the progress with this command:\n\n```shell\nk3s kubectl get node\n```\n\n### Access the cluster from the outside\n\nAt this stage the kubernetes node is running and ready for action. How do you connect to it? \n\nGenerally speaking, you use the locally installed `kubectl` cli for remotely managing any kubernetes cluster. You can download and install it from the official [k8s website](https://kubernetes.io/docs/tasks/tools/).\n\nWith that done, you need some sort of config to get you authenticated with your new cluster. This is handled by kubectl, and you can get your hands on this configuration from inside your node. Taken straight from the [k3s docs](https://rancher.com/docs/k3s/latest/en/cluster-access/):\n\n> *Copy /etc/rancher/k3s/k3s.yaml on your machine located outside the cluster as ~/.kube/config. Then replace “localhost” with the IP or name of your K3s server. kubectl can now manage your K3s cluster.*\n\nIn other words, you just need to copy the config file, from the server, to your computer. Afterwards, you'll want to test it out, and you can do that locally with the following commands:\n\n```shell\nkubectl get nodes\nkubectl get pods --all-namespaces\n```\n\nIf you get some output out of it, without errors, congratulations!\n\n## Conclusion\n\nAnd. That's. It. You have kubernetes up and running. \n\nSure, it's a single node cluster. But don't worry about that yet; not in the beginning. When the time comes, you can easily connect more K8S nodes and get your hands on more compute power, but you'll search for that when you need it.\n" }, { "title": "Search engine: Meilisearch deployment for k8s", "date": "2022-01-25T00:00:00.000Z", "slug": "/posts/2022/01/25-meilisearch-k8s/index", "content": "# Search engine: Meilisearch deployment for k8s\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/2022/01/25-meilisearch-k8s/index)_\n\n\n\nElastic search, TypeSense, Algolia, Meilisearch - a few search engine technologies which you might have seen out there; ElasticSearch currently being the largest of them. \n\nMy favorite of them: Meilisearch. No surprise here, given the title of the article. 
So here's how I deployed it on Kubernetes.\n\n```yaml\n# Source: meilisearch/templates/serviceaccount.yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  namespace: meilisearch\n  name: meilisearch\n  labels:\n    app.kubernetes.io/name: meilisearch\n    app.kubernetes.io/instance: meilisearch\n---\n# Source: meilisearch/templates/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  namespace: meilisearch\n  name: meilisearch-environment\n  labels:\n    app.kubernetes.io/name: meilisearch\n    app.kubernetes.io/instance: meilisearch\ndata:\n  MEILI_ENV: \"development\"\n  MEILI_NO_ANALYTICS: \"true\"\n  MEILI_HTTP_PAYLOAD_SIZE_LIMIT: \"10Gb\"\n  MEILI_DB_PATH: \"/data\"\n---\n# Source: meilisearch/templates/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: meilisearch\n  namespace: meilisearch\nspec:\n  selector:\n    app: meilisearch\n  ports:\n    - port: 7700\n      targetPort: 7700\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  namespace: meilisearch\n  name: meilisearchdb\nspec:\n  selector:\n    matchLabels:\n      app: meilisearch\n  serviceName: \"meilisearch\"\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: meilisearch\n    spec:\n      securityContext:\n        fsGroup: 1000\n      serviceAccountName: meilisearch\n      containers:\n        - name: meilisearch\n          image: getmeili/meilisearch:v0.24.0\n          resources:\n            requests:\n              memory: 1Gi\n              cpu: \"1\"\n            limits:\n              memory: 2Gi\n              cpu: \"2\"\n          envFrom:\n            - configMapRef:\n                name: meilisearch-environment\n          ports:\n            - name: http\n              containerPort: 7700\n              protocol: TCP\n          volumeMounts:\n            - mountPath: \"/data\"\n              name: mpvc\n          livenessProbe:\n            httpGet:\n              path: /health\n              port: http\n            initialDelaySeconds: 30\n          readinessProbe:\n            httpGet:\n              path: /health\n              port: http\n            initialDelaySeconds: 30\n  volumeClaimTemplates:\n    - metadata:\n        name: mpvc\n      spec:\n        accessModes:\n          - ReadWriteOnce\n        resources:\n          requests:\n            storage: 30Gi\n        storageClassName: do-block-storage\n```
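\n\nTo deploy it, save the manifests to a file (the name below is arbitrary) and apply them with kubectl; note that the `meilisearch` namespace referenced throughout has to exist first:\n\n```shell\nkubectl create namespace meilisearch\nkubectl apply -f meilisearch.yaml\n```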
\n" }, { "title": "Just enough architecture", "date": "2022-07-22T00:00:00.000Z", "slug": "/posts/2022/07/22-just-enough-architecture/index", "content": "# Just enough architecture\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/2022/07/22-just-enough-architecture/index)_\n\n\n\n\"Just enough architecture\" is a concept that emphasizes the importance of pragmatically designing IT systems to meet the specific needs of the business. This approach advocates for only architecting an IT system as far as it is needed, rather than over-architecting and adding unnecessary complexity.\n\nOne of the key benefits of this approach is that it allows teams to be more agile and responsive to changing needs. By not over-architecting, teams can quickly and easily make changes to their systems as needed, without being bogged down by unnecessary complexity. Additionally, by only architecting an IT system as far as needed, teams can save time and resources that would otherwise be spent on unnecessary design and development.\n\nAnother benefit of this approach is that it allows teams to deliver value to their customers more quickly. By not over-architecting, teams can focus on delivering functionality that is immediately valuable to customers, rather than spending time on unnecessary design and development.\n\nHowever, it's important to note that \"just enough architecture\" does not mean that teams should skimp on the design of their IT systems. It's still important (and sometimes essential) to have a well-designed system that meets the specific needs of the business. Rather, it means finding the balance between a system that is flexible enough to evolve over time and one that is not over-engineered.\n\nIn summary, \"just enough architecture\" is a concept that emphasizes the importance of pragmatically designing IT systems to meet the specific needs of the business. By only architecting an IT system as far as needed, teams can be more agile, more responsive to changing needs, and quicker to deliver value to their customers. " }, { "title": "The pragmatic rules of thumb of the 12-factor app", "date": "2023-01-29T00:00:00.000Z", "slug": "/posts/2023/01/29-12-factor-app", "content": "# The pragmatic rules of thumb of the 12-factor app\n\n_Originally posted on [dotmethod.me](https://dotmethod.me/posts/2023/01/29-12-factor-app)_\n\n\n\nIn principle, the 12-factor app is a methodology for building software-as-a-service (SaaS) apps that are optimized for the cloud. This methodology, first introduced by Heroku, defines a set of principles that developers should follow to ensure that their applications are easy to deploy, scale, and maintain.\n\nPragmatically speaking, these 12 principles are hard lessons learned through years of experience building and operating apps in the cloud by talented engineers. And they're damn good lessons.\n\n**They go as follows:**\n\n- Codebase: One codebase tracked in revision control, many deploys.\n- Dependencies: Explicitly declare and isolate dependencies.\n- Config: Store config in the environment.\n- Backing services: Treat backing services as attached resources.\n- Build, release, run: Strictly separate build and run stages.\n- Processes: Execute the app as one or more stateless processes.\n- Port binding: Export services via port binding.\n- Concurrency: Scale out via the process model.\n- Disposability: Maximize robustness with fast startup and graceful shutdown.\n- Dev/prod parity: Keep development, staging, and production as similar as possible.\n- Logs: Treat logs as event streams.\n- Admin processes: Run admin/management tasks as one-off processes.\n\nBy following these principles, developers can maximize the chances that their applications will be easy to deploy, scale, and maintain. For example, by storing config in the environment and treating backing services as attached resources, developers can ensure that their applications will be portable and easy to run in different environments. Additionally, by strictly separating the build and run stages, developers can ensure that their applications will be easy to test and deploy.\n\nIf you choose not to pay attention to these principles, the only thing that will happen is that you'll come to the exact same conclusions by yourself, but you'll have to learn them the hard way. My advice is to just keep them in the back of your mind, and if you're ever in doubt, just come back and give them another glance - it'll likely save you loads of time and frustration." } ]