Today, I published a new article explaining how to modularise your shell config.
Additionally, I added a permanent redirect to the docker-compose.yml of my website.
Subsequently, I continued my work on the chess AI paper, describing certain parts of the simplified evaluation function we are going to implement later.
As I'm currently hosting my own Nextcloud and Rocket.Chat instances, I decided to add both to my contact page as a way to contact me and exchange content with me. Furthermore, I finished the Modelling data with SQLAlchemy classes section of the Building data-driven web apps with Flask and SQLAlchemy course at TalkPython Training.
I worked on a new blog entry and revisited some object-oriented programming principles.
As part of my university studies, I need to write a paper about implementing a chess AI and then actually implement it. Therefore, I did some research on evaluation functions I could potentially use.
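For illustration, here is a minimal sketch of the simplest family of evaluation functions I looked at, a pure material count. The piece values and the board representation below are placeholders for the sake of the example, not what the paper will end up using.

```python
# Minimal material-count evaluation (illustrative only, not the paper's
# final function). Positive scores favour White, negative scores favour Black.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def evaluate_material(piece_counts: dict) -> int:
    """piece_counts maps piece letters to counts, e.g. {"P": 8, "p": 7, ...}.

    Uppercase letters are White pieces, lowercase letters are Black pieces.
    """
    score = 0
    for piece, count in piece_counts.items():
        value = PIECE_VALUES[piece.upper()] * count
        score += value if piece.isupper() else -value
    return score

print(evaluate_material({"P": 8, "p": 7, "K": 1, "k": 1}))  # 100: White is a pawn up
```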
Today, I spent most of the time (re-)configuring self-hosted services. Additionally, I started implementing the database model for my URL shortener.
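Roughly, the model boils down to mapping a short code to its target URL. The sketch below uses SQLAlchemy's declarative style; the table and column names are illustrative rather than my actual schema.

```python
# Rough sketch of a URL shortener model in SQLAlchemy's declarative style.
# Table and column names are placeholders, not my project's real schema.
import datetime

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ShortUrl(Base):
    __tablename__ = "short_urls"

    id = Column(Integer, primary_key=True, autoincrement=True)
    short_code = Column(String(16), unique=True, index=True, nullable=False)
    target_url = Column(String(2048), nullable=False)
    created_at = Column(DateTime, default=datetime.datetime.utcnow)
```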
Implemented the generation of short URLs for my URL shortener service as well as the redirection of short URLs to their intended destination.
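Stripped down to its essence, the redirection part looks roughly like the sketch below. It uses Flask and an in-memory dict purely for illustration; the real project looks the short code up in the database instead.

```python
# Simplified redirection endpoint; the dict stands in for the database lookup.
from flask import Flask, abort, redirect

app = Flask(__name__)
SHORT_URLS = {"abc123": "https://example.com/some/very/long/url"}

@app.route("/<string:code>")
def resolve(code: str):
    target = SHORT_URLS.get(code)
    if target is None:
        abort(404)
    # A 301 tells clients the short link permanently points to the target.
    return redirect(target, code=301)
```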
Finished the "Bootstrap and frontend CSS frameworks" and "Adding our design" sections of @TalkPython's "Building data-driven web apps with Flask and SQLAlchemy" course. The principles and concepts I learned during these sections enables me to further improve my URL shortener frontend.
Today, I implemented the basic frontend for my URL shortener. For that, I had a closer look at Bootstrap - awesome! Furthermore, I set up my own Jenkins instance, which is still missing some configuration.
I started implementing my own URL shortener. As a first step, I implemented the encoding of the given URLs. Additionally, I worked on a CI pytest issue occurring only on Azure Pipelines.
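The encoding is essentially a base62 conversion of a numeric ID into a short string. Here's a hedged sketch of that idea; the alphabet and function name are my own illustration, not necessarily the exact scheme I ended up with.

```python
# Base62-encode a numeric database ID into a short code (illustrative sketch).
import string

ALPHABET = string.digits + string.ascii_letters  # 62 characters: 0-9, a-z, A-Z

def encode_base62(number: int) -> str:
    if number == 0:
        return ALPHABET[0]
    digits = []
    while number > 0:
        number, remainder = divmod(number, 62)
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))

print(encode_base62(123456))  # 'w7e'
```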
Recapped object-oriented programming principles and software architecture methods and principles.
Today, there wasn't much time left for the #100DaysOfCode-challenge.
I made smaller contributions to my side-projects and open source projects.
Additionally, I fixed my less setup.
I created a repository to store useful Unix commands, aliases, and functions. Furthermore, I created a cookiecutter-based documentation tool where I can save my notes about books and papers I've read.
Long story short: I added docstrings to my quart-compress package to help people understand it better. Furthermore, I upgraded the simple text summarizer I implemented a while ago to the latest Python versions and, by doing so, removed some security vulnerabilities occurring in the dependencies it uses.
Today, I implemented a simple but useful Slack integration, which allows you to automatically react on certain messages based on their content.
For example, if I always want to react with a :wink: emoji to messages containing something like "Hey everyone", that's a perfect situation for using this little script.
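As a rough idea of how such an integration can look, here is a sketch using slack_bolt with made-up tokens and the example pattern from above; it is not necessarily the library or structure I used.

```python
# Auto-react to matching messages; tokens, pattern, and emoji are examples.
import os
import re

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# React with :wink: whenever a message contains something like "Hey everyone".
@app.message(re.compile(r"hey everyone", re.IGNORECASE))
def add_wink_reaction(message, client):
    client.reactions_add(
        channel=message["channel"],
        timestamp=message["ts"],
        name="wink",
    )

if __name__ == "__main__":
    app.start(port=3000)
```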
Besides the implementation of the Slack integration, I did a bunch of cloud configurations.
Continued my work on the script for streaming and saving files based on a .m3u8 link.
Furthermore, I discovered a bug in my postfix setup and spent way too much time solving it.
At the end of the day I prepared a bunch of things to be productive tomorrow.
I built Python 3.9 from source and implemented a small bash function that upgrades it to the latest version (based on the GitHub master branch).
Additionally, I configured postfix to be able to receive email notifications from my Nextcloud.
Subsequently, I worked on a script that finds .m3u8 URLs belonging to a given website and downloads the whole stream.
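In spirit, the script does two things: scrape the page for .m3u8 playlist URLs and hand the stream to a downloader. The sketch below is a simplified stand-in using a regex plus ffmpeg, not the actual script.

```python
# Find .m3u8 playlist URLs on a page and download the stream via ffmpeg.
# Simplified stand-in for the real script; ffmpeg must be installed.
import re
import subprocess
import sys

import requests

def find_m3u8_urls(page_url: str) -> list:
    html = requests.get(page_url, timeout=10).text
    return re.findall(r"https?://\S+?\.m3u8", html)

def download_stream(m3u8_url: str, output_file: str) -> None:
    # "-c copy" keeps the original codecs, so ffmpeg only downloads
    # and concatenates the segments.
    subprocess.run(["ffmpeg", "-i", m3u8_url, "-c", "copy", output_file], check=True)

if __name__ == "__main__":
    urls = find_m3u8_urls(sys.argv[1])
    if urls:
        download_stream(urls[0], "stream.mp4")
```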
As I forgot to update my blog over the past couple of days, I did it today. By doing this, I discovered an issue with my new website structure, which I had to fix. Furthermore, I spent a decent amount of time setting up my own Nextcloud and configuring it properly.
Finally, I fixed my OBS setup on Debian. As it was a display server protocol issue (Wayland is not supported, so X11 had to be used), I wrote an article about it and published it to Medium. Furthermore, I discovered bandit a few days ago and after using it for a while, I added it as a pre-commit hook to quart-compress. In addition to that, I found a pretty unknown project on GitHub and contributed to it by fixing a link error in the documentation.
I dug deeper into pytest fixtures and changed the scope of some of mine to increase the performance of each test run (where applicable). Furthermore, I fixed a few issues in wily that arose yesterday. However, this produced new issues. Additionally, I finished the routing part of the data-driven web apps course.
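For anyone curious what such a scope change looks like, here is a small, self-contained illustration; it is not a fixture from my actual test suite.

```python
# Widening a fixture's scope so expensive setup runs once per module
# instead of once per test (illustrative example).
import time

import pytest

@pytest.fixture(scope="module")  # previously the default scope="function"
def expensive_resource():
    time.sleep(1)  # stands in for costly setup (database, app factory, ...)
    yield {"ready": True}

def test_uses_resource(expensive_resource):
    assert expensive_resource["ready"]

def test_reuses_resource(expensive_resource):
    # With module scope, the one-second setup above runs once, not per test.
    assert expensive_resource["ready"]
```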
I had the idea to record some of my coding sessions. Therefore, I wanted to set up OBS Studio. However, I was not able to get the Screen Capture functionality to work. Every other functionality was up and running as expected. Let's see if I can fix it in one of the upcoming days. Furthermore, I continued my work on the new wily rank command and increased its test coverage. However, some of the tests now fail under Windows.
Today, I finally solved the last two issues concerning my quart-compress package. The package now fully supports flask-caching! Furthermore, I was able to increase the test coverage to 96%.
Watching Michael Kennedy's course about data-driven web applications with flask showed me that it's good to structure your flask projects in a way that makes it easier to discover bugs and to migrate your code if necessary. That's why I restructured my whole portfolio code base today!
Quart officially supports the flask-caching extension. I'm currently using flask-compress and flask-caching. It's important for me that quart-compress and flask-caching work well together, so I can seamlessly migrate from flask to quart. I set up a bunch of tests to test the behaviour of both. I discovered a few issues and will try to solve them in the upcoming days.
To enable the users of quart-compress to use the full power of their IDEs, I added type annotations to all functions and methods of quart-compress. Subsequently, I set up mypy on my local machine as well as a pre-commit hook to automatically check those annotations.
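The annotations themselves are nothing fancy. A hypothetical example (not an actual quart-compress function) looks like this, and mypy then flags any caller that ignores the possible None return value:

```python
# Hypothetical annotated helper; mypy checks callers against these types.
from typing import Optional

def choose_encoding(accept_encoding: str) -> Optional[str]:
    """Return "gzip" if the client accepts it, otherwise None."""
    return "gzip" if "gzip" in accept_encoding.lower() else None
```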
Today, I set up pre-commit for the first time and added it to the quart-compress project.
Furthermore, I added code coverage testing to it and rewrote the tests for it.
The tests were written in Python's unittest format, but as I'm a fan of pytest, I rewrote them to meet the pytest conventions.
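In spirit, the rewrite looked like this (a made-up example, not the actual quart-compress tests): the unittest class on top becomes the plain pytest function below.

```python
# Before: unittest style with a TestCase class and assert methods.
import gzip
import unittest

class TestCompression(unittest.TestCase):
    def test_gzip_round_trip(self):
        data = b"hello" * 100
        self.assertEqual(gzip.decompress(gzip.compress(data)), data)

# After: a plain function with a bare assert, the pytest way.
def test_gzip_round_trip():
    data = b"hello" * 100
    assert gzip.decompress(gzip.compress(data)) == data
```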
Yesterday, I finished the basic implementation of the quart-compress package.
However, I used plain old requirements.txt and setup.py.
As I saw pyproject.toml files a few times in the wild, I wanted to give it a try and created one for the quart-compress package.
Have a look at PEP 518 to learn more about it.
In addition to that, I had a look at various data science techniques and how to deal with large unprocessed sets of data.
A few days ago I wanted to migrate my personal portfolio page from flask to quart to make use of an ASGI server.
However, a few flask extensions were not compatible with quart, most notably flask-compress.
I decided to create my own quart-compress package, which compresses quart responses using Python's built-in gzip module.
You can find the source code on GitHub.
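The core idea can be sketched in a few lines. This is a rough illustration rather than the package's actual code, and it assumes Quart's get_data/set_data response methods behave like their Flask counterparts (in Quart they are coroutines); a real implementation also checks the client's Accept-Encoding header and the response size.

```python
# Rough sketch: gzip every response body in an after_request hook.
import gzip

from quart import Quart

app = Quart(__name__)

@app.route("/")
async def index():
    return "Hello, compressed world! " * 100

@app.after_request
async def gzip_response(response):
    body = await response.get_data()   # bytes or str depending on version
    if isinstance(body, str):
        body = body.encode("utf-8")
    await response.set_data(gzip.compress(body))
    response.headers["Content-Encoding"] = "gzip"
    return response

if __name__ == "__main__":
    app.run()
```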
As preparation for my upcoming exam about the work I've done in my practical phases, I revised my work on an image recognition project. Furthermore, I configured tmux on my Debian machine to fit my needs. Additionally, I finished the Jinja2 section of the data-driven web applications course at Talk Python Training that I started a few days ago.
After all these years of using Windows, I decided to install Debian on my machine. Basically, I needed half a day to configure everything properly so it fits my needs. Furthermore, I revisited my previous work on communication encryption using TLS and continued learning about Jinja2.
Today, I had a look at the awesome retox project, which enables you to build tox environments in parallel. I encountered a reported issue where retox fails to display the processes correctly if you resize the terminal window. I dug deeper into the code base to find a way to solve the issue, but wasn't successful yet.
After fixing minor issues of the previous implementation, I was able to get the command working as expected. Additionally, I wrote a bunch of unit and integration tests. Currently, I'm waiting for feedback.
I worked on wily's new rank command proposed in issue #13. Therefore, I set up the branch someone else already worked on locally and fixed some minor code issues. The PR can be found here.
As mentioned yesterday, my time available for coding is currently very limited.
That's why I continued my work on Jinja2 as well as simplified the azure-pipelines.yml of my personal website.
Today, I didn't have much time to code. That's why I gave myself a quick introduction to Jinja2, which I will continue in the upcoming days. Furthermore, I updated my #100DaysOfCode blog (at least for the last couple of days).
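The first experiments were along these lines, rendering a small template with a variable and a loop (toy values, of course):

```python
# Tiny Jinja2 experiment: variables and a for-loop in a template.
from jinja2 import Template

template = Template(
    "Hello {{ name }}!\n"
    "{% for project in projects %}- {{ project }}\n{% endfor %}"
)
print(template.render(name="world", projects=["portfolio", "docker-calibre"]))
```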
Finally, I completed the Docker CI pipeline for my docker-calibre image.
The solution to my tagging issue was to use a predefined variable to get the latest tag instead of creating my own.
I'm thinking about writing an article about it, as other people might have similar thoughts and issues setting up such a (comparatively simple) pipeline.
Update: At the end of the day I watched a live stream from Anthony Sottile, where he fixed some issues in pyflakes. During the live stream he encountered a typo in one of the tests. I cloned the repository, fixed it and submitted a PR for it, which got merged during the live stream!
Today, I was able to get the tests to pass on Azure Pipelines and to only trigger a full build if a new tag was added via git. Furthermore, a build should be triggered if any PR is opened, but without pushing the built image to Docker Hub. However, at the end of the day one issue remained: if a new tag was created via git and the full build was triggered, the pipeline failed with some obscure error message.
There are a few Docker images I'm maintaining.
However, none of them is actually tested using automated tests.
As I'm a big fan of continuous integration (CI), I decided to give it a try and to set up a CI pipeline for one of my Docker images: docker-calibre (you can find the source code on GitHub).
The primary idea was to set it up as follows: First, build the image, then test the image, give it a proper tag and push it to Docker Hub if all tests pass.
Furthermore, the tests should run on the three major operating systems out there: Linux, macOS, and Windows.
At the end of the day the rough structure was built, but the actual tests were still failing in the pipeline, even though they worked on my local machine.
The past three days were tough ones. There was so much going on at university that it was hard for me to work on my side projects. However, I managed to continue Michael Kennedy's course (at least a bit) and to fix minor style issues on my website. Additionally, I had a closer look at postfix as a mail server. This might be a possible solution for an automation issue I currently have with another project of mine.
Again, I was travelling through Germany with limited access to anything. However, I publicly committed to the #100DaysOfCode-challenge, so I wanted to do something for it - at least an hour! Eventually, I managed to finish the setup part of Michael Kennedy's course and thought about possible use cases where I could apply my knowledge.
Today, I stumbled across Michael Kennedy's Building data-driven web apps with Flask and SQLAlchemy course at TalkPython Training. It sounded pretty interesting, so I gave it a try. I really like the concept of building data-driven web applications, so I'll build some smaller ones in the near future - just need to find good use cases.
I finished the Async Course at TalkPython Training. It's a pretty good one and I can highly recommend it! Furthermore, I continued to set up a proper CI/CD pipeline to auto-deploy changes made to my portfolio repository to my server. However, I wasn't able to do so - at least not without creating major security vulnerabilities. As the following days and weeks will be pretty tough, I'm going to focus more on the theoretical than on the practical parts of programming.
Today, I traveled through Germany, so I didn't have much time to focus on my learning. However, I managed to explore Cython even further and implemented some smaller things.
I wanted to increase the performance of my website even further, so I had a look at Quart and Uvicorn as an ASGI server. Unfortunately, it's not possible to seamlessly migrate from Flask to Quart if you are using extensions like flask-compress and flask-caching as I do.
Additionally, I had a look at Cython, which seems to be pretty nice if you know at least the basics of the C programming language. Furthermore, I started to implement my own CI/CD pipeline to auto-deploy any changes made to my master branch on GitHub to my personal server. I stumbled upon certain obstacles and hope to resolve them in the following days.
Today, I wanted to get my hands dirty and re-implement certain parts of my personal portfolio page to make use of AsyncIO and/or threads to increase its performance. Therefore, I tried to use async techniques to scrape the GitHub data for my landing page asynchronously. Unfortunately, I wasn't able to do that as the module architecture didn't allow me to do so. Subsequently, I switched over to threads and was able to scrape the GitHub data in parallel and increase the performance of the landing page! Here's a quick comparison:
             Without Threading        With Threading
Worst Case:  1.9 seconds        -->   0.7 seconds
Best Case:   0.7 seconds        -->   0.4 seconds
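Roughly, the threaded version looks like the sketch below; the endpoints and the post-processing are placeholders for my actual GitHub scraping code.

```python
# Fetch several GitHub endpoints in parallel threads; URLs are placeholders.
from concurrent.futures import ThreadPoolExecutor

import requests

URLS = [
    "https://api.github.com/users/octocat",
    "https://api.github.com/users/octocat/repos",
]

def fetch(url: str):
    return requests.get(url, timeout=10).json()

# The requests block on network I/O, so the GIL is released and the threads
# genuinely overlap, which is where the speed-up in the table comes from.
with ThreadPoolExecutor(max_workers=len(URLS)) as executor:
    results = list(executor.map(fetch, URLS))
```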
Furthermore, I dived deeper into the thread-safety topic and how to ensure it in Python, and fixed a few styling issues concerning my website.
Update: As I was highly motivated in the evening, I also completed the multiprocessing unit of the async course and learned how to unify the different APIs using pool executors.
async and await are great, but not always applicable.
That's why I had a look at threading in Python, learned about the similarities with AsyncIO and where the limits are (GIL).
Now that I know the similarities and dissimilarities of both approaches, I need to have a closer look at my portfolio page and whether it makes sense to increase performance by using AsyncIO or the threading module.
After the introduction to Python's asynchronous capabilities yesterday, I had a closer look at the async and await keywords introduced in Python 3.5 (PEP 492).
I found it pretty insane how easy it is to apply asynchronous behaviour to Python functions!
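A toy example (nothing from my portfolio code) shows how little boilerplate is involved:

```python
# Two coroutines running concurrently with async/await and asyncio.gather.
import asyncio

async def fetch_fake_data(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a network call
    return f"{name} done"

async def main() -> None:
    # Both "requests" run concurrently, so this takes ~1s instead of ~1.5s.
    results = await asyncio.gather(
        fetch_fake_data("github", 1.0),
        fetch_fake_data("blog", 0.5),
    )
    print(results)

asyncio.run(main())
```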
That's why I subsequently had a look at my portfolio page and tried to identify modules and functions which I could make async to increase the page's performance.
In the following days, I'll try to make the identified areas asynchronous.
Today, Sep 21st, I started the fourth round of the #100DaysOfCode-challenge. Besides removing the thumbnail from my landing page due to loading issues and adjusting some things in the blog, I introduced myself to asynchronous programming in Python! Therefore, I started the Async Techniques and Examples in Python course at Talk Python Training to get a better overview of the topic. I'm looking forward to diving deeper into the whole async topic in the next couple of days!