After fixing a few minor issues in the previous implementation, I was able to get the command working as expected. Additionally, I wrote a bunch of unit and integration tests. Currently, I'm waiting for feedback.
As mentioned yesterday, the time I currently have available for coding is very limited. That's why I continued my work on Jinja2 and simplified the azure-pipelines.yml of my personal website.
Today, I didn't have much time to code. That's why I gave myself a quick introduction to Jinja2, which I'll continue over the upcoming days. Furthermore, I updated my #100DaysOfCode blog (at least for the last couple of days).
Finally, I completed the Docker CI pipeline for one of my Docker images.
The solution to my tagging issue was to use a predefined variable to get the latest tag instead of creating my own.
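For context, a rough sketch of how that can look in azure-pipelines.yml: Build.SourceBranchName is a predefined variable that resolves to the tag name when the build was triggered by a tag push, so there's no need to compute the latest tag manually. The image name below is a placeholder.

```yaml
steps:
  # Build.SourceBranchName holds the tag name (e.g. "1.2.3") for
  # tag-triggered builds, so it can be used directly as the image tag.
  - script: |
      docker build -t myuser/myimage:$(Build.SourceBranchName) .
      docker push myuser/myimage:$(Build.SourceBranchName)
    displayName: Build, tag and push the image
```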
I'm thinking about writing an article about it, as other people might have similar thoughts and issues setting up such a (comparatively simple) pipeline.
Update: At the end of the day I watched a live stream by Anthony Sottile, in which he fixed some issues in pyflakes. During the live stream he encountered a typo in one of the tests. I cloned the repository, fixed it, and submitted a PR, which got merged during the live stream!
Today, I was able to get the tests to pass on Azure Pipelines and to only trigger a full build when a new tag is added via git. Furthermore, a build should also be triggered when a PR is opened, but without pushing the built image to Docker Hub. However, at the end of the day one issue remained: whenever a new tag was created via git and the full build was triggered, the pipeline failed with an obscure error message.
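A minimal sketch of that behaviour in azure-pipelines.yml (the image name is a placeholder): tag pushes and PRs both trigger builds, but the push step is guarded by a condition so PR builds skip it.

```yaml
trigger:
  tags:
    include:
      - '*'          # pushing a git tag triggers a full build

pr:
  branches:
    include:
      - '*'          # opening a PR triggers a build as well

steps:
  - script: docker build -t myuser/myimage .
    displayName: Build image

  - script: docker push myuser/myimage
    displayName: Push to Docker Hub
    # Only run the push for tag builds, never for PR validation builds.
    condition: startsWith(variables['Build.SourceBranch'], 'refs/tags/')
```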
I maintain a few Docker images. However, none of them is covered by automated tests.
As I'm a big fan of continuous integration (CI), I decided to give it a try and set up a CI pipeline for one of my Docker images:
(You can find the source code on GitHub)
The primary idea was to set it up as follows: first build the image, then test it, give it a proper tag, and push it to Docker Hub if all tests pass.
Furthermore, the tests should run on the three major operating systems: Linux, macOS, and Windows.
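A cross-platform setup like that is usually expressed with a strategy matrix in azure-pipelines.yml; a minimal sketch, assuming the hosted vmImage aliases:

```yaml
strategy:
  matrix:
    linux:
      imageName: 'ubuntu-latest'
    macos:
      imageName: 'macOS-latest'
    windows:
      imageName: 'windows-latest'

pool:
  vmImage: $(imageName)

steps:
  - script: echo Running the image tests on $(imageName)
    displayName: Run tests
```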
At the end of the day the rough structure was in place, but the actual tests were still failing in the pipeline, even though they passed on my local machine.
The past three days were tough. There was so much going on at university that it was hard for me to work on my side projects. However, I managed to continue Michael Kennedy's course (at least a bit) and to fix minor style issues on my website. Additionally, I had a closer look at Postfix as a mail server. It might be a possible solution for an automation issue I currently have with another project of mine.
Again, I was travelling through Germany with limited access to anything. However, I publicly committed to the #100DaysOfCode challenge, so I wanted to do something for it - at least an hour! Eventually, I managed to finish the setup part of Michael Kennedy's course and thought about possible use cases where I could apply my new knowledge.
Today, I stumbled across Michael Kennedy's Building Data-Driven Web Apps with Flask and SQLAlchemy course at Talk Python Training. It sounded pretty interesting, so I gave it a try. I really like the concept of building data-driven web applications, so I'll build some smaller ones in the near future - I just need to find good use cases.
I finished the async course at Talk Python Training. It's a pretty good one and I can highly recommend it! Furthermore, I continued setting up a proper CI/CD pipeline to auto-deploy changes made to my portfolio repository to my server. However, I wasn't able to do so - at least not without creating major security vulnerabilities. As the following days and weeks will be pretty tough, I'm going to focus more on the theoretical than on the practical parts of programming.
Today I traveled through Germany, so I didn't have much time to focus on my learning. However, I managed to explore Cython further and implemented a few smaller things.
I wanted to increase the performance of my website even further, so I had a look at Quart and Uvicorn as an ASGI server. Unfortunately, it's not possible to seamlessly migrate from Flask to Quart if you're using extensions like flask-compress and flask-caching, as I do.
Additionally, I had a look at Cython, which seems to be pretty nice if you know at least the basics of the C programming language. Furthermore, I started to implement my own CI/CD pipeline to auto-deploy any changes made to my master branch on GitHub to my personal server. I stumbled upon certain obstacles and hope to resolve them in the coming days.
Today I wanted to get my hands dirty and re-implement certain parts of my personal portfolio page to make use of AsyncIO and/or threads to increase its performance. To that end, I tried to use async techniques to scrape the GitHub data for my landing page asynchronously. Unfortunately, I wasn't able to do that, as the module architecture didn't allow it. Subsequently, I switched over to threads, was able to scrape the GitHub data in parallel, and increased the performance of the landing page! Here's a quick comparison:
                Without Threading     With Threading
                -----------------     --------------
    Worst Case:    1.9 seconds   -->    0.7 seconds
    Best Case:     0.7 seconds   -->    0.4 seconds
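The threaded scrape can be sketched with concurrent.futures; fetch_repo_stats below is a hypothetical stand-in (simulated with a short sleep) for the real, blocking GitHub API call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_repo_stats(repo):
    # Placeholder for a blocking HTTP call to the GitHub API,
    # simulated here with a short sleep (an I/O wait).
    time.sleep(0.2)
    return {"repo": repo, "stars": 42}

repos = ["portfolio", "blog", "docker-image"]

# Sequential: total time is roughly n * 0.2s.
start = time.perf_counter()
sequential = [fetch_repo_stats(r) for r in repos]
seq_time = time.perf_counter() - start

# Threaded: the I/O waits overlap, so total time stays near 0.2s.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(repos)) as pool:
    threaded = list(pool.map(fetch_repo_stats, repos))
thr_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, threaded: {thr_time:.2f}s")
```

Threads help here because the work is I/O-bound: while one thread waits on the network, the others can run.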
Furthermore, I dived deeper into thread safety and how to ensure it in Python, and fixed a few styling issues on my website.
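A minimal illustration of why thread safety matters: counter += 1 is a read-modify-write sequence, so concurrent increments can be lost unless the update is guarded by a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write of counter atomic;
        # without it, concurrent threads could overwrite each other's updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 - deterministic thanks to the lock
```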
Update: As I was highly motivated in the evening, I also completed the multiprocessing unit of the async course and learned how to unify the different APIs using pool executors.
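The unification boils down to both executors implementing the same concurrent.futures interface, so switching between threads and processes is a one-line change; a small sketch:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(n):
    return n * n

def run(executor_cls):
    # The same code works for threads and processes because both
    # executors share the concurrent.futures.Executor interface.
    with executor_cls(max_workers=4) as pool:
        return list(pool.map(square, range(8)))

if __name__ == "__main__":
    # The __main__ guard is required for process pools on some platforms.
    thread_result = run(ThreadPoolExecutor)
    process_result = run(ProcessPoolExecutor)
    print(thread_result == process_result)  # True
```

Rule of thumb: threads for I/O-bound work, processes for CPU-bound work that needs to sidestep the GIL.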
async and await are great, but not always applicable. That's why I had a look at threading in Python and learned about its similarities with AsyncIO and where its limits are (the GIL). Now that I know the similarities and dissimilarities of both approaches, I need to take a closer look at my portfolio page and decide whether it makes more sense to increase performance using AsyncIO or the threading module.
After yesterday's introduction to Python's asynchronous capabilities, I took a closer look at the async and await keywords introduced in Python 3.5 (PEP 492).
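A minimal example of the two keywords (asyncio.run requires Python 3.7+): awaiting asyncio.sleep yields control to the event loop, so both coroutines wait concurrently.

```python
import asyncio

async def fetch(name, delay):
    # Simulates an I/O-bound call; await hands control back to the event loop.
    await asyncio.sleep(delay)
    return name

async def main():
    # gather runs both coroutines concurrently: total time is ~0.2s, not 0.3s.
    return await asyncio.gather(fetch("github", 0.2), fetch("blog", 0.1))

print(asyncio.run(main()))  # ['github', 'blog']
```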
I found it pretty insane how easy it is to apply asynchronous behaviour to Python functions!
That's why I subsequently had a look at my portfolio page and tried to identify modules and functions that I could make async to increase the page's performance.
In the following days I'll try to work on the identified areas.
Today, Sep 21st, I started the fourth round of the #100DaysOfCode challenge. Besides removing the thumbnail from my landing page due to loading issues and adjusting some things in the blog, I introduced myself to asynchronous programming in Python! To that end, I started the Async Techniques and Examples in Python course at Talk Python Training to get a better overview of the topic. I'm looking forward to diving deeper into the whole async topic over the next couple of days!