Logging is an essential part of software development.
Many people still stick to the habit of using Python's built-in print()
function instead of a proper logging library.
However, simple print()
function calls cannot replace a good logger.
Some people argue that using a logger instead of the print()
function is too much overhead.
Continue reading and I will present loguru to you: a logging library that takes away the pain of setting up a proper logger. At the same time, you will see that despite its simplicity, loguru is still a very powerful and customisable library.
The code used in this article can be found on GitHub. It was written for Python 3.9 (CPython).
To get started with loguru, you need to install it first. Fortunately, this is as easy as:
$ python -m pip install loguru
Once it is installed, you can create a new logger instance in your Python code and start logging:
# hello.py
from loguru import logger
logger.info("Hello from loguru!")
This little snippet already teaches us a lot about the capabilities of loguru. But before diving deeper into it, let us first execute the script and see the produced output:
$ python hello.py
2021-07-21 12:16:59.394 | INFO | __main__:<module>:4 - Hello from loguru!
First, the log message is emitted to sys.stderr. This is the default behaviour loggers should have, as defined in the POSIX standard.
Secondly, the log message has the following structure:
<datetime> | <log level> | <file location>:<scope>:<line number> - <message>
Note that the file location is __main__ because we executed hello.py directly. Furthermore, the scope is set to <module> as the log message is located in the module's scope and not inside a function or class.
If you execute the snippet on your local machine, you will also recognise that loguru innately supports coloured log messages.
The default behaviour - importing loguru.logger and using it much like an instance of Python's standard library logging.Logger class - is already sufficient for smaller projects and sample code.
If you are looking for customisation capabilities, follow along and explore loguru's most powerful method.
If you have ever worked with Python's logging module, you may know that you can use custom handlers, formatters, and filters to personalise the output of the logger.
When using loguru, there is a single function that you will need to know: add(). As the loguru documentation states: One function to rule them all.
In essence, the add() function is used to register sinks. Sinks are responsible for managing log messages and can be contextualised with the record dict, which stores all contextual information associated with a given log message. We will see how this can be used in a second.
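To make this a bit more tangible, here is a minimal sketch of a callable sink; the function my_sink and the file name are purely illustrative and not part of the article's snippets. Any callable accepting the formatted message can act as a sink, and the message's record attribute gives access to the record dict:
# custom_sink.py
from loguru import logger
def my_sink(message):
    # message is the formatted log message; message.record is the record dict
    record = message.record
    print(f"[{record['level'].name}] {record['message']}")
logger.add(my_sink)
logger.info("Hello from a custom sink!")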
In the getting started section, we already learnt something about the default format of a log message. Fortunately, we do not need to stick to this format and can adjust it to our needs. To do so, we register a new sink and specify the format for it:
# customise_format.py
import sys
from loguru import logger
logger.add(sys.stderr, format="{time} {level} {message}")
logger.info("Hello from loguru!")
Notice that the format string looks similar to Python's f-strings. However, no f is present before the string. loguru does its own string interpolation and replaces the given placeholders (in the example at hand {time}, {level}, and {message}) with the entries in the record dict. An overview of all available record dict items can be found here.
Executing the script results in:
$ python customise_format.py
2021-07-22 16:55:18.623 | INFO | __main__:<module>:6 - Hello from loguru!
2021-07-22T16:55:18.623722+0200 INFO Hello from loguru
Two interesting things are happening here.
First, we created only one log message in our code, but two were sent to sys.stderr. Secondly, the first message is coloured, whereas the one with our custom format is not.
To fix the first issue, we need to unregister or remove the default sink. We can do that by calling loguru's remove() function:
# previous code in customise_format.py
logger.remove(0)
# subsequent code in customise_format.py
The add() function returns the handler id of the registered sink. This id needs to be supplied to remove() to unregister that sink. If None is supplied (the default), all handlers are removed. It is guaranteed that the pre-configured handler has the id 0. Consequently, we passed 0 to the remove() function to unregister it.
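As a side note, you do not have to hard-code the 0: you can keep the id returned by add() and hand it back to remove() later. A minimal sketch (the file name is just an example):
# keep_handler_id.py
import sys
from loguru import logger
# add() returns the id of the newly registered sink ...
handler_id = logger.add(sys.stderr, format="{time} {level} {message}")
logger.info("Hello through the extra sink!")
# ... which can later be passed to remove() to unregister exactly this sink.
logger.remove(handler_id)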
The second issue can be fixed by supplying colorize=True when adding a new sink and using markup tags [1] in the format string. When both fixes come together, our script looks like this:
# customise_format.py
import sys
from loguru import logger
logger.remove(0)
logger.add(sys.stderr, format="<red>{time}</red> <green>{level}</green> {message}", colorize=True)
logger.info("Hello from loguru!")
Executing the script again results in:
$ python customise_format.py
2021-07-22T17:04:51.225012+0200 INFO Hello from loguru
The colours you see in this article may not be the same as in your terminal. Make sure to run the code on your machine to see the actual colours.
While adjusting the format of log messages belongs to the standard repertoire of a logging library, loguru also comes with a few dedicated file logging features.
But let us start at the beginning.
Suppose you want to keep the pre-configured logger emitting logs to sys.stderr and add another sink writing the log messages to a specified file. To this end, we register a new sink:
# file_logger.py
from loguru import logger
logger.add("normal_file.log")
logger.info("Hello from loguru!")
Executing the script not only emits the log message to sys.stderr, but also creates a new file in your working directory called normal_file.log, which contains the same log message as the one written to sys.stderr.
So far so good.
Let's add three more sinks: one each for retention, rotation, and compression.
# file_logger.py
from loguru import logger
logger.add("normal_file.log")
logger.add("retention_file.log", retention="5 days")
logger.add("rotation_file.log", rotation="1 MB")
logger.add("compress_file.log", compression="zip")
logger.info("Hello from loguru!")
Executing the script again prints the log message. Additionally, we find four different log files in our working directory, one for each registered file sink.
When dealing with applications that run over a long period, or if you want to prevent log files from growing too big, utilising retention, rotation, and compression can be a real game-changer!
A list of file logging options can be obtained from loguru's documentation.
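Note that these options can also be combined on a single sink. The following sketch (file name and values are made up for illustration, not taken from the article's repository) shows what a long-running application might use:
# combined_file_logger.py
from loguru import logger
# Rotate once the file reaches 10 MB, keep rotated files for 10 days,
# and compress old files into .zip archives.
logger.add("app.log", rotation="10 MB", retention="10 days", compression="zip")
logger.info("Hello from loguru!")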
The third customisation topic I want to cover is structured logging.
You may come across situations where you want to extend the default record dict entries to add even more context to your log messages.
Suppose you have a certain piece of code where you also want to log the IP address that is being used. To do so, we use the {extra[ip]} placeholder in the format string. To add a value to it, we utilise loguru's bind() function to create a logger carrying that information:
# structured_logging.py
import sys
from loguru import logger
logger.info("Hello from loguru!")
logger.remove(0)
user_ip = "127.0.0.1"
logger.add(sys.stderr, format="{time} | {level} | {extra[ip]} | {message}")
ip_logger = logger.bind(ip=user_ip)
ip_logger.info("Hello from loguru's IP logger!")
When executing the script at hand, you will see two log messages in your terminal:
$ python structured_logging.py
2021-07-22 18:59:00.576 | INFO | __main__:<module>:5 - Hello from loguru!
2021-07-22T18:59:00.587146+0200 | INFO | 127.0.0.1 | Hello from loguru's IP logger
The first one does not provide information about the user's IP address, whereas the second one does. However, you cannot use the plain logger instance to log messages at this point. If you try, a KeyError is raised because ip is not defined in its context.
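If you want the plain logger to keep working with the {extra[ip]} format as well, one option (a sketch that is not part of the original script) is to register a default value for the extra entry via logger.configure():
# structured_logging_defaults.py
import sys
from loguru import logger
logger.remove(0)
logger.add(sys.stderr, format="{time} | {level} | {extra[ip]} | {message}")
# Provide a fallback so log calls without bind() do not raise a KeyError.
logger.configure(extra={"ip": "unknown"})
logger.info("No IP bound here")
logger.bind(ip="127.0.0.1").info("Hello from loguru's IP logger!")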
So far, we have had a look at getting started with loguru and at three customisation topics. But loguru is much more powerful! So let's have a look at a few more cool features.
Running code in threads is a nice way to boost the performance of your program, but sometimes it crashes due to unhandled exceptions. To log these exceptions as well, you can apply the catch() decorator to a given function:
# catch_exceptions.py
from loguru import logger
@logger.catch
def division(dividend: int, divisor: int) -> float:
    return dividend / divisor
print(division(2, 1))
print(division(2, 0))
Executing the script gives you this prettified log message:
$ python catch_exceptions.py
2.0
2021-07-24 10:15:45.418 | ERROR | __main__:<module>:11 - An error has been caught in function '<module>', process 'MainProcess' (6874), thread 'MainThread' (140382092546688):
Traceback (most recent call last):
> File /home/florian/workspace/python/loguru-article-snippets/catch_exceptions.py, line 11, in <module>
print(division(2, 0))
└ <function division at 0x7fad3fa93820>
File /home/florian/workspace/python/loguru-article-snippets/catch_exceptions.py, line 7, in division
return dividend / divisor
│ └ 0
└ 2
ZeroDivisionError: division by zero
None
I can highly recommend that you run the code on your system, too, so you will see the colourised and well-structured log message displayed above.
Additionally, all sinks are thread-safe by default. Making them multiprocess- and asynchronous-safe is as easy as adding enqueue=True when registering the sink.
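For instance, a file sink registered like this (a minimal sketch, file name is just an example) can safely be written to from multiple processes:
# enqueue_example.py
from loguru import logger
# enqueue=True routes messages through a multiprocessing-safe queue,
# which also makes the logging calls themselves non-blocking.
logger.add("app.log", enqueue=True)
logger.info("Safe to log from multiple processes")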
But that's not all.
loguru is completely compatible with Python's logging module from the standard library.
Check out loguru's documentation for a guide on how to fully migrate from the logging module to loguru [2].
Furthermore, loguru enables you to create custom log levels and even supports lazy evaluation of expensive functions [3].
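To give you a rough idea of both features, here is a small sketch (the level name, its number, and the helper function are made up for illustration):
# extra_features.py
from loguru import logger
# Register a custom log level between INFO (20) and WARNING (30).
logger.level("NOTICE", no=25, color="<magenta>")
logger.log("NOTICE", "Hello from a custom level!")
def expensive_computation():
    return sum(range(1_000_000))
# With opt(lazy=True), the lambda is only evaluated if the message
# is actually going to be logged at the DEBUG level.
logger.opt(lazy=True).debug("Result: {}", lambda: expensive_computation())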
As you can see, loguru is a really powerful logging library, although it hasn't reached its stable version yet (still a 0.x release). Before concluding the article, we need to have a look at another essential topic: Testing.
If you are familiar with the third-party testing library pytest, you would probably try to utilise its caplog fixture to compare loguru's log messages with your expected ones. However, the caplog fixture is tied to Python's logging module from the standard library, which means that log messages emitted by loguru are not captured.
To capture loguru's messages anyway, we need to override the caplog fixture and register a sink that propagates the log messages to the logging module. Simply put, add the following custom fixture to your conftest.py [4]:
import logging
import pytest
from _pytest.logging import caplog as _caplog
from loguru import logger
@pytest.fixture
def caplog(_caplog):
    class PropagateHandler(logging.Handler):
        def emit(self, record):
            # Re-dispatch loguru's records through the standard logging machinery,
            # so pytest's caplog can capture them.
            logging.getLogger(record.name).handle(record)
    handler_id = logger.add(PropagateHandler(), format="{message} {extra}")
    yield _caplog
    logger.remove(handler_id)
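With this fixture in place, a test can assert on loguru's output just like on output from the standard logging module. A hypothetical example:
# test_logging.py
from loguru import logger
def test_greeting_is_logged(caplog):
    logger.info("Hello from loguru!")
    assert "Hello from loguru!" in caplog.text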
Congratulations, you have made it through the article! In this article, you learnt what loguru is and how to get started using it. You had a look at different customisation approaches and had a glimpse at how powerful loguru is through its many logging features.
I hope you enjoyed reading the article. Make sure to share it with your friends and colleagues. If you haven't already, follow me on Twitter, where I am @DahlitzF, and subscribe to my newsletter, so you won't miss any future articles. Stay curious and keep coding!