Previously, we considered how to test print output using pytest. This continuation considers how to test logged output. For printed output, we used pytest's capsys fixture to grab stdout (quite useful) and stderr (unclear utility?). capsys doesn't provide a mechanism to tap into the logging stream. So, what should we use instead?
First, let's begin by proving that capsys doesn't access logging (at least, with the default configuration):
import logging
import time

def main_logging():
    logging.debug('Process started.')
    start_time = time.time()
    time.sleep(1)
    end_time = time.time()
    total_time = end_time - start_time
    logging.critical(f'Done! Process took {total_time}.')
And our test:
def test_capture_logging(capsys):
    main_logging()
    cr = capsys.readouterr()
    assert 'Done!' in cr.err
    assert 'Done!' in cr.out
# fails:
# >       assert 'Done!' in capsys.readouterr().err
# E       AssertionError: assert 'Done!' in ''
# E        +  where '' = CaptureResult(out='', err='').err
Nothing appears in either err or out. To my understanding, by default, the logging module writes to stderr, but pytest will only collect logging info using the caplog fixture. Let's see how that works.
def test_capture_logging(caplog):
    main_logging()
    assert 'Done!' in caplog.text
# passes: test_capture_logging PASSED [100%]
Okay, then let's also test for the presence of the 'Process started.' output:
import pytest

@pytest.mark.xfail()
def test_capture_logging(caplog):
    main_logging()
    assert 'Done!' in caplog.text
    assert 'Process started.' in caplog.text
# fails:
'''
AssertionError: assert 'Process started.' in 'CRITICAL root:capture_logging.py:13 Done! Process took 1.0004231929779053.\n'
'''
Oh, it's not there. Where did 'Process started.' go? Note that the text has not been flushed (we would need to manually call caplog.clear() to do that). Any ideas? The key is the logging level: the first call is to logging.debug, and the second is to logging.critical. By default, only warning or higher gets captured. To fix this, we just need to set the logging level to the minimum we want tested, using caplog.set_level.
def test_capture_logging(caplog):
    main_logging()  # default logging level: warning or above
    assert 'Done!' in caplog.text                 # critical
    assert 'Process started.' not in caplog.text  # debug

    caplog.clear()             # reset output
    assert caplog.text == ''   # prove lack of logging output

    caplog.set_level('NOTSET')   # allow all logs: same as 'DEBUG'
    # caplog.set_level('DEBUG')  # we could also use 'DEBUG'
    main_logging()
    assert 'Done!' in caplog.text             # critical
    assert 'Process started.' in caplog.text  # debug
# passes: test_capture_logging PASSED [100%]
First, we're testing what we tried before, but ensuring that the debug text is not in the logger. Next, in the lower half after resetting caplog, we alter the level to 'NOTSET' (or 'DEBUG'). Note that we can also use logging.DEBUG or logging.NOTSET, etc., here.
pytest and loguru
For logging, I default to using loguru. As with many habits I've developed over the years, I don't repeatedly check for alternatives, so if there's a flashy new library that all the kids are using (so to speak), I'd love to hear about it.
Let's create the loguru version of our same function:
import time

from loguru import logger

def main_loguru():
    logger.debug('Process started.')
    start_time = time.time()
    time.sleep(1)
    end_time = time.time()
    total_time = end_time - start_time
    logger.critical(f'Done! Process took {total_time}.')
Now, we can copy-paste our test_capture_logging, change main_logging to main_loguru, and see if that works:
def test_capture_loguru(caplog):
    main_loguru()  # default logging level: warning or above
    assert 'Done!' in caplog.text                 # critical
    assert 'Process started.' not in caplog.text  # debug

    caplog.clear()             # reset output
    assert caplog.text == ''   # prove lack of logging output

    caplog.set_level('NOTSET')   # allow all logs: same as 'DEBUG'
    # caplog.set_level('DEBUG')  # we could also use 'DEBUG'
    main_loguru()
    assert 'Done!' in caplog.text             # critical
    assert 'Process started.' in caplog.text  # debug
# fails
'''
E       AssertionError: assert 'Done!' in ''
'''
Unfortunately not. loguru operates differently. Instead, we'll need to write a fixture to add the caplog handler to loguru. For example, something like:
import pytest
from loguru import logger

@pytest.fixture
def loguru_caplog(caplog):
    handler_id = logger.add(caplog.handler, format="{message}")
    yield caplog
    logger.remove(handler_id)
We're going to get slightly different behaviour, since loguru defaults to outputting all messages (not just warning and higher), so we'll need to modify our test accordingly.
def test_capture_loguru(loguru_caplog):
    main_loguru()  # default loguru level: debug and above
    assert 'Done!' in loguru_caplog.text             # critical
    assert 'Process started.' in loguru_caplog.text  # debug
# passes: test_capture_loguru PASSED [100%]
Testing Log Files
Testing logging output, however, is very different from testing logging to files. If your application logs to a file, we need to be able to specify that file from the test, so that we can read and evaluate the result. For example, let's optionally allow someone to specify a logfile for our main_loguru:
import time
from pathlib import Path
from typing import Optional

from loguru import logger

def main_loguru(logfile: Optional[Path] = None):
    if logfile:  # allow optional logfile
        logger.add(logfile)
    logger.debug('Process started.')
    start_time = time.time()
    time.sleep(1)
    end_time = time.time()
    total_time = end_time - start_time
    logger.critical(f'Done! Process took {total_time}.')
Our prior test test_capture_loguru will still work (since the default is not to create a log file). To test the output written to the file itself, we specify a logfile from the test, and then read that file once the function has completed. We'll use the tmp_path fixture to test this out.
def test_capture_loguru_file(tmp_path):
    logfile = tmp_path / 'main_loguru.log'
    main_loguru(logfile)
    # test the output
    with open(logfile) as fh:
        lines = fh.readlines()
    assert 'Process started.' in lines[0]
    assert 'Done!' in lines[1]
# passes: test_capture_loguru_file PASSED [100%]
In the above function, we're using the tmp_path fixture to get a temporary directory, creating a log file in that directory, running the function to log to that file, and then testing the output in that path.
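For comparison, the same specify-then-read pattern works with the stdlib logging module. Here, tempfile.TemporaryDirectory stands in for pytest's tmp_path, and main_logging_file is a hypothetical file-logging variant of our earlier function:

```python
import logging
import tempfile
from pathlib import Path

def main_logging_file(logfile: Path):
    # hypothetical variant of main_logging that logs to a given file
    log = logging.getLogger('demo')
    log.setLevel(logging.DEBUG)
    handler = logging.FileHandler(logfile)
    log.addHandler(handler)
    try:
        log.debug('Process started.')
        log.critical('Done!')
    finally:
        log.removeHandler(handler)  # detach so the file can be read cleanly
        handler.close()

with tempfile.TemporaryDirectory() as tmp:
    logfile = Path(tmp) / 'main_logging.log'
    main_logging_file(logfile)
    lines = logfile.read_text().splitlines()

# the default handler format is just '%(message)s'
assert 'Process started.' in lines[0]
assert 'Done!' in lines[1]
```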
It's worth noting that since loguru doesn't behave the same as the stdlib logging module, when we add a file, the output is still routed to the console, so the loguru_caplog fixture will still work. We can prove this by extending our test_capture_loguru_file function:
def test_capture_loguru_file(tmp_path, loguru_caplog):
    logfile = tmp_path / 'main_loguru.log'
    main_loguru(logfile)
    # test the output
    with open(logfile) as fh:
        lines = fh.readlines()
    assert 'Process started.' in lines[0]
    assert 'Done!' in lines[1]
    # test loguru_caplog too
    assert 'Process started.' in loguru_caplog.text  # debug
    assert 'Done!' in loguru_caplog.text             # critical
# passes: test_capture_loguru_file PASSED [100%]
Summary
We've explored pytest's caplog fixture, which enables us to interact with and test logging output. Additionally, we created a pytest fixture customised for loguru, since caplog doesn't work with loguru out of the box.