Test Logging in Django
Background
Logging is an important part of well-written code. We use logging to track events for error handling, auditing, security, and so on. I primarily use logging for auditing and security. But logging can also be a great tool for error handling: catch an exception and let your code continue executing while still preserving the traceback for later review. For example:
```python
try:
    ...  # some block of code that may raise SomeException
except SomeException:
    logger.exception("Something bad happened.")
```
With this approach, the code will continue without interruption but the exception will end up in our log so we can review later.
One area where I like to use logging is when data is received and looks mostly right, but something is slightly amiss. The request is in good enough shape that we can continue with the operation, but an admin should take a peek at what happened just the same.
In a previous article, Flatpages In Django Part 2: Extending Flatpages, we used logging to alert when the request path and stored URL for a given FlatPage don't match. The difference between the two values is logged, and we can later inspect the cause of the discrepancy.
Regardless of how we choose to use logging, we need to make sure it's working as expected. This is where testing comes in.
Testing Logging
Because I don't use a ton of logging, I needed a quick refresher on how to properly unit test it. The docs point to simple tools for testing our logging thoroughly. Specifically, Python's unittest provides two assertion methods that act as context managers, and both are available through Django's TestCase:
- assertLogs(logger=None, level=None)
- assertNoLogs(logger=None, level=None)
Since these are context managers, as mentioned above, we'll set the tests up as "with" blocks containing the code where logging is present. Then, we'll further test the captured log messages to make sure they were constructed as expected. The test will look something like:
```python
with self.assertNoLogs(logger="my_logger", level="WARNING"):
    ...
    # code that does NOT produce a log (at the 'warning' level)
    ...

...
# code that makes some change so the log will now occur
...

with self.assertLogs(logger="my_logger", level="WARNING") as cm:
    ...
    # code that DOES produce a log (at the 'warning' level)
    ...

# make sure the log message is correct
self.assertEqual(cm.output, ["log-message"])
```
With the above approach, we're testing both sides of the conditional: we know that the logs we want are produced, and that no unwanted logs are produced.
What We're Testing
For this demonstration, we'll test the code from the Flatpages In Django Part 2: Extending Flatpages article, which is as follows:
```python
import logging

from django.contrib.flatpages.views import render_flatpage
from django.contrib.flatpages.models import FlatPage
from django.shortcuts import get_object_or_404

logger = logging.getLogger(__name__)


def flatpage_by_id(request, obj_id):
    f = get_object_or_404(FlatPage, pk=obj_id)
    path = request.get_full_path()
    if f.url != path:
        logger.warning(f"The flatpage with url '{f.url}' doesn't match the request path '{path}'.")
    return render_flatpage(request, f)
```
In this view function, we retrieve an object by a provided primary key. We then compare the request path against the URL stored as a field to the object. If the values don't match, we log the discrepancy. Either way, execution continues.
The Test
The test itself will have three parts:
- A block that tests that no log is produced.
- A block that updates a FlatPage object so a log will be produced.
- A block that tests that a log IS produced and checks that the message is what we expect.
All of this will be tested assuming that the FlatPage object already exists. We'll use one for example purposes with title, URL, and primary key of "Privacy Policy," "/privacy-policy/," and "1" respectively. This object will be accessible as "self.flatpage." In real life, I would import the objects into the testing environment using Django Fixtures.
Make Sure No Logs Are Produced
First, we test that under normal conditions, we won't have a log generated.
```python
reversed_url = reverse("privacy-policy")

with self.assertNoLogs(logger.name, "WARNING"):
    # there should not be a log
    response = self.client.get(reversed_url)

self.assertEqual(response.request.get("PATH_INFO"), self.flatpage.url)
```
Update FlatPage to Cause a Log to be Produced
Now let's update the URL field of the FlatPage object so the conditional triggers and a log is generated.
```python
self.flatpage.url = "/not-the-correct-url/"
self.flatpage.save()
```
Produce a Log and Check That It's Correct
When we run the following test, our view will encounter a path that differs from the stored URL and a log will be produced.
```python
reversed_url = reverse("privacy-policy")

with self.assertLogs(logger.name, "WARNING") as cm:
    # a log should be produced letting us know we have a bad url
    response = self.client.get(reversed_url)

self.assertNotEqual(response.request.get("PATH_INFO"), self.flatpage.url)

log_message = "The flatpage with url '{}' doesn't match the request path '{}'."
self.assertEqual(
    cm.output,
    [
        f"WARNING:{logger.name}:"
        f"{log_message.format(self.flatpage.url, response.request.get('PATH_INFO'))}"
    ],
)
```
First, we check that the path and URL field aren't equal. This is somewhat of a sanity check to ensure our test is set up properly. Next, we build the log message based on what's present in the view. Then we compare the output stored by the context manager to make sure the log produces the message we expect.
Things To Note
The logger used in the view doesn't have an explicitly defined name. Instead, the logger was initialized as:
```python
import logging

logger = logging.getLogger(__name__)
```
In this situation, __name__ will be the module name, which isn't likely to change often in our app. Just the same, I wouldn't hard-code the logger's name in the test. Instead, import the logger from the view into the test and access its name directly. That way, if the module name ever changes, the test doesn't break.
```python
from myapp import logger

with self.assertLogs(logger.name, "WARNING") as cm:
    ...
```
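The reason this works is that `logging.getLogger` is a registry lookup: the same name always yields the same logger object. A quick standalone illustration, using a stand-in string for what `__name__` would resolve to in the view module:

```python
import logging

# stand-in for what __name__ resolves to inside the view module
logger = logging.getLogger("utils.flatpages.views")

# getLogger returns the same object for the same name, so importing the
# logger and reading .name always stays in sync with the module, even
# if the module path is later renamed
assert logging.getLogger("utils.flatpages.views") is logger
print(logger.name)  # utils.flatpages.views
```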
Bringing It All Together
With all three parts of the above test, plus imports and setup, we have:
```python
from django.test import TestCase
from django.contrib.flatpages.models import FlatPage
from django.urls import reverse

from utils.flatpages.views import logger


class TestFlatPageByIdView(TestCase):
    fixtures = [
        "utils/fixtures/sites.json",
        "utils/fixtures/flatpages.json",
    ]

    @classmethod
    def setUpTestData(cls):
        cls.static_url = "/privacy-policy/"
        cls.reversed_url = reverse("privacy-policy")  # flatpage id 1
        # noinspection PyUnresolvedReferences
        cls.flatpage = FlatPage.objects.get(id=1)

    def test_logs_bad_url(self):
        # path and url should be equal
        with self.assertNoLogs(logger.name, "WARNING"):
            # there should not be a log
            response = self.client.get(self.reversed_url)
        self.assertEqual(response.request.get("PATH_INFO"), self.flatpage.url)

        # now update flatpage with a bad url so a log is produced
        self.flatpage.url = "/not-the-correct-url/"
        self.flatpage.save()

        with self.assertLogs(logger.name, "WARNING") as cm:
            # a log should be produced letting us know we have a bad url
            response = self.client.get(self.reversed_url)
        self.assertNotEqual(response.request.get("PATH_INFO"), self.flatpage.url)

        log_message = "The flatpage with url '{}' doesn't match the request path '{}'."
        self.assertEqual(
            cm.output,
            [
                f"WARNING:{logger.name}:"
                f"{log_message.format(self.flatpage.url, response.request.get('PATH_INFO'))}"
            ],
        )
```
The above initializes a FlatPage object using Fixtures. Because the FlatPage model depends on the Django Sites Framework, I also import that fixture, and do so first. I keep my fixtures stored in a "utils" directory. If you didn't want to use Fixtures, you could instead create these objects directly in your setUp or setUpTestData methods.
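For reference, here's a minimal sketch of what the flatpages fixture might contain for the object described above. The field values are illustrative, not the article's actual fixture files; the sites fixture would hold a matching `sites.site` entry with pk 1.

```json
[
  {
    "model": "flatpages.flatpage",
    "pk": 1,
    "fields": {
      "url": "/privacy-policy/",
      "title": "Privacy Policy",
      "content": "<p>Sample content.</p>",
      "sites": [1]
    }
  }
]
```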
Last, I want to mention that I would include a number of other tests along with the logging test. To start, I would test the reversed URL against a static URL as well as whether the correct template was used. What's discussed here is just one component to a broader test of a view function.
Final Thoughts
Logs need to be tested, especially since a log is generally produced in the context of a serious event. This may be a less frequently encountered testing situation, but it's one that needs to be covered nonetheless. Fortunately, both Python and Django provide easy-to-use tools to do just that. With the above, we test both sides of the condition (a situation that produces a log and one that doesn't), and then we verify that the message itself is constructed correctly. By including what's discussed here, you're well on your way to making sure your logging will function as desired.