Logger Injection vs Static Logging

I work a lot with legacy applications, upgrading the internals to use modern architectural approaches.  I often encounter older systems that do not have an IoC container, so adding one is the first thing I do.  Along with that effort goes converting a handful of existing classes into services and loading those services into the container.

The first service I always tackle in these conversions is the logger.  It’s typically simple with no business logic, and nicely demonstrates the idea of how to resolve a service from a container (for those team members who are unfamiliar with IoC).  It’s also a good first step in getting all the proper references into the projects of a solution so they can use the container.  In general, adding a container and converting the logger to use it fits nicely into a single sprint.

A pattern I keep seeing, and one that makes these first few steps of a conversion much more difficult, is using a static variable as the handle for the logger.  I’ve seen this pattern in many instructional articles for the various logging libraries.  It is a bad practice for several reasons:

  • Static logger instances are difficult to mock in a unit test.  This means your logger is always writing logs even during unit tests when they are probably not needed.  This slows down the tests and eats up disk space unnecessarily.

  • Since you cannot mock the logger, you also cannot write tests to ensure an error log is written in appropriate situations.  Using Mock.Verify() is a great way of ensuring errors are logged properly (see the test sketch after this list).

  • Static loggers cannot be replaced at runtime to allow injection of different loggers. This can be especially important if you are releasing a library for others to use. Define a standard logger interface and log everything to that.  You can provide your own built-in logger, but also allow the user to replace that with their own preference.

  • Most of all, using the default static logger implementation provided by the logging vendor locks you into their interface.  This means you cannot hide or change the surface area of the logger.  Changing loggers becomes a MUCH bigger effort if the new logger's API is different.
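To make the mocking point concrete, here is a minimal test sketch.  The IAppLogger interface, the OrderProcessor service, and its behavior are all hypothetical, and the sketch assumes the Moq and xUnit libraries:

```csharp
using Moq;
using Xunit;

// Minimal hypothetical logging interface; a fuller version
// appears in the wrapper section below.
public interface IAppLogger
{
    void Error(string message);
}

// Hypothetical service that receives the logger through its constructor.
public class OrderProcessor
{
    private readonly IAppLogger _logger;

    public OrderProcessor(IAppLogger logger) => _logger = logger;

    public void Process(int quantity)
    {
        if (quantity <= 0)
        {
            _logger.Error("Invalid quantity: " + quantity);
            return;
        }
        // ... normal processing ...
    }
}

public class OrderProcessorTests
{
    [Fact]
    public void Process_InvalidQuantity_LogsError()
    {
        var logger = new Mock<IAppLogger>();
        var processor = new OrderProcessor(logger.Object);

        processor.Process(-1);

        // Verify an error was logged exactly once; no real log is ever written.
        logger.Verify(l => l.Error(It.IsAny<string>()), Times.Once);
    }
}
```

None of this is possible when the service reaches out to a static logger instance.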

Use a Log Wrapper

A much better alternative is to create a logging interface that does things the way you like, then write a wrapper class that translates calls on that interface into the proper calls for the logging library you actually use.

Create an interface with the logging method signatures you like.  Then add a class that implements that interface and wraps your favorite logger.  The wrapping class can simply pass each log call through to that logger, so you keep all of its advantages while retaining the option of ripping it out later and replacing it with something else.
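As a concrete illustration, here is a minimal sketch of such an interface and wrapper.  The IAppLogger name and its method set are hypothetical choices, and the wrapper targets Serilog:

```csharp
using System;
using Serilog;

// Hypothetical logging interface owned by your application.
public interface IAppLogger
{
    void Debug(string message);
    void Info(string message);
    void Error(string message);
    void Error(string message, Exception ex);
}

// Wrapper that translates IAppLogger calls into Serilog calls.
public class SerilogAppLogger : IAppLogger
{
    private readonly ILogger _logger;

    public SerilogAppLogger(ILogger logger) => _logger = logger;

    public void Debug(string message) => _logger.Debug(message);
    public void Info(string message) => _logger.Information(message);
    public void Error(string message) => _logger.Error(message);
    public void Error(string message, Exception ex) => _logger.Error(ex, message);
}
```

Register the wrapper in your container against IAppLogger, and nothing outside this one class ever mentions Serilog directly.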

It also opens up the possibility of adding more than one logger.  The class that implements the interface can easily make calls to two real loggers for a single message, or even change its behavior depending on the environment it is running in.
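A sketch of that idea, building on the hypothetical IAppLogger interface above and fanning each call out to any number of inner loggers:

```csharp
using System;

// Fans each log call out to every inner IAppLogger,
// e.g. a Serilog wrapper plus a database audit logger.
public class CompositeAppLogger : IAppLogger
{
    private readonly IAppLogger[] _loggers;

    public CompositeAppLogger(params IAppLogger[] loggers) => _loggers = loggers;

    public void Debug(string message)
    {
        foreach (var logger in _loggers) logger.Debug(message);
    }

    public void Info(string message)
    {
        foreach (var logger in _loggers) logger.Info(message);
    }

    public void Error(string message)
    {
        foreach (var logger in _loggers) logger.Error(message);
    }

    public void Error(string message, Exception ex)
    {
        foreach (var logger in _loggers) logger.Error(message, ex);
    }
}
```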

Enter Blazor

With the advent of Blazor, this logger wrapper approach is even more important.  C# code can be shared between the server side and the front end.  If you implement a logger that can only run on the server, not in WASM, then your shared objects will break when trying to log data from the browser.  Implementing a logger wrapper with an interface allows you to provide different loggers on the front end (e.g. one based on Console.WriteLine(…)) and on the back end, while the shared classes keep logging through the same interface.
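A sketch of how that might look, again against the hypothetical IAppLogger interface.  WASM has no file system, so the browser-side implementation writes to the browser console:

```csharp
using System;

// Browser-friendly implementation: in Blazor WebAssembly,
// Console.WriteLine output lands in the browser's dev-tools console.
public class ConsoleAppLogger : IAppLogger
{
    public void Debug(string message) => Console.WriteLine("DEBUG: " + message);
    public void Info(string message) => Console.WriteLine("INFO: " + message);
    public void Error(string message) => Console.WriteLine("ERROR: " + message);
    public void Error(string message, Exception ex) =>
        Console.WriteLine("ERROR: " + message + Environment.NewLine + ex);
}

// In the WASM host's Program.cs:
//   builder.Services.AddSingleton<IAppLogger, ConsoleAppLogger>();
//
// In the server host, register the Serilog wrapper instead:
//   var serilog = new LoggerConfiguration().WriteTo.File("app.log").CreateLogger();
//   builder.Services.AddSingleton<IAppLogger>(new SerilogAppLogger(serilog));
//
// Shared classes depend only on IAppLogger and run unchanged in both places.
```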

Wrapper Examples

I used to use Log4Net quite a bit and have that logger in many systems I have built.  I am moving to Serilog due to its support for writing JSON objects.  I have created wrappers for each of these loggers that share the same interface.  This lets me quickly swap one for the other if I edit an older system with Log4Net and want to migrate it to Serilog.
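For comparison, a sketch of a Log4Net wrapper against the same hypothetical IAppLogger interface; migrating a system then comes down to changing which implementation the container registers:

```csharp
using System;
using log4net;

// Wrapper that translates IAppLogger calls into log4net calls.
public class Log4NetAppLogger : IAppLogger
{
    private readonly ILog _logger;

    public Log4NetAppLogger(ILog logger) => _logger = logger;

    public void Debug(string message) => _logger.Debug(message);
    public void Info(string message) => _logger.Info(message);
    public void Error(string message) => _logger.Error(message);
    public void Error(string message, Exception ex) => _logger.Error(message, ex);
}
```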

If you are interested and want to save yourself some typing, you can find these wrappers here.


Automated Testing for Legacy Software

Is It Worth It?

Implementing automated unit testing on a large legacy system is difficult.  In order to test a particular piece of the system, you need to isolate that piece so you can control all the inputs and outputs.  If your monolith is designed as a Big Ball of Mud, you may end up wasting a lot of time trying to add tests, or even abandoning tests altogether.  With targeted refactoring, monoliths can be reshaped to allow testing, but it takes time.

The key phrase above is “targeted refactoring”.  It’s easy to get sucked down the rabbit hole of refactoring everything you come across because there are so many opportunities for improvement.  If you are not careful, you can invest a lot of time adding tests that give little return.

So, the first decision to make is whether the system is worth refactoring at all.  What is the future of this system?  Is it causing problems?  Is it frequently updated?  If the system fails due to a bug, how serious a problem is that?  Refactoring for unit testability is a long process, so you must ensure you will get sufficient return on the refactoring investment.

Keep the End in Mind

Once you’ve decided there is sufficient value in refactoring, you need to keep the proper goal in mind for the process.  If you try to refactor from a Big Ball of Mud directly into a Clean Architecture, you’re going to have a bad time.  You’ll need to work in small steps, breaking some rules in the short run until you can come back later and fix them.  The goal is not to make the system perfect but to make it better than it was before you started.

Make a Plan

The first step is to plan where you are going to introduce the tests.  In any application there are core objectives the app is trying to accomplish, so that is where you should spend your time.

Pretend you are building the app from scratch and decide what your core entities would be.  This doesn’t have to be an exhaustive list; if you are familiar with the app it shouldn’t take more than 30 minutes to come up with a list.  Pick 5 entities that would provide the most value if covered by testing and target them.

If this is your first attempt at refactoring the monolith for testing, you should avoid the most complicated entities even though they seem like they would provide the most value. Pulling apart the first few strands of spaghetti will be more difficult than normal, so picking a less complicated entity as your first one can make the process more manageable and show results more quickly.

Finding Seams

Now that you have the refactoring targets that will provide the most value, you are ready to find the code to refactor; the place to cut is called a seam.  Michael Feathers describes a seam as “… a place where you can alter behavior in your program without editing in that place.”  In the legacy monolith arena, a seam is a convenient place where a module or method can naturally be broken off into a separate piece.
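In C# terms, a seam often looks like a direct call to something you cannot control from a test, such as the system clock or a database, that can be rerouted through an interface.  A minimal sketch with hypothetical names:

```csharp
using System;

// Before: the business rule is fused to a hard dependency.
public class InvoiceModule
{
    // Untestable: "now" cannot be controlled from a test.
    public bool IsOverdue(DateTime dueDate) => DateTime.Now > dueDate;
}

// After: the dependency passes through an interface, creating a seam.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class TestableInvoiceModule
{
    private readonly IClock _clock;

    public TestableInvoiceModule(IClock clock) => _clock = clock;

    // A test can now alter the behavior (supply any "now" it likes)
    // without editing this method.
    public bool IsOverdue(DateTime dueDate) => _clock.Now > dueDate;
}
```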

Adding Classes

When you have found some appropriate seams, you are ready to start creating entity classes for your target entities.  It’s best to keep these new entities in a separate project to help in tracking test coverage. You can set a rule for the team that any code in that project must have 80% coverage.

As you migrate methods to your new entities (adding unit tests along the way), you need to make these new method implementations available to your monolith code.  To do this, create a factory for your new entity.  This lets you inject the factory into the monolith code instead of having a hard reference to the entity.
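A minimal sketch of that arrangement, with hypothetical Invoice names.  The entity lives in the new, test-covered project, and the monolith sees it only through the injected factory:

```csharp
// Lives in the new, test-covered project.
public class Invoice
{
    public decimal Subtotal { get; set; }
    public decimal TaxRate { get; set; }

    public decimal Total() => Subtotal * (1 + TaxRate);
}

// The monolith code depends on this factory interface...
public interface IInvoiceFactory
{
    Invoice Create(decimal subtotal, decimal taxRate);
}

// ...while the container supplies the concrete implementation.
public class InvoiceFactory : IInvoiceFactory
{
    public Invoice Create(decimal subtotal, decimal taxRate) =>
        new Invoice { Subtotal = subtotal, TaxRate = taxRate };
}

// Inside the monolith: no "new Invoice(...)" anywhere,
// so tests can substitute a fake factory or a fake entity.
public class LegacyBillingScreen
{
    private readonly IInvoiceFactory _invoiceFactory;

    public LegacyBillingScreen(IInvoiceFactory invoiceFactory) =>
        _invoiceFactory = invoiceFactory;

    public decimal ShowTotal(decimal subtotal, decimal taxRate) =>
        _invoiceFactory.Create(subtotal, taxRate).Total();
}
```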

Just Be Careful

One thing to note about methods migrated this way: all necessary inputs are passed into the method or are available as internal properties of the class.  No access to global variables should be allowed.  If the method needs a value that lives in a global variable, pass it in anyway.
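For example (hypothetical names), the migrated method takes the value as a parameter, and only the monolith call site still touches the global:

```csharp
public class ShippingCalculator
{
    // Previously this logic read Globals.FuelSurcharge directly;
    // now a test can supply any surcharge it likes.
    public decimal Cost(decimal weight, decimal ratePerKilo, decimal fuelSurcharge)
        => weight * ratePerKilo * (1 + fuelSurcharge);
}

// The monolith call site keeps its global, but only at the boundary:
//   var cost = calculator.Cost(weight, rate, Globals.FuelSurcharge);
```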

You are now ready to create your first testable method.  A word of warning, though: fixing any defects that you find during unit test creation could have far-reaching effects.  Other parts of the monolith may rely on the method behaving in the “wrong way”, so don’t rely solely on the automated testing.  Make sure QA checks all parts of the system that are related to the new method.  Otherwise, your good intentions will end up with you being the bad guy.
