
Building awesome internal tools


A screenshot of the start screen of one of my internal tools to manage content on my websites. The design is my own retro-pixel UI that I love looking at: a dark base with bright blue, red, and green accents.

My blog, my opinions

As this is my personal blog, I sure hope every reader knows that my opinions will be part of it. Nevertheless, I want to emphasize it for this article in particular, which I put together based on my time in software development, on what I experienced, and on what worked well.

Still, it always depends on different factors and the given context: am I my own user, do I work in a small team or a large company, is it a hobby or professional work, what is the budget, and many more. With this out of the way, let's go.


It's not only good, …

Internal-facing tools can be a lot of things: large and small GUI applications, code libraries, script tools, APIs, or even workflows for teams. Quite often, in my past, those tools were rather technical.

Configuration through JSON, YAML, or XML; run through Jenkins, GitHub Actions, or a command-line script. A search that is based on regular expressions, a Domain Specific Language (DSL), a release job used by non-technical people triggered through a cURL call, or a REST API so good that it "documents itself".

… it's good enough!

I wrote this just for myself or my colleagues — I know it, and they can ask me. Those designers just need to learn Git, my manager just needs to learn cURL; they will manage. I will remember. In the end, I wrote the tool; what more could there be?

No!

This cannot be it. I am a user; my future self will not remember, and no, those designers should absolutely not be forced to learn Git*.

Internal tools often get assigned a different kind of status, be it from ourselves to ourselves or in the larger context of a company. Bugs get accepted, complexities are sold as necessities, and documentation points to a chat thread, calling it a day.

I say no! Good enough is just not good. I should treat myself, my future self, my colleagues, and those that come after me better. Caring about internal tools and those users should be as natural as, hopefully, the care given to external users or anyone else.

But why?

Better internal tools will result in fewer errors, save time, and reduce costs. Removing friction is one of the reasons to create internal tools in the first place, so why only go halfway?

Good and comprehensive documentation reduces time spent searching for answers and help. Fewer bugs lead to less frustration and less time spent working around those issues. Happy users are more productive users; whether that is me or hundreds of others, it makes no difference.

All this increases confidence in a tool and, therefore, confidence in my own abilities or those of my team — reducing maintenance costs, increasing reusability, and making it easier to get buy-in later from users, management, or the infamous future self.

Expectations

I'll try to be constructive, not only talking about high-level concepts but, where possible, providing specific examples of how to create and improve internal tools. All of it should work, from a single person creating something amazing to a team working in a company, and I'll try to explain how it applies in those cases.

But be aware that this is neither an exhaustive list nor a set of rules. It is more a guide to what worked for me in the past, from private projects to years of development in companies small and large. I don't claim to have all the answers, though I'm passionate about tool development and want us all to build great software and workflows. On top of that, many of the points I'll bring up work for general software development as well.

With that being said, let's build awesome internal tools.


1. Me, user, colleague — what's the difference?

Developing internal tools means, generally speaking, developing for colleagues, yourself, or both. Users should not be treated differently because of that, though this is often the case: when talking about those tools, sentences start with "This is just for …". Just for whom? The fundamental key to this endeavor is the person or group that will work with the tool.

The mindset is important. How we speak is important. This is not "just" anything; this is for my amazing users, for my capable colleagues; this is for the magnificent me, now and in the future. Always remember.

2. Ask and listen, never assume

Too often, I see tools developed purely based on assumptions. Assuming how something should work, how people work together, and what the current process looks like. Sometimes, going ahead with a great idea and releasing it without having the actual user take a look at it beforehand. Assumption-Driven Development.

This is a problem that can easily be resolved. Even with the greatest of ideas, ask and listen first. What is really wanted? Where is the current problem or friction? What needs to be automated? Planning goes a long way, and planning also means getting all the requirements right first by talking to the target audience.

What if this is just me? There it is again, just. Even for myself, I try to nail down what I really want. Do I need to automate those two clicks here? Do I always want to write regular expressions for my most powerful search yet? When I want to extend this software later, is this exhaustive plugin system really the right solution? I'm a technical person, but is a technical solution what I want?

It may seem obvious reading this, but reality shows it is not. Never assume, get all requirements, and involve users early; having reachable users is just one of the benefits of internal tool development.

Besides, involving users means letting them collaborate early, shaping the future of what they will (probably) use daily, and increasing the emotional value attached to it, which often makes them more forgiving of bugs or small issues.

3. Watch them work

Asking and listening is nice, but just as a picture says more than a thousand words, watching someone work with a tool, workflow, or library can give more insight than any question could. Quite often, what others see as "not worth mentioning" when talking about a workflow can offer great potential for optimization. It is important to capture such a session in some capacity, e.g., as written notes, pictures, or a video recording.

Too often, a colleague would skip over the fact that they use two different tools for almost the same task, which for some reason resulted in "This is how we always did this". There is no blame on the user here, as long-time usage often leads to blind spots.

Now, how do I watch myself work when creating tools for my own needs? Observe via screen recording! Start a recording and let it run for a while, doing the usual tasks, testing a new tool, and working on integrating it into an existing workflow. Then review the recording. This will help identify barriers and reveal blind spots, giving a different perspective.

4. The time in-between

From my experience, automation and optimization of existing workflows and tools are not always what bring the biggest benefits. Looking at a day of work, where is all the time spent? Is it actually inside a tool, worth the effort to reduce four clicks to one, or is it context switching between different tools?

Often it is the latter: context switching, or glue-work*. If this is the case, this is where effort should be spent.

Removing time in between can mean changing the tool landscape or composition. Where an old process uses three different tools to solve a given task, it can become clear that two of those tools are always used together. This offers the potential for unification, changing the composition, and removing lost time in context switching.

If tools cannot be unified, it is worth looking into other ways to reduce time. There could be a specific feature needed in one tool that can be transferred to another. Even if this means duplication, it can be worth it.

What if, inside my tool landscape, one tool is owned by another team or person? Talk to them, listen, show them the problem, and, in the best case, give them ideas on how to solve it. Or try working with them on solving the issue. If this is for any reason not feasible, or they cannot or will not help, try a different way. Feature duplication, contributing to an external codebase — get creative.

And sometimes, accept what is given. Create documentation about the issue and what was tried and discussed, and move on, spending your valuable time elsewhere.

5. Looks do matter

There once was a documentation tool from a well-known company. It offered all one can dream of: editing without technical knowledge, a visual editor, rich-media support, integrations into other tools, even sharing documents with external users, and many more features. Yet no one wanted to use it. People even reacted with disgust, calling it a graveyard and worse. "Why?", one might ask. It was not sexy.

Even if a bit exaggerated, more often than not, looks can be the reason a totally capable tool is not accepted by users. What counts as good-looking is, of course, very subjective. The peak of beauty for one can be jarring for others.

When working alone and creating tools for myself, it can be whatever I like most. Personally, I love a retro-futuristic style, cassette futurism, so I developed a theme and corresponding libraries I can use for all my tools. It is a retro pixel style, dark but with brightly colored accents, and otherwise kept very simple. The image in this article shows the exact theme in question. I love looking at it and using my tools.

In a company, with many different users, it is often best to get someone with a design background who knows how to shape the user experience (UI/UX). Much can be learned and taught, but a professional is a professional, and best utilized from the beginning if available.

If that's not an option, there is no shame in copying the style of others. One of the powers of developing an internal tool is freedom, including the freedom to copy. There are many great examples out there that can be adapted to your use case. Alternatively, a theme library can be another option.

For the commonly used operating systems, Apple offers a list of great resources on human interface guidelines; Microsoft provides resources on how to design and code apps for Windows; and there is even a GTK+ list of human interface design guidelines. For Dear ImGui, there is a great GitHub issue with lots of comments about designs and themes.

The W3C's community group Open UI offers some great resources for web-based design systems. The Design System Repo is another great choice with examples, articles, and tools.

Looks are not restricted to GUI applications. A terminal UI is UI, too; a CLI can look nice; help text printed via --help can be formatted to be more readable. Even error messages should "look good", that is, be easily readable by a human.
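To make the --help point a bit more concrete, here is a small sketch of what that could look like in Python with argparse; the tool name, options, and example commands are made up purely for illustration.

```python
import argparse

# Hypothetical CLI for illustration: a small content publishing helper.
parser = argparse.ArgumentParser(
    prog="publish",
    description="Publish a draft article to the internal website.",
    epilog=(
        "examples:\n"
        "  publish my-article.md             publish with default settings\n"
        "  publish my-article.md --dry-run   show what would happen, change nothing\n"
    ),
    # Keep the manually formatted epilog (line breaks) intact in --help output.
    formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument("draft", help="path to the draft Markdown file")
parser.add_argument("--dry-run", action="store_true",
                    help="validate and show the result without publishing")

args = parser.parse_args()
```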

Instead of directly printing an error, the thought should always be, "What error message actually helps?". Only show what is directly helpful for resolving the problem. There can also be an error code or an additional tool to get further guidance on a problem. And if the user cannot do anything, how about not showing an error at all? Try to tell the user what will happen next, run a recovery, or, in the worst case, restart, but be transparent.
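And for the error messages themselves, a minimal sketch of the idea, again with a hypothetical configuration file and command: catch the failure and tell the user what happened, why it matters, and what to do next, instead of letting a raw traceback through.

```python
import sys
import tomllib  # Python 3.11+


def load_config(path: str) -> dict:
    with open(path, "rb") as file:
        return tomllib.load(file)


try:
    config = load_config("publish.toml")
except FileNotFoundError:
    # Say what happened, why it matters, and what the user can do next,
    # instead of dumping a raw traceback on them. The suggested command
    # is a made-up example.
    print(
        "Could not find the configuration file 'publish.toml'.\n"
        "It tells the tool where articles should be published.\n"
        "Create a starting point by running: publish --init",
        file=sys.stderr,
    )
    sys.exit(1)
```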

6. Documentation Driven Development

Test-driven development is nice, but have you ever tried documentation-driven development? It is the simple concept of writing the documentation first, before anything else: specifying how a library is used, how a program should work, how an API looks, and how a CLI behaves — for the user.

This way of starting a project made the biggest change for my work, for the better. Usually I started with writing code; sometimes tests came first, sometimes not. Documentation always came later, always; sometimes never.

From the day I started working documentation-driven, my tools got better, APIs more usable, extendable, and readable. This may sound like an exaggeration, but it is not. This really changed how I approach any tool since then.

This works on any scale, too. When creating a new repository, a README should be the first file written, and written in full: how to use the code, or library, in detail, for everything that is planned. The approach can also be applied to a single new feature in a bigger codebase or to the next iteration of an application.

It makes me really think about how I want something to be used in the best possible way, on a detailed level. Being the best version of myself, I implement tests next, including all code examples from the documentation, if present. Not because someone said this is how I should do it, but because it solves my next set of problems, the technicalities.
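One way to make that connection between documentation and tests tangible is Python's doctest, where the usage example written first as documentation doubles as the test, so the two cannot drift apart. This is only a sketch, not necessarily how every project should do it, and the slugify function is purely hypothetical.

```python
def slugify(title: str) -> str:
    """Turn an article title into a URL slug.

    The usage example below was written before the implementation, as
    documentation of how the function should behave; running the
    doctests turns that documentation directly into a test.

    >>> slugify("Building awesome internal tools")
    'building-awesome-internal-tools'
    """
    return "-".join(title.lower().split())


if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)
```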

At that point, I will hopefully have documentation, tests, and a big smile on my face. No matter how my implementation now looks, it is tested and can be used with an API I'm actually happy with.

7. Prototype, iterate, implement

Even for internal tooling, tackling a large new application or feature by starting off with a prototype is crucial to its successful adoption. A prototype in that context can be many things: a dummy created with a design tool, screens of what to expect, or documentation explaining what will be available, going back to the documentation-driven approach.

It is not important what the prototype is, but that it exists and can be used to iterate with users. If future users are already happy at that stage, they will look forward to actually working with the new tool they get, knowing what to expect.

What if they're not happy? Iterate! Until they are happy. This can sometimes feel tedious, is often hard, and can take quite some time. But this initial time spent is well worth it, avoiding late conceptual changes and bigger refactors.

Some great tools to create visual prototypes are Penpot, which is free, open-source, and can even be self-hosted; Figma, which offers a free tier and subscription-based pricing; and Sketch, which is paid via either a monthly subscription or a one-time payment and also offers free tiers for education.

Interlude

At this point, looking at all the previous guidance, the problem space is fully understood, the documentation is already written, a prototype is approved by happy users who look forward to getting a new shiny toy, and tests may be implemented as well, giving the tool implementer the highest confidence to actually go into development.

This may sound like an unreachable scenario, and as context is everything, it sometimes is, but it is a scenario to strive for. In the end, as guidelines go, it is all about picking what works best for a given situation.

8. Imagine going public

As a small mental exercise, look at an existing internal tool and think about what would need to be done to make it public. Is it a lot? Usual candidates are missing documentation, a user experience that needs improvement, or no way to monitor whether everything actually works.

This is not about actually going public, as a lot of those tools cover internal company processes or are integrated into systems that cannot be disclosed for one reason or another. This is purely about the what-if, from a user perspective, and the question that emerges from it: why?

Can my colleagues not get the best experience? Can I? Why is that? Thinking about what is necessary to go public can be a valuable exercise to harden and improve a tool. One of those, often neglected, is monitoring.

9. Monitor

Why should I monitor a tool used by a few? They will just write me when something does not work, won't they? Often not. For every user who wrote a bug ticket or a message in a chat, even more did not. Sometimes they find creative solutions, working around a problem or limitation of a tool, but in the end, they spend more time overall.

Monitoring will help catch this early. Having a central place where your logs go, observing application performance, receiving alerts on errors, and recording user sessions when problems keep them stuck. There are many aspects to monitoring, and it all depends on the needs and goals set.
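What this looks like depends entirely on the setup; as a minimal sketch, even Python's standard logging module gets you a central place for logs and enough context attached to errors. The file name, logger name, and the publish function are assumptions for illustration.

```python
import logging

# Send all tool logs to one central place; a file here, but the same setup
# works with a handler that forwards to a log aggregation service.
logging.basicConfig(
    filename="publish.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("publish")


def publish(draft: str) -> None:
    log.info("publishing draft %s", draft)
    try:
        ...  # the actual publishing work would go here
    except Exception:
        # Log the full traceback and context so problems surface
        # even when no one writes a bug ticket.
        log.exception("publishing failed for draft %s", draft)
        raise
```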

It is valuable to gather information to help the users, solve problems quicker, and catch errors early. The more robust and hardened the software, the smaller the maintenance cost, saving time for everyone.

In the past, I had great experiences with Datadog and Sentry, both solid tools that support different monitoring needs. To be fair, that was always in the context of multiple users, not when I worked on my own.

10. How to replace existing tools?

This is the tricky one and, quite often, the hardest requirement. Creating a new tool to replace one or more already in existence. There can be many reasons why a full replacement is needed, and before even considering it, it should always be evaluated if one or more tools can be refactored or improved instead. But when the decision is made, a big task lies ahead.

An often very optimistic view is that a subset of features in a new tool will be enough to start replacing the existing one. This is not true; a new tool will need feature parity to get user acceptance. If this is not the case, users will, when given the option, continue using the old tool and workflow.

It is possible to bring users over to a replacement that is not yet fully feature-complete, if it provides a clear benefit that makes the smaller set of functionality worth it. Nevertheless, all previous tasks need to remain solvable in some capacity — with the old tool, a temporary alternative, or a new minimal version of what is already in place.

What does help while developing the replacement in parallel is a feature freeze on the old tool. It is also well worth reusing what is possible, enabling a faster path to users. Reusing does not only mean code or design; it can also be a data format. Being able to run new and old in parallel is a huge benefit in this transition period.

Regardless of the approach, the main goal needs to be that the new tool provides a clear benefit, not just a recreation of what already existed (see point 3), and reaches a 100% adoption rate. Not planning to retire the old tool is a mistake that will increase maintenance costs.

11. A new release

Rolling out a new release should be done with confidence and transparency. Confidence in having a setup that allows easy rollouts of new features to all users and being transparent about what will be released by announcing the changes. A good changelog will make a difference.

Changelog generation is a very common practice that involves taking commit messages and packing them into a list of changes. It is often done with tools like Commitizen, where commit message rules define changelog categories and content — I would not use this for users.

This kind of generated changelog is very technical and may be useful for developers working on a tool, but not for the end user. A good user changelog is human-readable, with focus on explaining or even showing new features and what user-reported issues were fixed.

It doesn't even need to be complicated or have a lot of text — images or recordings go even further. Anything that helps your user understand what is new and how it works, maybe even making them excited to try out the new release.

12. You're never really done

Ever heard "When we finish this tool, we build the next"? You're never done. Every new tool comes with new responsibilities. Fixing bugs, supporting new operating systems, and adding features. Saying a tool is done equals abandonment.

This is not a problem but an opportunity — instead of creating a new tool, an existing one can be extended. Alternatively, maybe there is something "off the shelf". Before creating a new one, all options need to be considered.

Now, when creating a new tool and keeping future responsibility in mind, extensibility should be considered: giving users ways to extend the tool, write plugins, and create visual extensions, and supporting admin roles that can configure aspects for others. Generally, making it possible for users to solve new problems with what is given.
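As a rough sketch of what such an extension point can look like, assuming a Python tool and with made-up hook and function names, a simple plugin registry is often enough: users register functions for named hooks, and the tool calls them at defined points.

```python
from typing import Callable

# Minimal plugin registry: users register a function for a named hook,
# and the tool calls every registered plugin at that point.
_plugins: dict[str, list[Callable[..., None]]] = {}


def register(hook: str) -> Callable[[Callable[..., None]], Callable[..., None]]:
    def decorator(func: Callable[..., None]) -> Callable[..., None]:
        _plugins.setdefault(hook, []).append(func)
        return func
    return decorator


def run_hook(hook: str, **context) -> None:
    for plugin in _plugins.get(hook, []):
        plugin(**context)


# A user-written plugin, e.g. posting to a chat channel after publishing.
@register("after_publish")
def notify_chat(article: str, **_) -> None:
    print(f"New article published: {article}")


# Inside the tool, at the defined extension point:
run_hook("after_publish", article="building-awesome-internal-tools")
```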

13. Above and Beyond

Going the extra mile is worth it. As said in the beginning, "good enough is just not good". Of course, this is not always the case, but we should all challenge ourselves to really go above and beyond.

Thinking from the users' perspective: fixing bugs, for example with recurring bug-squashing days; getting regular feedback, for example by sending out short user surveys*; removing friction by looking at the time spent between tools; collaborating with others, not stopping at team borders.

In the end, it shouldn't even be going the extra mile; it should never be good enough. It should be going all the way, treating the target audience with respect and care and creating amazing internal tools.


Epilogue

It turned out I had more to say about tool development than I initially expected. Even though I numbered every part, there is no particular order; rather, I tried to tell a coherent story. Like I initially said, context will decide what is important; sometimes I apply a few of these, sometimes all of them.

As guidelines go, I pick, mix, and match as I please. Though if there is one thing to remember between tool creator and user, as the great Bill and Ted said, "Be excellent to each other".

Until then 👋🏻
