Martin Helmut Fieber

Building awesome internal tools

Posted on

A screenshot of the start screen of one of my internal tools to manage content on my websites. The design is my own retro-pixel UI that I love looking at: a dark base with bright blue, red, and green accents.

My blog, my opinions

As this is my personal blog, I sure hope every reader knows that my opinions will be part of it. Nevertheless, I want to emphasise this, especially for this article, which I put together based on my time in software development, on what I experienced and what worked well.

Still, it always depends on different factors and the given context: am I my own user, do I work for a small team or a large company, is it a hobby or professional work, what is the budget, and many more. With that out of the way, let's go.

It's not only good, …

Internal-facing tools can be a lot of things: large and small GUI applications, code libraries, script tools, APIs, or even workflows for teams. Quite often, in my past, those tools were rather technical.

Configuration through JSON, YAML, or XML; run through Jenkins, GitHub Actions, or a command line script. A search that is based on regular expressions, a Domain Specific Language (DSL), a release job used by non-technical people triggered through a cURL call, or a REST API so good, it "documents itself".

… it's good enough!

I wrote this just for myself, or my colleagues — I know, and they can ask me. Those designers just need to learn Git, my manager how to use cURL, they will manage. I will remember. In the end, I wrote the tool, what more?


This cannot be it. I am a user, my future self will not remember, and no, those designers should absolutely not be forced to learn Git*.

Internal tools often get assigned a different status, be it by ourselves for ourselves, or in the larger context of a company. Bugs get accepted, complexities are sold as necessities, and documentation points to a chat thread, calling it a day.

I say no! Good enough is just not good. I should treat myself better: my future self, my colleagues, and those who come after. Caring about internal tools and their users should be as natural as, hopefully, the care given to external users or anyone else.

But why?

Better internal tools result in fewer errors, save time, and reduce cost. Removing friction is a core reason to create internal tools in the first place, so why go only halfway?

Good, comprehensive documentation reduces the time spent searching for answers and help. Fewer bugs lead to less frustration, and less time spent working around issues. Happy users are more productive users; whether that is me or hundreds of others makes no difference.

All this increases confidence in a tool, and therefore confidence in my own abilities, or those of my team. It reduces maintenance costs, increases reusability, and makes buy-in easier later on, whether from users, management, or the infamous future self.


I'll try to be constructive, not only talking about high-level concepts, but where possible providing specific examples of how to create and improve internal tools. All of it should work whether you are a single person creating something amazing or a team working in a company, and I'll try to explain how this applies in each case.

But be aware, this is not an exhaustive list, nor a set of rules. It is more of a guide to what worked for me in the past, from private projects to years of development in companies small and large. I don't claim to have all the answers, though I'm passionate about tool development and want us all to build great software and workflows. On top of that, many of the points I'll bring up apply to general software development as well.

With that being said, let's build awesome internal tools.

1. Me, user, colleague — what's the difference?

Developing internal tools means, generally speaking, developing for colleagues, yourself, or both. Users should not be treated differently because of that, though they often are. We talk about those tools starting sentences with "This is just for …". Just for whom? For the fundamental key in this endeavour: the person or group that will work with the tool.

The mindset is important. How we speak is important. This is not "just" anything; this is for my amazing users, for my capable colleagues, for the magnificent me, now and in the future; always remember.

2. Ask and listen, never assume

Too often I see tools developed purely based on assumptions. Assuming how something should work, how people work together, what the current process looks like. Sometimes going ahead with a great idea and releasing it without the actual user having had a look at it even once. Assumption Driven Development.

This is a problem that can easily be resolved. Even with the greatest of ideas, ask and listen first. What is really wanted? Where is the current problem or friction? What needs to be automated? Planning goes a long way, and planning also means getting all the requirements right first, by talking to the target audience.

What if this is just me? There it is again, just. Even for myself I try to nail down what I really want. Do I need to automate those two clicks here? Do I really want to write regular expressions for my most powerful search? When I want to extend this software later, is this exhaustive plugin system really the right solution? I'm a technical person, but is a technical solution what I want?

It may seem obvious reading this, but reality shows it is not. Never assume; get all the requirements and involve users early. Easily reachable users are just one of the benefits of internal tool development.

Besides, involving users early lets them collaborate and shape the future of what they will (probably) use daily, increasing the emotional value attached to it, which often makes them more forgiving of bugs or small issues.

3. Watch them work

Asking and listening is nice, but as a picture says more than a thousand words, watching someone work with a tool, workflow, or library can give more insight than any question could. Quite often what others see as "not worth mentioning" when talking about a workflow can offer great potential for optimisation. It is important to capture this session in some capacity, e.g. written notes, pictures, or a video recording.

Too often I have seen a colleague skip over the fact that they use two different tools for almost the same task, for no better reason than "this is how we always did it". There is no blame on the user here, as long-time usage often leads to blind spots.

Now, how do I watch myself work when creating tools for my own needs? Observe via screen recording! Start a recording and let it run for a while while doing the usual tasks: testing a new tool, integrating it into an existing workflow. Then review the recording. This will help identify barriers and reveal blind spots, giving a different perspective.

4. The time in-between

From experience, automation and optimisation of existing workflows and tools is not always what brings the biggest benefit. Looking at a day of work, where is all the time spent? Is it actually inside a tool, making it worth the effort to reduce four clicks to one, or is it in context switching between different tools?

Often it is the latter: context switching, or glue work*. If this is the case, this is where effort should be spent.

Removing the time in-between can mean changing the tool landscape or composition. Where an old process uses three different tools to solve a given task, it can become clear that two of those tools are always used together. This offers potential for unification, changing the composition to remove time lost in context switching.

If tools cannot be unified, it is worth looking into other ways to reduce time. There could be a specific feature needed from one tool that can be transferred to another. Even if this means duplication, it can be worth it.

What if one tool inside my tool landscape is owned by another team or person? Talk to them, listen, show them the problem and, in the best case, ideas for how to solve it. Or try working with them on solving the issue. If this is for any reason not feasible, or they cannot or will not help, try a different way: feature duplication, contributing to an external codebase, getting creative.

And sometimes, accept what is given. Create documentation about the issue and what was tried and discussed, and move on, spending your valuable time elsewhere.

5. Looks do matter

There once was a documentation tool from a well-known company. It offered everything one could dream of: editing without technical knowledge, a visual editor, rich-media support, integrations with other tools, even sharing documents with external users, and many more features. Yet no one wanted to use it. People even reacted with disgust, calling it a graveyard and worse. Why, one might ask? It was not sexy.

Even if a bit exaggerated, more often than not, looks can be the reason a totally capable tool is not accepted by users. What counts as good-looking is of course very subjective. The peak of beauty for one can be jarring for another.

When working alone, creating tools for myself, it can be whatever I like most. Personally I love a retro futuristic style, cassette futurism, so I developed a theme and corresponding libraries I can use for all my tools. It is a retro pixel style, dark but with brightly coloured accents, and otherwise kept very simple. The image of this article shows the exact theme in question. I love looking at it, love using my tools.

In a company, with many different users, it can be best to get someone with a design background (UI/UX) who knows how to shape the user experience. Much can be learned and taught, but a professional is a professional, and best utilised from the beginning if available.

If that's not an option, there is no shame in copying the style of others. One of the powers of developing an internal tool is freedom, including the freedom to copy. There are many great examples out there that can be adapted to your use case. Alternatively, a theme library can be another option.

For the commonly used operating systems, Apple offers a list of great resources in its Human Interface Guidelines, Microsoft provides resources on how to design and code apps for Windows, and there is even a GTK+ list of human interface design guidelines. For Dear ImGui, there is a great GitHub issue with lots of comments about designs and themes.

The W3C's community group Open UI offers some great resources for web-based design systems. The Design System Repo is another great choice with examples, articles, and tools.

Looks are not restricted to GUI applications. A terminal UI is a UI, too; a CLI can look nice; help text printed via --help can be formatted to be more readable. Going even further, error messages should "look good", i.e. be very readable by a human.
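To make the --help point concrete, here is a minimal sketch using Python's argparse; the tool name "release-tool" and its flags are invented for illustration:

```python
import argparse

# A hypothetical internal release tool; name and options are made up.
parser = argparse.ArgumentParser(
    prog="release-tool",
    description="Create and publish a release of an internal service.",
    # RawDescriptionHelpFormatter keeps the example's line breaks intact.
    formatter_class=argparse.RawDescriptionHelpFormatter,
    epilog="Example:\n  release-tool publish --env staging",
)
parser.add_argument("command", choices=["build", "publish"], help="what to do")
parser.add_argument(
    "--env", default="staging", metavar="NAME",
    help="target environment (default: %(default)s)",
)

args = parser.parse_args(["publish", "--env", "staging"])
```

Small touches like a worked example in the epilog and showing defaults directly in the help text already make --help much friendlier.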

Instead of directly printing an error, the thought should always be: what error message actually helps? Show only what directly helps to resolve the problem. There can also be an error code, or an additional tool to get further guidance on a problem. And if the user cannot do anything, how about not showing an error at all? Tell the user what will happen next, run a recovery, in the worst case restart, but be transparent.
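As one possible sketch of that idea, error formatting can be centralised so every message states the problem, the next step, and an optional code for further guidance. The function, the 'tool explain' command, and the "E042" code here are all hypothetical:

```python
def friendly_error(problem, hint, code=None):
    """Format an error message that says what went wrong and what to do next."""
    lines = [f"Error: {problem}", f"Next step: {hint}"]
    if code:
        # A stable code the user can look up via a hypothetical companion command.
        lines.append(f"More details: run 'tool explain {code}'")
    return "\n".join(lines)

message = friendly_error(
    problem="could not find the config file 'deploy.yml'",
    hint="run 'tool init' to create one, or pass --config <path>",
    code="E042",
)
```

Routing all errors through one helper like this also makes it hard to ship a message that names the problem but forgets the next step.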

6. Documentation Driven Development

Test driven development is nice, but have you ever tried documentation driven development? It is the simple concept of writing the documentation first, before anything else. Specifying, for the user, how a library is used, how a program should work, how an API looks, how a CLI behaves.

This way of starting a project made the biggest change to my work, for the (much) better. Usually I started right away with writing code, sometimes tests first, sometimes not. Documentation always came later; sometimes never.

From the day I started working documentation-driven, my tools got better, my APIs more usable, extendable, and readable. This may sound like an exaggeration, but it is not. It really changed how I approach every tool since.

This works at any scale, too. When creating a new repository, a readme should be the first file, filled in and final: how to use the code or library, in detail, for everything that is planned. It can also be applied to a single new feature in a bigger codebase, or to the next iteration of an application.

It makes me really think, at a detailed level, about how I want something to be used in the best possible way. Being the best version of myself, I implement tests next, including all code examples from the documentation, if present. Not because someone said this is how I should do it, but because it solves my next set of problems: the technicalities.

At that point, I have documentation, tests, and a big smile on my face, hopefully. No matter how my implementation now looks, it is tested and can be used, with an API I'm actually happy with.
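In Python, this loop closes nicely with the standard doctest module: the usage examples written into the documentation first can later run as tests verbatim. A small sketch, with a made-up slugify helper as the documented API:

```python
import doctest
import re

# Documentation-first in miniature: the docstring was "written" before the
# body, and its examples double as tests.
def slugify(title):
    """Turn a post title into a URL slug.

    >>> slugify("Building awesome internal tools")
    'building-awesome-internal-tools'
    >>> slugify("  Hello,   World!  ")
    'hello-world'
    """
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Run every example in this module's docstrings as a test.
results = doctest.testmod()
```

When the documented examples drift away from how the code actually behaves, the test run fails, which keeps documentation and implementation honest.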

7. Prototype, iterate, implement

Even for internal tooling, tackling a large new application or feature by starting off with a prototype is crucial to successful adoption. A prototype in this context can be many things: a dummy created with a design tool, screens of what to expect, or documentation explaining what will be available, going back to the documentation driven approach.

What is important is not what the prototype is, but that it exists and can be used to iterate on, together with users. If future users are already happy at that stage, they will look forward to actually working with the new tool, knowing well what to expect.

What if they're not happy? Iterate! Until they are. This can sometimes feel tedious, is often hard, and can take quite some time. But this initial time spent is well worth it, avoiding late conceptual changes and bigger refactors.

Some great tools to create visual prototypes are Penpot, which is free, open source, and can even be self-hosted; Figma, offering a free tier and subscription-based pricing; and Sketch, paid via either a monthly subscription or a one-time payment, with free tiers for education.


At this point, looking at all the previous guidance: the problem space is fully understood, the documentation already written, a prototype approved by happy users who look forward to getting their shiny new toy, and maybe tests implemented as well, giving the tool implementer the highest confidence to actually go into development.

This may sound like an unreachable scenario, and as context is everything, it sometimes is, but it is a scenario to strive for. In the end, as guidelines go, it is all about picking what works best for a given situation.

8. Imagine going public

A small mental exercise: look at an existing internal tool and think about what would need to be done to go public. Is it a lot? The usual candidates are missing documentation, a user experience that needs improvement, or no way to monitor whether everything actually works.

This is not about actually going public, as a lot of internal tools cover company-internal processes or are integrated into systems that cannot be disclosed. This is purely about the what-if, from a user perspective, and the question that emerges from it: why?

Can my colleagues not get the best experience? Can I? Why is that? Thinking about what would be necessary to go public can be a valuable exercise to harden and improve a tool. One of those often neglected aspects is monitoring.

9. Monitor

Why should I monitor a tool used by only a few? They will just write to me when something does not work, won't they? Often they do not. For every user who wrote a bug ticket or a chat message, even more did not. Sometimes they find creative solutions, working around a problem or limitation of a tool, in the end spending more time overall.

Monitoring will help catch this early. Have a central place where your logs go, observe application performance, receive alerts on errors, record user sessions when problems keep users stuck. There are many aspects to monitoring, and it all depends on the needs and goals set.
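Even a small first step helps here. As one possible sketch in Python, structured one-line JSON logs from the standard logging module are easy to ship later to whatever central place is available; the tool name "asset-importer" is invented:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, easy to ship to a central log store."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "tool": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("asset-importer")  # hypothetical internal tool
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("import finished")

# Render one record directly to inspect the line that would be shipped.
line = JsonFormatter().format(logging.LogRecord(
    "asset-importer", logging.INFO, "app.py", 1, "import finished", None, None))
```

From there, a log shipper or monitoring service can pick the lines up without any parsing guesswork.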

It is valuable to gather information, to help the users, to solve problems quicker, and catch errors early. The more robust and hardened the software, the smaller the maintenance cost, saving time for everyone.

In the past I have had great experiences with Datadog and Sentry, both excellent tools supporting different monitoring needs. To be fair, always in the context of multiple users, not when I worked on my own.

10. How to replace existing tools?

This is the tricky one, and quite often the hardest requirement: creating a new tool to replace one or more existing ones. There can be many reasons why a full replacement is needed, and before even considering one it should always be evaluated whether the existing tools can be refactored or improved instead; but once the decision is made, a big task looms ahead.

An often very optimistic view is that a subset of features in a new tool will be enough to start replacing the existing one. This is not true; a new tool will need feature parity to gain user acceptance. If it lacks it, users will, when given the option, continue using the old tool and workflow.

It is possible to bring users over to a not yet fully feature-complete replacement if it provides a clear benefit that makes the subset of functionality worth it. Nevertheless, all previous tasks need to remain solvable in some capacity, whether with the old tool, a temporary alternative, or a minimal new version of what is already in place.

What does help while developing the replacement in parallel is a feature freeze on the old tool. It is also well worth reusing what is possible, enabling a faster path to users. Reusing does not only mean code or design; it can also mean a data format. Being able to run new and old in parallel is a huge benefit in this transition period.

Regardless of approach, the main goal needs to be that the new tool provides a clear benefit, not just a recreation of what already existed (see point 3), and reaches a 100% adoption rate. Not planning to sunset the old tool is a mistake that will increase maintenance cost.

11. A new release

Rolling out a new release should be done with confidence and transparency: confidence in having a setup that allows easy rollouts of new features to all users, and transparency in what will be released by announcing the changes. A good changelog makes a difference.

Changelog generation is a very common practice: commit messages are collected and packed into a list of changes, often with tools like Commitizen, where commit message rules define changelog categories and content. I would not use this for a changelog targeting users.

This kind of generated changelog is very technical; it may be useful for developers working on a tool, but not for the end user. A good user changelog is human-readable, with a focus on explaining or even showing new features, and on which user-reported issues were fixed.

It doesn't even need to be complicated, or a lot of text; images or recordings go even further. Anything that helps your users understand what is new and how it works, maybe even making them excited to try out the new release.
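To make this concrete, a user-facing changelog entry can be as simple as the following sketch; the version number, features, and names are all invented for illustration:

```markdown
## 2.4.0

### New
- Reports can now be exported as PDF: File → Export → PDF.

### Fixed
- Search no longer freezes on very long queries (thanks for reporting, Ana!).
- Dates in the sidebar now respect your system locale.
```

Short, grouped by what users care about, and written in their language rather than in commit-message jargon.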

12. You're never really done

Ever heard "When we finish this tool, we build the next"? You're never done. Every new tool is a new responsibility: fixing bugs, supporting new operating systems, adding features. Saying one is done equals abandonment.

This is not a problem but an opportunity: instead of creating a new tool, an existing one can be extended. Alternatively, maybe there is something "off the shelf". Before creating something new, all options need to be considered.

Now, when creating a new tool with this future responsibility in mind, extensibility should be considered: give users methods to extend the tool, write plugins, create visual extensions, support admin roles that can configure aspects for others. Generally, make it possible for users to solve new problems with what is given.
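What that can look like in practice: a minimal plugin registry, sketched in Python, where users register small functions to teach a hypothetical exporter new formats without touching core code:

```python
# Registry mapping a format name to a user-provided export function.
PLUGINS = {}

def plugin(name):
    """Decorator that registers a user-written extension under a name."""
    def register(func):
        PLUGINS[name] = func
        return func
    return register

# A user extends the (made-up) exporter from outside the core codebase:
@plugin("csv")
def export_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def export(fmt, rows):
    """Dispatch to whichever plugin claims the requested format."""
    if fmt not in PLUGINS:
        raise ValueError(f"no exporter for '{fmt}', available: {sorted(PLUGINS)}")
    return PLUGINS[fmt](rows)

output = export("csv", [[1, 2], [3, 4]])
```

New formats then become something users can solve for themselves instead of a feature request waiting in a backlog.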

13. Above and Beyond

Going the extra mile is worth it. As said in the beginning: good enough is just not good. Of course this is not always possible, but we should all challenge ourselves to really go above and beyond.

Thinking from the user's perspective; fixing bugs, for example with recurring bug-squashing days; getting regular feedback, for example by sending out short user surveys*; removing friction by looking at the time spent between tools; collaborating with others, not stopping at team borders.

In the end, it shouldn't even be going the extra mile, it should never be good enough. It should be going all the way, treating the target audience with respect and care, creating amazing internal tools.


It turned out I had more to say about tool development than I initially expected. Even though I numbered every part, there is no inherent order; I rather tried to tell a coherent story. As I said initially, context decides what is important; sometimes I do a few of these things, sometimes everything.

As guidelines go, I pick, mix, and match as I please. Though if there is one thing to remember between tool creator and user, it is, as the great Bill and Ted said: "Be excellent to each other".

Until then 👋🏻
