How to Build CLI Applications That Don't Suck

I’ve been on a bit of a CLI kick lately. After writing about why I prefer CLIs over MCPs, I realized I should probably share what I’ve learned about building them. Because let’s be honest—most command-line tools are awful to use.

You know the ones. Cryptic error messages. No help text. Bizarre flag conventions. The kind of tool that makes you question whether the developer has ever actually used a computer.

I’ve been guilty of this myself. My early CLI tools were embarrassingly bad. But after building a few that people actually enjoy using—like queuestack and docset2md—I’ve figured out some patterns that make a real difference.

Start With a Real Problem

This might sound obvious, but hear me out. The best CLI tools I’ve built started with a genuine itch I needed to scratch. Not “this would be a cool project” but “I’m doing this repetitive thing every day and it’s driving me insane.”

When you’re solving your own problem, you make different decisions. You care about the little details because you’re the one who’ll suffer if you get them wrong. You don’t add features you’ll never use. You optimize for the actual workflow, not some imagined ideal.

If you’re building a CLI just to build a CLI, you’re probably going to build something mediocre. Find the pain first.

Use a Proper Framework

I see developers parsing argv by hand like it’s 1995. Don’t do this. Modern CLI frameworks handle the boring stuff so you can focus on what makes your tool useful.

Go has Cobra and urfave/cli. Rust has clap. Python has Click and Typer. Swift has ArgumentParser. These frameworks give you subcommands, flag parsing, help generation, and shell completion basically for free.

Here’s the thing—users have expectations. They expect --help to work. They expect -v to mean verbose or version. They expect errors to tell them what went wrong. A good framework delivers all of this out of the box.

I wasted weeks on custom argument parsing in my first serious CLI. Never again.
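To make the point concrete, here's a minimal sketch in Python using the stdlib's argparse (Click and Typer build on the same ideas); the tool name and flags are made up for illustration:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # argparse handles flag parsing, --help generation, and usage
    # errors; frameworks like Click or Typer add subcommands,
    # prompting, and shell completion on top of the same idea.
    parser = argparse.ArgumentParser(
        prog="mytool",
        description="Fetch a URL and print its status.",
    )
    parser.add_argument("url", help="URL to fetch")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="show request details")
    return parser

args = build_parser().parse_args(["https://example.com", "-v"])
```

That's a working --help, short and long flags, and proper usage errors in a dozen lines, none of it hand-rolled.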

Make Errors Actually Helpful

Nothing kills the user experience faster than a cryptic error message. “Error: invalid input” tells me nothing. “Error: expected a valid URL but got ‘foobar’” tells me exactly what to fix.

When something goes wrong, explain:

  • What happened
  • Why it happened (if you know)
  • What the user can do about it

Bonus points for suggesting the correct command. If someone types git staus, Git helpfully asks “Did you mean ‘status’?” That’s the kind of thoughtfulness that turns a frustrating experience into a delightful one.

I’ve started treating error messages as part of the UI. They’re not just debugging output—they’re user-facing text that deserves the same care as everything else.
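The "did you mean" trick is only a few lines with a fuzzy match. A sketch in Python using the stdlib's difflib (the command list here is hypothetical):

```python
import difflib

KNOWN_COMMANDS = ["status", "stash", "start", "push"]  # hypothetical

def unknown_command_error(cmd: str) -> str:
    # Say what happened, then suggest a fix, the way Git does.
    message = f"error: '{cmd}' is not a known command."
    close = difflib.get_close_matches(cmd, KNOWN_COMMANDS, n=1, cutoff=0.6)
    if close:
        message += f" Did you mean '{close[0]}'?"
    return message
```

Print the result to stderr and exit non-zero; the message now covers what happened and what to do about it.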

Design Your Command Grammar

Good CLIs have an intuitive grammar. Users can guess commands because the patterns are consistent.

Think about kubectl: kubectl get pods, kubectl describe pod nginx, kubectl delete pod nginx. The pattern is always verb resource [name]. Once you learn it, you can guess new commands without reading docs.

Same with Git: git add, git commit, git push. Simple verbs, predictable behavior.

When designing your own CLI, pick a pattern and stick to it. If you use tool create foo for one resource, don’t use tool foo new for another. Consistency beats cleverness.

I like the tool verb [noun] [flags] pattern for most things. It reads like English and scales well as you add features.
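In framework terms, that grammar is just a subcommand tree. A minimal sketch with Python's argparse subparsers (the verbs and resource name are illustrative):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="tool")
    sub = parser.add_subparsers(dest="verb", required=True)
    # One shape for every resource: tool <verb> <name> [flags].
    for verb in ("create", "describe", "delete"):
        cmd = sub.add_parser(verb, help=f"{verb} a resource")
        cmd.add_argument("name", help="resource name")
    return parser

args = build_parser().parse_args(["create", "foo"])
```

Because every verb takes the same shape, a user who has run one command can guess the rest.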

Support Both Humans and Scripts

Here’s something I learned the hard way: your CLI will be used by scripts. Maybe your own scripts, maybe CI pipelines, maybe AI agents. Design for this from day one.

This means:

  • Proper exit codes: 0 for success, non-zero for failure. Always.
  • Machine-readable output: Support --json or --format json for parseable output.
  • No interactive prompts by default: If you need input, take it as flags. Add a --no-interactive flag if your tool has interactive features.
  • Predictable output: Don’t sprinkle random status messages unless the user asks for verbose mode.
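A sketch of the output side in Python, assuming a hypothetical list command; the key move is separating rendering from printing so the JSON path stays clean:

```python
import json

def render_items(items: list[dict], as_json: bool) -> tuple[str, int]:
    """Return (output, exit_code); the caller prints and sys.exit()s."""
    if as_json:
        # Machine-readable path: exactly one JSON document on stdout.
        return json.dumps(items), 0
    # Human path: friendlier formatting is fine here.
    lines = [f"#{item['id']} [{item['label']}]" for item in items]
    return "\n".join(lines), 0

output, code = render_items([{"id": 1, "label": "bug"}], as_json=True)
```

A script or agent can now pipe the JSON path straight into a parser, and the exit code tells it whether to trust the output at all.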

When I built qstack, I made sure every command works non-interactively. This turned out to be crucial—an agent can call qs list --open --label bug and parse the output without getting stuck on a “Press any key” prompt.

The interactive TUI is nice for humans, but the scriptable foundation is what makes the tool actually useful.

Think About Configuration

Nobody wants to type --database postgres://localhost:5432/mydb every single time. Good CLIs support configuration at multiple levels:

  1. Flags: Always available, highest priority
  2. Environment variables: Great for secrets and CI
  3. Config files: For persistent project or user settings
  4. Sensible defaults: Because most of the time, the common case is obvious

The lookup order should be: flags > env vars > config files > defaults. This way, users can set defaults in config but override them when needed.
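That lookup order is just a short chain of fallbacks. A sketch in Python; the environment variable name and default URL are made up:

```python
import os

def resolve_database_url(flag_value, config: dict,
                         env: dict = os.environ) -> str:
    # Precedence: flags > env vars > config file > built-in default.
    if flag_value is not None:
        return flag_value
    if "MYTOOL_DATABASE_URL" in env:          # hypothetical variable name
        return env["MYTOOL_DATABASE_URL"]
    if "database_url" in config:              # parsed from a TOML/YAML file
        return config["database_url"]
    return "postgres://localhost:5432/mydb"   # sensible default
```

Users set the common case once in config, scripts and CI override with an env var, and a flag wins over everything for one-off runs.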

For config files, pick a standard format—YAML, TOML, or JSON—and stick to it. Don’t invent your own. And document where the config file should live.

Make Installation Trivial

This is where a lot of developers drop the ball. You’ve built something great, but nobody can install it.

The best approach: ship a single binary with no dependencies. Go and Rust excel at this. One download, one file, done.

If you can’t do that, support package managers. Homebrew for macOS, apt for Debian, the relevant package manager for your language (pip, npm, cargo). The easier the install, the more people will actually use your tool.

Distribution through Docker works too, especially for complex tools, but it adds friction. A native binary is almost always better if you can manage it.

Add Shell Completion

This is one of those “small” features that makes a massive difference in daily use. Shell completion lets users press Tab to complete commands, flags, and arguments.

Most CLI frameworks can generate completion scripts for bash, zsh, and fish. It’s usually just a few lines of code, and users will love you for it.

Even better: implement dynamic completion. When the user types tool deploy <Tab>, show them a list of available targets fetched at runtime. This turns your CLI from a tool into an interactive environment.
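The framework handles the shell plumbing; your part is usually a small callback that filters live candidates against what's been typed so far. A sketch in Python (the target list is made up, and a real tool would fetch it from its API or local state):

```python
def complete_deploy_target(incomplete: str) -> list[str]:
    # In a real tool this list would come from an API call or local
    # state, ideally cached so completion stays fast.
    targets = ["staging", "staging-eu", "production"]
    return sorted(t for t in targets if t.startswith(incomplete))
```

Click accepts a callback like this through its shell_complete parameter, and Cobra's ValidArgsFunction plays the same role in Go.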

Document Inline

External documentation goes stale. Documentation inside the tool stays current because it’s part of the code.

Every command should have useful --help output. Not just “runs the thing”—actual helpful text explaining what the command does, what flags are available, and examples of real usage.

Examples are crucial. Show the command, show what it does, show the output. Users learn by example, and good examples in help text save everyone time.

$ mytool deploy --help
Deploy an application to the specified environment

Usage:
  mytool deploy <app> [flags]

Examples:
  # Deploy to staging
  mytool deploy myapp --env staging

  # Deploy with specific version
  mytool deploy myapp --env production --version 1.2.3

Flags:
  --env string      Target environment (staging, production)
  --version string  Specific version to deploy (default: latest)
  -v, --verbose     Show detailed deployment progress

This is the bar. Your help text should look something like this.

Make It Pretty (But Not Too Pretty)

A splash of color goes a long way. Green for success, red for errors, yellow for warnings. It’s not just decoration—it’s information design.

But restraint matters. I’ve seen CLIs that look like a rainbow exploded in my terminal. That’s not helpful. Use color to guide attention, not to show off.

And always respect the environment. When output is piped to a file or another program, disable colors automatically, and honor the NO_COLOR environment variable. Some users have accessibility needs or terminal themes where your color choices don’t work.
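Both checks fit in one small helper. A Python sketch (the escape codes are standard ANSI; the helper names are mine):

```python
import os
import sys

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def use_color(stream=sys.stdout, env=os.environ) -> bool:
    # No color when piped (stdout is not a TTY) or when NO_COLOR is set.
    return stream.isatty() and "NO_COLOR" not in env

def paint(message: str, code: str, colorize: bool) -> str:
    # Wrap the message in an ANSI color code only when allowed.
    return f"{code}{message}{RESET}" if colorize else message
```

Call use_color() once at startup and thread the result through; sprinkling isatty() checks everywhere gets messy fast.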

Progress indicators are nice for long-running operations. A simple spinner or progress bar tells users “I’m still working” without overwhelming them with output.

Test It Like a Product

CLI tools need testing just like any other software. But CLI testing has some unique considerations.

Test the happy path, obviously. But also test:

  • Invalid input (malformed flags, missing arguments)
  • Edge cases (empty results, very large inputs)
  • Error conditions (network failures, permission issues)
  • Output format (does --json actually produce valid JSON?)

I write integration tests that actually invoke the CLI binary and check its output and exit codes. Unit tests are fine for internal logic, but the CLI is your public API—test it at that level.
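Here's the shape of such a test in Python. To keep the sketch runnable anywhere, it uses the Python interpreter itself as a stand-in binary; point run_cli at your real executable instead:

```python
import json
import subprocess
import sys

def run_cli(args: list[str]) -> subprocess.CompletedProcess:
    # Invoke the real binary, not internal functions: the CLI is the
    # public API, so exit codes and stdout are what we assert on.
    return subprocess.run(args, capture_output=True, text=True)

# Stand-in for something like `mytool list --json`:
# emit one JSON document on stdout and exit 0.
result = run_cli([sys.executable, "-c",
                  "import json; print(json.dumps({'ok': True}))"])
parsed = json.loads(result.stdout)
```

From here, asserting on returncode, parsing stdout as JSON, and checking that stderr stays quiet covers the contract your scripts and agents actually depend on.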

Keep It Focused

The best CLI tools do one thing well. They don’t try to be everything to everyone.

Look at jq. It filters JSON. That’s it. And it does it brilliantly. If jq also tried to be a web server and a database client and a coffee maker, it would be mediocre at all of them.

When you’re tempted to add a feature, ask yourself: does this belong in this tool, or should it be a separate tool that composes with this one? The Unix philosophy exists for a reason.

Build Something You’d Use

After all these tips, here’s the real secret: build something you’d actually want to use. Put it in your daily workflow. Feel the pain when something doesn’t work right. Fix it.

The CLIs I’m proudest of are the ones I use constantly. Every rough edge gets smoothed because I keep bumping into it. Every missing feature eventually gets added because I need it.

You can’t simulate that by imagining users. You have to be the user.

Good luck building. And when you ship something useful, let me know—I’m always looking for new tools to add to my workflow.