Makefiles are as old as time in the world of software development, but over the years their usefulness has waned due to the rise of IDEs and project files like .csproj/.sln in dotnet, or package.json in Node. However, recently I've found myself wanting to use them again, and here's why:
  1. I have projects that span multiple technologies (e.g. dotnet back-end, Node.js based front-end, terraform/dockerfile/helm IaC, documentation as code etc.).
    • From a single `make run` entrypoint, you can spin up multiple systems in parallel, such as calling `dotnet run` and `npm run dev`.
    • Similarly you can create a single `make test` that will run all tests across tech stacks.
    • Even with a single technology, like dotnet, one solution can often contain an API plus one or more serverless functions, so the startup projects can be easily adjusted.
    • Makefiles can call other makefiles, so you can create specialised makefiles within things like "/src/backend", "/src/frontend", "/docs", "/build", "/tests" etc. and then call them from the "uber makefile" at repo level (similar to the workspace concept) - see the sketch below.
    • Make supports bash completion, so you can quickly see what targets are supported
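    • As a rough sketch, an "uber makefile" along those lines might look like this (the directory layout and the targets in the sub-makefiles are hypothetical):

.PHONY: run test run-backend run-frontend

# fan out to the specialised makefiles; -C runs make inside that directory
run:
	@$(MAKE) -j 2 run-backend run-frontend

run-backend:
	@$(MAKE) -C src/backend run

run-frontend:
	@$(MAKE) -C src/frontend run

# run every suite, one tech stack at a time
test:
	@$(MAKE) -C src/backend test
	@$(MAKE) -C src/frontend test
	@$(MAKE) -C docs test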
  2. I have dependencies that are outside of the development environment (e.g. docker compose files to spin up containers).
    • Creating a make target to spin up your Docker Compose containers means you never have to remember to do it manually before starting the project.
    • Some containers themselves have dependencies, for example passing in a .pfx file, which you can also automate the creation of when it doesn't yet exist.
    • You can call CLIs that are outside of the scope of your development technology, such as running ngrok to serve external traffic.
  3. I have a mixture of package managers/CLI tools used to build things, and want a "standard" way to build (e.g. npm, pnpm, yarn, dotnet, docker, helm).
    • When you're hopping between many different CLI tools from `dotnet`, `npm` (and the variants), `docker`, `expo`, `tf` and the rest, it's easy to forget the build syntax for the particular project - rather than putting that into a README, just put it into the makefile!
    • It's a lot easier to remember `make build` than [p]npm/yarn/dotnet/docker/tf/helm build --Flag1 --Flag2 etc. - for example:
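    • As a sketch (the flags shown are illustrative), the verb stays the same no matter what's underneath:

.PHONY: build

# developers only ever type `make build`; the tool-specific flags live here
build:
	dotnet publish --configuration Release --output ./artifacts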
  4. I like that I can automatically run `npm i` whenever it needs to run (e.g. only when the package.json has changed).
    • By making the target "node_modules" depend on "package.json", Make will only run the recipe, e.g. `npm i`, when package.json has changed or node_modules doesn't yet exist.
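    • A minimal sketch of the pattern - the trailing `touch` is a common safeguard, since make compares timestamps and `npm i` won't necessarily bump the directory's mtime, which would otherwise cause the recipe to re-run every time:

node_modules: package.json
	npm i
	touch node_modules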
  5. I can implement and test my build pipeline locally, then just call `make build` or `make test` from the build server.
    • Building for production usually at least requires calling some CLI tool with the correct arguments (e.g. release configuration, code coverage settings, environment variables etc.).
    • Making the build/test script run consistently between local and build servers makes it much easier to debug.
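    • For illustration, a sketch of a `make test` target with those arguments baked in (assuming a dotnet project with the coverlet collector package installed):

.PHONY: test

# the exact same invocation runs locally and on the build server
test:
	dotnet test --configuration Release --collect:"XPlat Code Coverage" --logger trx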
Of course there are other technologies at your disposal, such as using your .sln file configurations and build targets, using npm workspaces, using scripting languages like bash/powershell/JavaScript, or using Dockerfiles to do everything. I've settled on makefiles as a good, standard, technology-stack-agnostic "entry point" which can then tap into the other tooling for the specific project, even if that's just "turbo" which takes care of the rest. In my experience, nine times out of ten a Git repo will end up with at least two different CLI tools required to build everything once you factor in source code, documentation, infrastructure as code, tests and CI/CD pipelines. If you're thinking you might give it a try, there are some things to watch out for:
  1. "Make" isn't Docker, so you can't guarantee the tools you're calling in your scripts are installed.
    • This is no different from anything else: a README full of commands, a .sh/.ps1/.mjs file full of commands, or even a ".sln" file all require something to be installed first.
    • Just be sensible and stick to the "common tooling" of the ecosystem you expect developers to have (npm/node/dotnet/docker, for example) - and you can make missing tools fail fast, as sketched below.
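    • One mitigation is a guard target that fails fast with a friendly message - a sketch using a make pattern rule and POSIX `command -v`:

.PHONY: build

# `make build` stops early with a helpful error if a required CLI is missing
require-%:
	@command -v $* >/dev/null 2>&1 || { echo "error: '$*' is not installed"; exit 1; }

build: require-dotnet require-docker
	dotnet build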
  2. Makefiles aren't bash, so be careful of syntax
    • They look a lot like bash, but they're not - recipe lines are handed off to a shell, but it's `/bin/sh` by default (which might not be bash), each line runs in its own shell, and `$` has to be escaped as `$$` to reach the shell; outside of recipes you need `$(shell COMMAND)` to run anything.
    • This means bash scripts aren't always a lift-and-shift into makefiles - a few examples:
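.PHONY: broken fixed

# each recipe line runs in its own shell, so this prints make's own working directory
broken:
	cd /tmp
	pwd

# chain with && to keep state, and escape $ as $$ so it reaches the shell
fixed:
	cd /tmp && echo $$PWD

# $(shell ...) is for running commands at make-time, e.g. in variable assignments
GIT_SHA := $(shell git rev-parse --short HEAD)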
  3. Phony targets always run
    • Since they're not backed by a physical file, "make" can't tell whether anything has "changed", so it always runs them
    • This could mean you're running things unnecessarily multiple times, which isn't a problem if the underlying tool is idempotent or performant (e.g. running `docker compose up -d` when it's already running doesn't really do much)
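    • When the repeated work does matter, a common idiom is to back the target with a "stamp" file so it only re-runs when its inputs change (file names here are illustrative):

# re-runs `docker compose up` only when the compose file changes
.docker/compose.stamp: docker-compose.yml
	docker compose up -d
	touch $@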
  4. Consider cross platform usage
    • Make comes with most Linux distros, but not Windows, and on macOS only with the Xcode Command Line Tools - so if you're sharing a repo across multiple OSes you need to consider whether "make" is still the best tool
Here are a couple of example extracts from makefiles. Example 1 shows targets for running a dotnet API with some Azure Functions (an Event Grid handler and an Edge API), ngrok, and spinning up a Docker Compose project, as well as creating a shared .pfx file that's mounted into a container:

.PHONY: run run-api run-containers run-EventGridHandler.ExampleHandler run-ExampleEdge run-ExampleEdge-api run-ExampleEdge-ngrok

# create the local dev cert that can be used by the event grid simulator
.docker/azureEventGridSimulator.pfx:
	dotnet dev-certs https --export-path .docker/azureEventGridSimulator.pfx --password ExamplePW

run-api: run-containers
	cd Api && dotnet run

run-EventGridHandler.ExampleHandler: run-containers
	cd Func.EventGridHandler.ExampleHandler && func start
run-ExampleEdge:
	@$(MAKE) -j 2 run-ExampleEdge-api run-ExampleEdge-ngrok

run-ExampleEdge-api: run-containers
	cd Func.ExampleEdge && func start --port 7073

run-ExampleEdge-ngrok:
	ngrok http --domain=example.ngrok-free.app 7073 > /dev/null

run-containers: .docker/azureEventGridSimulator.pfx
	docker compose up -d

# run everything
run:
	dotnet build && \
	$(MAKE) -j 3 run-api run-EventGridHandler.ExampleHandler run-ExampleEdge
Developers can run `make <target>` for specific projects only, or run `make run` to spin up everything. Example 2 shows building two front-end projects in production mode and copying static assets into the Next.js build output:

.PHONY: build build-widget build-site

nextjs-website/node_modules: nextjs-website/package.json
	cd nextjs-website && pnpm i

build-site: nextjs-website/node_modules
	cd nextjs-website && export NODE_ENV=production && pnpm run build && cp -r .next/static .next/standalone/.next/static && cp -r public .next/standalone

build:
	@$(MAKE) -j 2 build-site build-widget

javascript-widget/node_modules: javascript-widget/package.json
	cd javascript-widget && pnpm i

build-widget: javascript-widget/node_modules
	cd javascript-widget && export NODE_ENV=production && pnpm run build
The build pipeline simply has to run `corepack enable && make build`, without worrying about correctly assembling the outputs. In summary, "make" provides a convenient way to package up "commands" as "targets" and then create a dependency graph amongst those targets, so that running a single command daisy-chains together all of the prerequisites. It's mainly suited to Linux/Unix, so it can be very useful for creating "CI/CD provider agnostic" build scripts (i.e. not using provider tasks) that can then be called from the provider's YAML pipelines, and/or used locally to execute the build steps or to spin up multi-faceted solutions easily.