NPM scripts are often used to abbreviate longer commands with many arguments, or to sequence the multiple commands required to compile or run a Node-based system. Sometimes they get complex: when you have multiple moving parts to consider, such as conditionally setting environment variables, conditional compilation steps, or multiple sequential and parallel stages of the build. This is especially true if you are trying to write a startup script for automation tests, which probably need to spin up mocked APIs, build and start the system under test, wait for startup to complete, then execute the tests. It is equally useful when you have multiple configurations of the application and its supporting local development services (such as running a mock, changing an environment flag, etc.).

This can be achieved using daisy-chained NPM scripts and helper packages such as "npm-run-all", "concurrently" and "start-server-and-test", but I found that this doesn't scale well when developers have multiple options to choose from when running the app. A contrived example of some options you might want to give developers:

- run in "development" or "production" mode (usually determined by running either "npm run dev" or "npm run build/start")
- set environment variables based on the mode (e.g. you might have used "cross-env-shell" in your npm script)
- start a mocked API, or don't (maybe you'd spin this up asynchronously using "npm-run-all" or "concurrently")
- build for "flavour 1" or "flavour 2" of the code (say, for example, you can choose whether to compile for REST or GraphQL at build time)

You might also have some automation tests that need to:

- start a mocked API (and wait for it to load)
- build and run the system under test in production mode (and wait for it to load)
- run the tests (in local or BrowserStack mode)
- kill all the child processes

Aside from the complexity of describing all of the above in NPM scripts, it gets very repetitive with each variation.
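To make the repetition concrete, here is a sketch (with hypothetical script and task names) of what just a few of those combinations look like when spelled out as raw package.json scripts:

```json
{
  "scripts": {
    "dev:mock:graphql": "cross-env-shell MOCK_API=true API_MODE=GraphQL \"npm-run-all -p start-wiremock next-dev\"",
    "dev:mock:rest": "cross-env-shell MOCK_API=true API_MODE=REST \"npm-run-all -p start-wiremock next-dev\"",
    "dev:nomock:graphql": "cross-env-shell MOCK_API=false API_MODE=GraphQL \"next dev\"",
    "dev:nomock:rest": "cross-env-shell MOCK_API=false API_MODE=REST \"next dev\""
  }
}
```

And that is only "development" mode: each new option doubles the number of scripts, and most of every line is duplicated boilerplate.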
Even with the limited choices described above, you get the Cartesian product of all combinations, each represented as a script:

- "development mode", "with mock", "GraphQL"
- "development mode", "with mock", "REST"
- "development mode", "no mock", "GraphQL"
- "development mode", "no mock", "REST"
- "production mode", "with mock", "GraphQL"
- "production mode", "with mock", "REST"
- "production mode", "no mock", "GraphQL"
- "production mode", "no mock", "REST"

You can remove one level of complexity by using NPM configuration settings that live in your .npmrc file to switch between modes, such as:
mock-api='true'
api-mode='REST'
use-browserstack='false'
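Inside any script that npm runs, these settings surface as npm_config_* environment variables (with hyphens becoming underscores, as the test-runner.ts script below relies on). A minimal sketch of reading them with sensible defaults — npmFlag is a hypothetical helper, not part of npm:

```typescript
// Hypothetical helper: read an npm config value with a fallback.
// npm exposes .npmrc settings to scripts as npm_config_<key>,
// e.g. mock-api='true' becomes process.env.npm_config_mock_api.
function npmFlag(name: string, fallback: string): string {
  return process.env[`npm_config_${name}`] ?? fallback;
}

const useMock = npmFlag('mock_api', 'true') === 'true';
const apiMode = npmFlag('api_mode', 'REST'); // 'REST' or 'GraphQL'
```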
And then only having the usual three scripts in your package.json (for example, using Next.js) that take the config settings into account: "dev", "build" and "start".

By putting settings in the .npmrc file we get all the power of NPM configuration: by default npm uses the values defined in .npmrc, but these can be overridden with environment variables (so maybe you'd set different defaults in your Dockerfile or build pipeline than what local devs might use), or with CLI arguments (so maybe you'd do this when spinning up the system from an automation test suite).

The next complexity to solve is how to interpret the NPM configuration settings, such that the right build steps are executed, sequentially or in parallel as appropriate. This is where I decided that NPM scripts still weren't the best choice, and that it would be easier to write that logic as TypeScript (and have npm simply execute the script). The example below shows how this can work for the automation test scenario, making use of a "test-runner" script and re-using the "build" and "start" scripts from the system under test.

package.json snippet:
"test:automation": "cross-env-shell NODE_ENV='production' \"ts-node --project ./tsconfig.commonjs.json test-runner.ts\"",
tsconfig snippet:
"compilerOptions": {
	"module": "commonjs"
}
test-runner.ts:

#!/usr/bin/env ts-node
import { subProcess, subProcessSync } from 'subspawn';

// example of reading NPM config from within a script
let serverHost = 'localhost';
if (process.env.npm_config_use_browserstack === 'true') {
  serverHost = 'bs-local.com';
}

// example of overriding NPM config from within a script (to disable the system's built-in mock API when this script runs)
process.env.npm_config_mock_api = 'false';

// example of setting general environment variables used by your application (override api to point at the mock)
process.env.NEXT_PUBLIC_API_HOST = `http://${serverHost}:5038`;

// example of spinning up background services and waiting for them to load
subProcess('automation-tests', 'npm run start-wiremock', true);
subProcessSync('npx wait-on tcp:5038', false);

// example of re-using scripts that exist for spinning up the system
process.chdir('../../src');

if (process.env.npm_config_skip_build !== 'true') {
  process.env.PUBLIC_URL = `http://${serverHost}:3000`;

  require('../../src/build'); // pull in the build script for the SUT
}
// start the SUT
require('../../src/start'); // pull in the start script for the SUT
process.chdir('../tests/integration');

// begin the test execution
subProcessSync('npm run execute-tests', true);

// exiting the process will also kill all the background processes
process.exit(0);

export {};

You'll notice the use of "npx wait-on", a handy package for testing when a dependency has become available. You'll also notice "subspawn", an NPM package I created specifically for this use case, to address the complexities of spawning, killing and integrating the stdout of child processes in Node in a cross-platform way.
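Under the hood, waiting on a tcp: resource boils down to polling until a TCP connection succeeds. A rough, simplified sketch of that idea using Node's net module (waitForTcp is illustrative only, not wait-on's actual implementation):

```typescript
import net from 'net';

// Poll until a TCP connection to host:port succeeds, or give up
// once the deadline passes. Retries every 250ms on failure.
function waitForTcp(port: number, host = 'localhost', timeoutMs = 30000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const attempt = () => {
      const socket = net.connect(port, host);
      socket.once('connect', () => {
        socket.destroy();
        resolve();
      });
      socket.once('error', () => {
        socket.destroy();
        if (Date.now() > deadline) {
          reject(new Error(`timed out waiting for ${host}:${port}`));
        } else {
          setTimeout(attempt, 250);
        }
      });
    };
    attempt();
  });
}
```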