
Blog (pg. 2)

  • Published on
    I like to work in pictures - and I use diagrams A LOT as a way to express the problem or solution in a digestible format. A picture is worth a thousand words, after all! In the early stages of understanding, I find it's best to stay in a diagramming tool like "draw.io" - you can quickly throw things onto a page, often in real time while collaborating on a screen share, and let the diagram drive the narrative of the conversation. The outputs of these diagrams can easily be shared for all to see and quickly build up a dossier of useful documentation for when it comes to explaining or coding the solution down the line. I tend not to worry so much at this stage about following any particular "style" of diagram - draw.io supports everything from swim lanes and flow charts to deployment and component diagrams - so I usually go for whatever combination is the most useful or informative for the situation. The important measure for a diagram is that everyone who needs to understand it, can.
    
    As you get closer to an agreed architecture, there comes a point where it's worth moving from a "diagram" to a "model" (by that I mean not just pictures, but written syntax that can be output as diagrams - i.e. "documentation as code"). What springs to mind here is PlantUML (or Mermaid), which allows you to define a plethora of diagram types using written syntax. The main benefit of this in my experience is change control: written syntax plays very nicely with source control systems such as Git, so you can request peer reviews of documentation changes, keep documentation aligned with code changes, and see the history of changes with commit messages explaining why things changed. As a "standard" set of diagrams to get started, I'd recommend the "C4 Model" - this will give you a good, repeatable basis for capturing the system architecture at various levels of abstraction. I don't really recommend going beyond components, and even then I'd use that level of detail sparingly. However, both PlantUML and Mermaid support multiple other diagram types, so it's worth having a dig through to see what you find useful. I especially like to create sequence diagrams as text, as I find that easier than doing it graphically. You can even graphically represent JSON or YAML data, create pie charts and more!
    
    I tend to categorise documentation into "specific" or "overarching". "Specific" documentation you'd find in the Git repo of the specific system it relates to, typically in a "/docs" folder alongside "/src"; here you will find the software and solution level documentation focused on the internal workings of this system, with perhaps minimal references to the external systems it interacts with (e.g. at the "context" level). Some good examples would be the lower level C4 diagrams, sequence or flow diagrams etc. Sometimes, though, documentation focuses on the interaction between systems rather than a specific system, or it documents the business processes using mind maps etc. and isn't "about" any particular API. To me that's "overarching" documentation, and I'd have a dedicated repo for that genre of documentation.
    
    Until recently, I was using an open source tool called "C4Builder" to manage my documentation projects. It allows you to set up a structure of "md" (Markdown) and "puml" (PlantUML) files and then generate them into an HTML/image output, which can be hosted directly in the Git repository.
    There are plugins for VS Code and JetBrains Rider that allow you to write Markdown/PlantUML with previews while you are working on the documentation, which C4Builder can then "compile". However, this tool seems to no longer be maintained, so it doesn't support the latest version of PlantUML, which limits what diagrams and syntax are available to you. I have created a Docker image version that monkey patches in the latest (at the time of writing) PlantUML so you can at least continue to build .c4builder projects, but I've now discovered a new way to manage my documentation: JetBrains Writerside. Writerside is still in EAP and has just announced support for PlantUML; it already supports Mermaid diagrams and has a tonne of other useful features for creating and publishing technical documentation, such as support for OpenAPI specs and markup/markdown elements, so it's well worth a look!
    
    To sum up: I recommend creating diagrams throughout the software development process, including during requirements gathering, to visualise and get a collective understanding of the problem and agreement on the proposed solution(s). As you move towards starting to write code, formalise the necessary (not everything) documentation into a syntactic model that is peer reviewed and source controlled. Store this either alongside the code, or in an "overarching" architecture repository. Ensure your CI/CD process automatically publishes the latest documentation when it changes. Finally, make sure the documentation you are creating has a purpose and stays up to date - if it's old or no longer serves a purpose, archive it!
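    To make the "diagrams as text" point concrete, here's a minimal PlantUML sequence diagram sketch - the participants and flow below are purely illustrative, not from any real system:
    
    @startuml
    ' hypothetical participants, for illustration only
    actor User
    participant "Web App" as Web
    participant "Orders API" as API
    database "Orders DB" as DB
    
    User -> Web : Place order
    Web -> API : POST /orders
    API -> DB : INSERT order
    DB --> API : order id
    API --> Web : 201 Created
    Web --> User : Order confirmation
    @enduml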
  • Published on
    Makefiles are as old as time in the world of software development, but over the years their usefulness has waned due to the rise of IDEs and project files like .csproj/.sln in dotnet, or package.json in Node. However, recently I've found myself wanting to use them again, and here's why:
    1. I have projects that span multiple technologies (e.g. dotnet back-end, Node.js based front-end, terraform/dockerfile/helm IaC, documentation as code etc.).
      • From a single `make run` entrypoint, you can spin up multiple systems in parallel, such as calling `dotnet run` and `npm run dev`.
      • Similarly you can create a single `make test` that will run all tests across tech stacks.
      • Even with a single technology, like dotnet, often one solution can contain an API plus one or more serverless functions, so the startup projects can easily be adjusted.
      • Makefiles can call other makefiles, so you can create specialised makefiles within things like "/src/backend", "/src/frontend", "/docs", "/build", "/tests" etc. and then call them from the "uber makefile" at repo level (similar to the workspace concept) - see the sketch after this list.
      • Make supports bash completion, so you can quickly see what targets are supported.
    2. I have dependencies that are outside of the development environment (e.g. docker compose files to spin up containers).
      • Creating a make target to spin up your Docker Compose containers means you never have to remember to do it manually before starting the project.
      • Some containers themselves have dependencies, for example passing in a .pfx file, which you can also automate the creation of when it doesn't yet exist.
      • You can call CLIs that are outside of the scope of your development technology, such as running ngrok to serve external traffic.
    3. I have a mixture of package managers/CLI tools used to build things, and want a "standard" way to build (e.g. npm, pnpm, yarn, dotnet, docker, helm).
      • When you're hopping between many different CLI tools from `dotnet`, `npm` (and the variants), `docker`, `expo`, `tf` and the rest, it's easy to forget the build syntax for the particular project - rather than putting that into a README, just put it into the makefile!
      • It's a lot easier to remember `make build` than [p]npm/yarn/dotnet/docker/tf/helm build --Flag1 --Flag2 etc.
    4. I like that I can automatically run `npm i` whenever it needs to run (e.g. only when the package.json has changed).
      • By making the target "node_modules" depend on "package.json", Make will only run the script, e.g. `npm i`, when package.json has changed, or if node_modules doesn't exist.
    5. I can implement and test my build pipeline locally, then just call `make build` or `make test` from the build server.
      • Building for production usually at least requires calling some CLI tool with the correct arguments (e.g. release configuration, code coverage settings, environment variables etc.).
      • Making the build/test script run consistently between local and build servers makes it much easier to debug.
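    To make the "uber makefile" idea from point 1 concrete, here's a minimal sketch that delegates to specialised makefiles - the folder names and targets are hypothetical:
    
    .PHONY: build test
    
    # delegate to the specialised makefiles in each area of the repo
    build:
    	$(MAKE) -C src/backend build
    	$(MAKE) -C src/frontend build
    	$(MAKE) -C docs build
    
    test:
    	$(MAKE) -C src/backend test
    	$(MAKE) -C src/frontend test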
    Of course there are other technologies at your disposal, such as using your .sln file configurations and build targets, using npm workspaces, using scripting languages like bash/PowerShell/JavaScript, or using Dockerfiles to do everything. I've settled on makefiles being a good, standard, technology-stack-agnostic "entry point" which can then tap into the other tooling for the specific project, even if that's just "turbo" which takes care of the rest. In my experience, 9 times out of 10 a Git repo will end up with at least 2 different CLI tools required to build everything when you factor in source code, documentation, infrastructure as code, tests and CI/CD pipelines. If you're thinking you might give it a try, there are some things to watch out for:
    1. "Make" isn't Docker, so you can't guarantee the tools you're calling in your scripts are installed.
      • This is no different to anything else - whether it's a README full of commands, an .sh/.ps1/.mjs file full of commands, or even a ".sln" file, something has to be installed first.
      • Just be sensible and stick to the "common tooling" of the ecosystem you expect developers to have (npm/node/dotnet/docker, for example).
    2. Makefiles aren't bash, so be careful of syntax
      • They look a lot like bash, but they're not - if you want to use the shell you'd need to wrap the code in $(shell COMMAND), but be careful of cross-platform issues as it still might not be bash!
      • This means bash scripts aren't always lift-n-shift into makefiles
    3. Phony targets always run
      • Since they're not backed by a physical file, "make" can't compute if something has "changed", so it has to run them
      • This could mean you're running things unnecessarily multiple times, which isn't a problem if the underlying tool is idempotent or performant (e.g. running `docker compose up -d` when it's already running doesn't really do much)
    4. Consider cross platform usage
      • Make comes with most Linux distros, but not Windows and probably not macOS - so if you're sharing a repo across multiple OSs you need to consider whether "make" is still the best tool.
    Here are a couple of example extracts from makefiles. Example 1 shows targets for running a dotnet API with some Azure Functions (an Event Grid handler and an Edge API), ngrok, and spinning up a Docker Compose project, as well as creating a shared .pfx file mounted into a container:
    
    .PHONY: run
    .PHONY: run-api
    .PHONY: run-containers
    .PHONY: run-EventGridHandler.ExampleHandler
    .PHONY: run-ExampleEdge
    .PHONY: run-ExampleEdge-api
    .PHONY: run-ExampleEdge-ngrok
    
    # create the local dev cert that can be used by the event grid simulator
    .docker/azureEventGridSimulator.pfx:
    	dotnet dev-certs https --export-path .docker/azureEventGridSimulator.pfx --password ExamplePW
    
    run-api: run-containers
    	cd Api && dotnet run
    	
    run-EventGridHandler.ExampleHandler: run-containers
    	cd Func.EventGridHandler.ExampleHandler && func start
    		
    run-ExampleEdge:
    	@$(MAKE) -j 2 run-ExampleEdge-api run-ExampleEdge-ngrok
    
    run-ExampleEdge-api: run-containers
    	cd Func.ExampleEdge && func start --port 7073
    	
    run-ExampleEdge-ngrok:
    	ngrok http --domain=example.ngrok-free.app 7073	> /dev/null
    	
    run-containers: .docker/azureEventGridSimulator.pfx
    	docker compose up -d
    
    # run everything
    run:
    	dotnet build && \
    	make -j 3 run-api run-EventGridHandler.ExampleHandler run-ExampleEdge
    
    Developers can run `make <target>` for specific projects only, or run `make run` to spin up everything. Example 2 shows building two front-end projects in production mode and copying static assets into the Next.js build output:
    
    .PHONY: run
    .PHONY: build
    .PHONY: build-widget
    .PHONY: build-site
    
    nextjs-website/node_modules: nextjs-website/package.json
    	cd nextjs-website && pnpm i
    
    build-site: nextjs-website/node_modules
    	cd nextjs-website && export NODE_ENV=production && pnpm run build && cp -r .next/static .next/standalone/.next/static && cp -r public .next/standalone
    
    build:
    	@$(MAKE) -j 2 build-site build-widget 
    
    javascript-widget/node_modules: javascript-widget/package.json
    	cd javascript-widget && pnpm i
    	
    build-widget: javascript-widget/node_modules
    	cd javascript-widget && export NODE_ENV=production && pnpm run build
    
    The build pipeline simply has to run `corepack enable && make build` without worrying about correctly assembling the outputs. In summary, "make" provides a convenient way to package up "commands" as "targets" and then create a dependency graph amongst those targets, such that running a single command can daisy-chain together all of the prerequisites. It's mainly suited to Linux/Unix, so it can be very useful for creating "CI/CD provider agnostic" build scripts (i.e. not using provider tasks) that can then be called from the provider's YAML pipelines and/or used locally to execute the build steps or to spin up multi-faceted solutions easily.
  • Published on
    In order to make deployments to Vercel fully repeatable and automatable, I wanted the entire process to be encapsulated in the build process and happen using the CLI, rather than requiring some of the work to be done manually through the Vercel UI (e.g. setting up projects and adding sensible defaults for environment variables). Some of the steps I wanted to be able to automate are:
    1. Create the "Project" in Vercel
    2. Deploy an application (from within a monorepo)
    3. Configure the project as part of the deployment, to avoid having to configure it through the UI
    4. Configure the deployed application environment variables with sensible defaults from .env
    5. Layer on environment specific environment variables (similar to how Helm ".values" files work)
    To accomplish this, I created a makefile which can be used in conjunction with some config files to perform the transformations and CLI operations that automate the deployment. The initial folder structure looks like this:
    
    /                            <-- monorepo root
    /apps/exampleApp/*           <-- Next.js application
    /apps/exampleApp/.env        <-- Next.js application default variables
    /build/deploy/makefile       <-- deployment commands
    /build/deploy/dev.env.json   <-- development environment specific variables
    /build/deploy/vercel.json    <-- Vercel project configurations
    /build/deploy/token          <-- Vercel CLI token, this should be swapped in from secrets
    /packages/*                  <-- other npm projects in the monorepo
    
    The makefile contents look as follows:
    
    .PHONY: create
    .PHONY: deploy-dev
    
    MAKEFILE_PATH:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
    ROOT_PATH:=$(shell realpath "${MAKEFILE_PATH}/../..")
    
    create:
    	npx vercel project add YOUR_PROJECT_NAME --token "$(shell cat token)"
    
    # link the monorepo root to the Vercel project (creates .vercel/project.json)
    ../../.vercel/project.json:
    	npx vercel link --cwd ${ROOT_PATH} -p YOUR_PROJECT_NAME --yes --token "$(shell cat token)"
    
    ../../.vercel/base-env.json: ../../apps/exampleApp/.env
    	cat ../../apps/exampleApp/.env | \
    	jq -Rn '{ "env": [inputs | select(length > 0 and (startswith("#") | not)) | capture("(?<key>.+?)=(?<value>.+)") | {(.key): .value }] | add }' \
    	> ../../.vercel/base-env.json
    
    # merge project config + default env + dev overrides into a single local config
    ../../.vercel/dev-local-config.json: vercel.json dev.env.json ../../.vercel/base-env.json
    	jq -s '.[0] * .[1] * .[2]' vercel.json ../../.vercel/base-env.json dev.env.json \
    	> ../../.vercel/dev-local-config.json
    
    deploy-dev: ../../.vercel/project.json ../../.vercel/dev-local-config.json
    	npx vercel deploy --cwd ${ROOT_PATH} --token "$(shell cat token)" --local-config ../../.vercel/dev-local-config.json
    
    clean:
    	rm -rf ../../.vercel
    
    Essentially what you have is one command, "create", to create the remote project in Vercel, and one command, "deploy-dev", to deploy the application using the development variables. All the other files are used to generate a custom configuration for the deploy step. The other significant files are described below. vercel.json - this is where you can configure the Vercel project settings:
    
    {
    	"framework": "nextjs",
    	"outputDirectory": "apps/exampleApp/.next",
    	"env": {
    		"EXAMPLE_SETTING": "some_value"
    	}
    }
    
    dev.env.json - just the environment section for "dev" deployments, e.g.
    
    {
        "env": {
            "EXAMPLE_SETTING_A": "dev.specific.value"
        }
    }
    
    The contents of your typical .env file might look like this:
    EXAMPLE_SETTING_A="default.value"
    EXAMPLE_SETTING_B="another one"
    
    You will notice that the makefile also makes reference to several files in the .vercel folder. This folder is transient and is created by "vercel link" - it isn't checked in to Git - but here's a description of what the files do:
    
    /.vercel                        <-- created by "vercel link", not committed to Git
    /.vercel/project.json           <-- created by "vercel link"
    /.vercel/base-env.json          <-- sensible defaults, created from .env by the makefile, replicating whatever is in .env for the app
    /.vercel/dev-local-config.json  <-- the combined configuration values created by the makefile (project settings + dev variables) to be used on the CLI
    
    In the above example, base-env.json would look like this:
    
    {
        "env": {
            "EXAMPLE_SETTING_A": "default.value",
            "EXAMPLE_SETTING_B": "another one"
        }
    }
    
    The dev-local-config.json would look like:
    
    {
    	"framework": "nextjs",
    	"outputDirectory": "apps/exampleApp/.next",
    	"env": {
    		"EXAMPLE_SETTING": "some_value",
    		"EXAMPLE_SETTING_A": "dev.specific.value",
    		"EXAMPLE_SETTING_B": "another one"
    	}
    }
    
    So you can see that the final configuration sent to Vercel for the "deploy-dev" step configures the project as Next.js, configures the location of the build assets, and has a three-way combined "env" section from "vercel.json" + ".env" + "dev.env.json". With this starting point you could now add more environments simply by having additional "*.env.json" files and replicating the makefile steps to generate and use that config, as sketched below.
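    For example, a hypothetical "prod" environment could be added with a "prod.env.json" file and two extra rules that mirror the dev ones (the file name, target name and use of the --prod flag here are illustrative, not part of the original setup):
    
    # hypothetical: merge project config + defaults + prod overrides
    ../../.vercel/prod-local-config.json: vercel.json prod.env.json ../../.vercel/base-env.json
    	jq -s '.[0] * .[1] * .[2]' vercel.json ../../.vercel/base-env.json prod.env.json \
    	> ../../.vercel/prod-local-config.json
    
    # hypothetical: deploy using the prod configuration
    deploy-prod: ../../.vercel/project.json ../../.vercel/prod-local-config.json
    	npx vercel deploy --prod --cwd ${ROOT_PATH} --token "$(shell cat token)" --local-config ../../.vercel/prod-local-config.json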
  • Published on
    Containerising a Next.js application that uses SSG (where the content is different per environment) isn't currently that great of an experience. The Vercel docs suggest that you would simply build a container image targeting a certain environment, but that doesn't sit well with the usual approach to containerisation and route to live process of many software teams. In this blog post I'll dig into the problem with some suggestions on how you can solve it.

    What is SSG?

    SSG (static site generation) is a feature of Next.js that allows you to pre-compute the content of pages, based on content from a headless CMS, at BUILD TIME, so that your website doesn’t need to communicate with the CMS on a per-request basis. This improves website performance, since pre-computed responses are ready to go, and reduces load on the CMS server. SSG can be broken down into two categories, based on whether the route is fixed or dynamic, e.g.:
    
    /page.tsx            ← fixed route, can contain SSG generated content
    /pages/[...page].tsx ← dynamic route (includes slug), build time SSG pages defined by getStaticPaths

    What is ISR?

    ISR (incremental static regeneration) is a feature of Next.js that, given an existing SSG cached page, will at RUN TIME, go and rebuild/refresh that cache with the latest content from the CMS, so that SSG pages do not become stale. This gives you the benefits of a “static” website, but still means the underlying content can be editable without rebuilding/redeploying the site.
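    As a rough sketch of what opting into ISR looks like in the pages router (the page, CMS URL and 60-second window below are illustrative assumptions, not from a real project):
    
    // pages/index.tsx - illustrative ISR example
    import type { GetStaticProps } from 'next';
    
    type Props = { title: string };
    
    export const getStaticProps: GetStaticProps<Props> = async () => {
      // hypothetical CMS call - replace with your own content fetch
      const res = await fetch('https://cms.example.com/api/home');
      const content = await res.json();
    
      return {
        props: { title: content.title },
        // re-generate this page in the background, at most once every 60 seconds
        revalidate: 60,
      };
    };
    
    export default function Home({ title }: Props) {
      return <h1>{title}</h1>;
    }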

    What about environment variables?

    Typically when you’re building an app in JavaScript you use “process.env.XYZ” in the code so that the value is substituted with an environment variable. For code that runs on the server, the value is substituted by the Node process in real-time. For code that runs on the client, the value is swapped in for the literal at BUILD TIME by the compiler.

    Sounds great, what’s the problem?

    The problem stems from SSG and client-side environment variables being a “build time” computation. In order to build a Docker image that is “ready to start”, you’d need to:
    • Be happy that you’re building an image targeting a specific environment
      • A container image would not be able to be deployed to any other environment than the one it was created for
    • Be happy that you’re baking in a certain set of client-side environment variables
      • Changing environment variables and restarting the container image has no effect on client-side code, you’d need to rebuild from source.
    • Be happy that you’re baking in a “point-in-time” cache of the content
      • This would get more and more stale as time goes by (ISR kind of solves this issue but only when it kicks in (e.g. after 5 secs) and the delta would keep increasing)
    • Have connectivity to the target environment CMS API to get content during the build
      • In order to get the content at build time, you’d need network connectivity between your build agent and whatever private network is hosting your content/CMS server.
    None of the above makes for a good 12 factor app and is not consistent with the usual approach to containerisation (being that the same image can be configured/deployed many times in different ways).

    What is the solution?

    Environment Variables
    
    For environment variables, luckily there is an easy solution known as “Runtime Configuration” - this essentially keeps the “process.env” parts of the code on the server, and the client side gets access to the config by calling a React hook and using the configuration object. Note that this has since been deprecated - read about your options here: https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables#runtime-environment-variables
    
    SSG/ISR
    
    ISR we get just by including “revalidate” in the getStaticProps return value. This means content is refreshed on the next request after every n seconds (configured per page). However, the “initial content” is still whatever was included at the SSG/build stage. You can also forcefully update the content cache using "on demand revalidation", by creating an API endpoint that calls the revalidate method (see Data Fetching: Incremental Static Regeneration | Next.js).
    
    For SSG pages that don’t use dynamic routes there is no simple solution; having build time connectivity to the CMS and building images specific to a single environment is out of the question, so there is an alternative approach.
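    Before moving on, here's a rough sketch of such an "on demand revalidation" endpoint in the pages router (the route name, secret check and query parameters are illustrative assumptions):
    
    // pages/api/revalidate.ts - illustrative on-demand revalidation endpoint
    import type { NextApiRequest, NextApiResponse } from 'next';
    
    export default async function handler(req: NextApiRequest, res: NextApiResponse) {
      // hypothetical shared secret so only the CMS/webhook can trigger revalidation
      if (req.query.secret !== process.env.REVALIDATE_SECRET) {
        return res.status(401).json({ message: 'Invalid token' });
      }
    
      // re-generate the cached page for the given path
      const path = (req.query.path as string) ?? '/';
      await res.revalidate(path);
    
      return res.json({ revalidated: true, path });
    }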
    I just want SSG, even if it means deferring the "next build"
    
    During the Dockerfile “build” phase, we could omit the "next build" step entirely; a better option is to spin up a mocked “CMS" so that we can still perform the “next build”, which at least ensures the code compiles and warms up the “.next” cache of transpiled output (even though the content cache is still empty). We can then set the image entrypoint to “npm run build && npm run start”, so that wherever the container finds itself being spun up, it will re-build the baked-in code files with the environment variables provided and will connect to the configured CMS to generate its cache. If you're using "readiness" checks in Kubernetes, the pod won't come online until the cache has been generated.
    
    The pros of this approach: it’s very simple to reason about, it follows the usual Docker paradigms with regards to configurability and deployability, and it takes care of client-side environment variables without the need for runtime configuration (simpler DX). The cons: since the production image needs to build the source on startup, you need to include the source code files and the package “devDependencies” so that this can occur, and the container startup is slower than if it were pre-built.

    Alternative considerations:

    A) Scrap SSG - use SSR + CDN caching
    
    Since environment variables are easily solved through runtime configuration, we could solve that problem that way and replace SSG with SSR plus a CDN for caching. Pros: removes the requirement for the “build on startup”, which makes the container images smaller and faster to boot. Cons: relies on external CDN tooling, which would require a different approach for warming up and refreshing caches, and it's not ideal that we must forfeit SSG.
    
    B) Vercel could offer “runtime only SSG” (basically ISR but with an empty starting point)
    
    If Vercel ever creates a feature whereby SSG can become a “run time” rather than “build time” operation, then we should switch to using that, in combination with runtime configuration for environment variables. Pros: we can use a multistage Dockerfile to split “build time” from “run time” dependencies, so the container image will be smaller, and it's more 12FA compliant as the build artifact is compiled code only. Cons: still has a slower "fully ready and cached" startup time compared to build time SSG, due to SSG/ISR only kicking in after the container has started (although with file-level caching on a network share this would only apply to the first container instance).
    
    Footnote on option B): this is already possible for “dynamic routes” (i.e. routes that use a slug and would export getStaticPaths). In this case you can return an empty array of paths and fallback mode “blocking”, so that during the build there are no SSG pages found, but at runtime any URLs requested act like SSR on first request and are then cached. You can populate the cache by triggering “on demand revalidation” at runtime (using the actual paths you’d like to generate), by calling an API endpoint you have created for this purpose. This is hinted at on Vercel’s website here: Data Fetching: getStaticPaths | Next.js
    
    NB. With all of the above, when using SSG/ISR, if multiple instances of the site are running and it’s absolutely vital that the content is the same across all instances, then you should use a network share with file caching, as noted here: Data Fetching: Incremental Static Regeneration | Next.js
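    A minimal sketch of that "empty paths, fallback blocking" trick for a catch-all route (the file name and CMS lookup are illustrative assumptions):
    
    // pages/[...page].tsx - illustrative "empty paths, fallback: 'blocking'" example
    import type { GetStaticPaths, GetStaticProps } from 'next';
    
    type Props = { body: string };
    
    export const getStaticPaths: GetStaticPaths = async () => ({
      // no pages are generated at build time...
      paths: [],
      // ...but any requested path is rendered on first hit (like SSR), then cached
      fallback: 'blocking',
    });
    
    export const getStaticProps: GetStaticProps<Props> = async ({ params }) => {
      const slug = (params?.page as string[] | undefined)?.join('/') ?? '';
      // hypothetical CMS lookup for this slug
      const res = await fetch(`https://cms.example.com/api/pages/${slug}`);
      if (!res.ok) return { notFound: true, revalidate: 60 };
      const content = await res.json();
      return { props: { body: content.body }, revalidate: 60 };
    };
    
    export default function Page({ body }: Props) {
      return <div dangerouslySetInnerHTML={{ __html: body }} />;
    }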

    Conclusion

    In conclusion, if you want to use Next.js with Docker:
    • No SSG? Use "runtime configuration" for environment variables and follow the usual multi-stage build approach in the Dockerfile
    • SSG for dynamic paths only? Use "runtime configuration" for environment variables, use the "empty paths, fallback blocking" trick to skip SSG at build time, then use "on demand revalidation" after the container starts to populate the cache
    • SSG for fixed routes? Embed a "build" step into the container startup
    As mentioned, if Vercel releases a feature to enable "run time only SSG", that would become the best option for all. UPDATE: since the time of writing, the new App Router has become the recommended approach to structuring your Next.js code, so getStaticProps and getServerSideProps are gone, but many of the same concepts apply. There's an ongoing discussion on GitHub that may be of interest here: Ability to skip static pages generating at build time #46544. And the way to opt out of SSG and use SSR is now possible using this: Route Segment Config
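    For reference, opting a route out of static rendering with the App Router's Route Segment Config looks roughly like this (a minimal sketch; the route and revalidate value are illustrative):
    
    // app/example/page.tsx - illustrative Route Segment Config usage
    // force this route to render on every request, i.e. opt out of SSG
    export const dynamic = 'force-dynamic';
    
    // alternatively, keep static rendering but refresh at most every 60 seconds (ISR-style):
    // export const revalidate = 60;
    
    export default function Page() {
      return <h1>Rendered at request time</h1>;
    }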
  • Published on
    NPM scripts are often used to abbreviate longer commands with many arguments, or to sequence multiple commands that are required to compile or run a Node based system. Sometimes they can get complex - when there are multiple moving parts to consider, such as conditionally setting environment variables, conditional compilation steps, or multiple sequential and parallel stages of the build. This is especially true if you're trying to write a startup script for automation tests, which probably need to spin up mocked APIs, build and start the system under test, wait for startup to complete, then execute the tests. It's equally useful when you have multiple configurations of the application and its supporting local development services (such as running a mock, changing an environment flag etc.).
    
    This can be achieved using daisy-chained NPM scripts and helper packages such as "npm-run-all", "concurrently" and "start-server-and-test", but I found that this doesn't scale well if you have multiple options for developers to choose from when running the app. A contrived example of some options you might want to give developers:
    
    - run in "development" or "production" mode (usually determined by running either "npm run dev" or "npm run build/start")
    - setting environment variables based on the mode (e.g. you might have used "cross-env-shell" in your npm script)
    - start a mocked API, or don't (maybe you'd spin this up asynchronously using "npm-run-all" or "concurrently")
    - build for "flavour 1" or "flavour 2" of the code (say, for example, you can choose whether to compile for REST or GraphQL at build time)
    
    You might also have some automation tests that need to:
    
    - start a mocked API (and wait for it to load)
    - build and run the system under test in production mode (and wait for it to load)
    - run the tests (in local or BrowserStack mode)
    - kill all the child processes
    
    Aside from the complexities of describing all of the above in NPM scripts, it gets very repetitive with each variation. Even with the limited choices described above you get the Cartesian product of all combinations, represented as scripts, e.g.:
    
    "development mode", "with mock", "GraphQL"
    "development mode", "with mock", "REST"
    "development mode", "no mock", "GraphQL"
    "development mode", "no mock", "REST"
    "production mode", "with mock", "GraphQL"
    "production mode", "with mock", "REST"
    "production mode", "no mock", "GraphQL"
    "production mode", "no mock", "REST"
    
    You can remove one level of complexity by using NPM configuration settings to switch between modes, which live in your .npmrc file, such as:
    mock-api='true'
    api-mode='REST'
    use-browserstack='false'
    
    Then you only need the usual three scripts in your package.json ("dev", "build", "start" - for example when using Next.js) that take the config settings into account. By using settings in the .npmrc file we get all the power of NPM configuration: by default it will use the values defined in .npmrc, but these can be overridden with environment variables (so maybe you'd set different defaults in your Dockerfile or build pipeline than what local devs might use), or with CLI arguments (so maybe you'd do this when spinning up the system from an automation test suite).
    
    The next complexity to solve is how to interpret the NPM configuration settings, such that the right build steps are executed, sequentially or in parallel as appropriate. This is where I decided that NPM scripts still weren't the best choice and that it would be easier to write that logic as TypeScript (and have npm simply execute the script). The example below shows how this can work for the automation test scenario, making use of a "test-runner" script and re-using the "build" and "start" scripts from the system under test.
    
    package.json snippet:
    "test:automation": "cross-env-shell NODE_ENV='production' \"ts-node --project ./tsconfig.commonjs.json test-runner.ts\"",
    
    tsconfig snippet:
    "compilerOptions": {
    	"module": "commonjs"
    }
    
    test-runner.ts:
    
    #!/usr/bin/env ts-node
    import { subProcess, subProcessSync } from 'subspawn';
    
    // example of reading NPM config from within a script
    let serverHost = 'localhost';
    if (process.env.npm_config_use_browserstack === 'true') {
      serverHost = 'bs-local.com';
    }
    
    // example of overriding NPM config from within a script (to disable the system's built-in mock API when its script runs)
    process.env.npm_config_mock_api = 'false';
    
    // example of setting general environment variables used by your application (override api to point at the mock)
    process.env.NEXT_PUBLIC_API_HOST = `http://${serverHost}:5038`;
    
    // example of spinning up background services and waiting for them to load
    subProcess('automation-tests', 'npm run start-wiremock', true);
    subProcessSync('npx wait-on tcp:5038', false);
    
    // example of re-using scripts that exists for spinning up the system
    process.chdir('../../src');
    
    if (process.env.npm_config_skip_build !== 'true') {
      process.env.PUBLIC_URL = `http://${serverHost}:3000`;
    
      require('../../src/build'); // pull in the build script for the SUT
    }
    // start the SUT
    require('../../src/start'); // pull in the start script for the SUT
    process.chdir('../tests/integration');
    
    // begin the test execution
    subProcessSync('npm run execute-tests', true);
    
    // exiting the process will also kill all the background processes
    process.exit(0);
    
    export {};
    
    
    You'll notice the use of "npx wait-on" which is a handy package for testing when a dependency has become available. You'll also notice the use of "subspawn" which is an NPM package I created specifically for this use case to address the complexities of spawning, killing and integrating the stdout of child processes in Node in a cross-platform way.