• Published on
    If you are building integration tests for ASP.NET WebAPIs using Microsoft's WebApplicationFactory and your API is secured with JWT bearer authentication, then at some point you'll probably want to mock the JWT authentication mechanism. Of course, you always have the option of including the "real" JWT issuer in the scope of your tests, but it can get quite tricky to automate the security checks of real IDPs! The first step is to create a class that will handle issuing the "fake" JWT tokens:
    public static class MockJwtTokens
    {
        public static string Issuer { get; } = Guid.NewGuid().ToString(); // random issuer
        public static SecurityKey SecurityKey { get; }
        public static SigningCredentials SigningCredentials { get; }

        private static readonly JwtSecurityTokenHandler TokenHandler = new();
        private static readonly RandomNumberGenerator Rng = RandomNumberGenerator.Create();
        private static readonly byte[] Key = new byte[32];

        static MockJwtTokens()
        {
            Rng.GetBytes(Key); // fill the signing key with random bytes
            SecurityKey = new SymmetricSecurityKey(Key) { KeyId = Guid.NewGuid().ToString() };
            SigningCredentials = new SigningCredentials(SecurityKey, SecurityAlgorithms.HmacSha256);
        }

        public static string GenerateJwtToken(IEnumerable<Claim> claims)
        {
            return TokenHandler.WriteToken(new JwtSecurityToken(Issuer, "YOUR-EXPECTED-AUDIENCE",
                claims, null, DateTime.UtcNow.AddMinutes(20), SigningCredentials));
        }
    }
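    For orientation, a token produced by `GenerateJwtToken` with a single "email" claim decodes to a payload along these lines (the issuer GUID and expiry timestamp below are illustrative):

    ```json
    {
      "email": "user@example.com",
      "iss": "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
      "aud": "YOUR-EXPECTED-AUDIENCE",
      "exp": 1716300000
    }
    ```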
    NOTE: If you validate the "audience" field in your system, then ensure the audience is set as you'd expect. The 2nd step is to replace the configuration of the SUT (system under test) to start trusting your "fake" issuer instead of the real one:
    public class WebTestFixture : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureTestServices(services =>
            {
                services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, options =>
                {
                    var config = new OpenIdConnectConfiguration
                    {
                        Issuer = MockJwtTokens.Issuer
                    };
                    config.SigningKeys.Add(MockJwtTokens.SecurityKey); // trust the mock signing key
                    options.Configuration = config;
                });
            });
        }
    }
    With that in place, you simply need to add a token with claims of your choice to your fixture's HttpClient. For example, to add a token with an "email" claim:
    // extension method for adding a JWT with an email claim
    public static HttpClient WithUserCredentials(this HttpClient client, string jwtEmail)
    {
        if (string.IsNullOrEmpty(jwtEmail))
            return client;

        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer",
            MockJwtTokens.GenerateJwtToken(new[]
            {
                new Claim("email", jwtEmail)
            }));
        return client;
    }
    // example usage
    using var client = fixture.CreateClient().WithUserCredentials(theEmail);
    Using extension methods is optional; structure it however you like and include as many claims as are required for your test scenario.
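    Putting it all together, a test might look like the sketch below (this assumes xUnit; the `/secured` endpoint is hypothetical):

    ```csharp
    public class SecuredEndpointTests : IClassFixture<WebTestFixture>
    {
        private readonly WebTestFixture _fixture;

        public SecuredEndpointTests(WebTestFixture fixture) => _fixture = fixture;

        [Fact]
        public async Task Get_WithValidToken_ReturnsSuccess()
        {
            // hypothetical endpoint secured with [Authorize]
            using var client = _fixture.CreateClient().WithUserCredentials("user@example.com");
            var response = await client.GetAsync("/secured");
            response.EnsureSuccessStatusCode();
        }
    }
    ```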
  • Published on
    When you're building .NET APIs, you're comfortably surrounded by tooling that generates data and operation contract documentation for you. The combination of SwaggerGen and SwaggerUI can automatically present consumers of your API with everything they need to understand the interface and to scaffold clients/DTOs - all based on the generated OpenAPI specification file, with very little work from the developer.

    However, when you're building something that isn't an "API" in the sense of "HTTP request/response" - such as serverless functions that process incoming messages - you lose an awful lot of that tooling. In some eventing systems (such as Kafka) you have a schema registry, so you can use that to enforce validity and versioning of message data contracts for producers and consumers of that data (for example with an Avro schema). For simpler setups with no schema registry, it's still nice to have automatically generated documentation and schemas based on your source code.

    The reason I suggest generating your documentation from source is that it ensures correctness - there's only one thing worse than missing documentation, and that's wrong documentation. Also, assuming you're versioning your software, you'll have corresponding versioned documentation sitting alongside it. The two goals of my approach are:
    • Produce JSON schema files for machine consumption
    • Produce Markdown files for human consumption

    JSON Schema

    JSON schemas are a great way to share data contracts in a language-agnostic way, so consumers in any language can scaffold types from your schemas. They can also be used to validate incoming JSON before de-serializing it. Not only that, but they plug into the OpenAPI specification, so you can reference them in your spec file (for example if you're exposing the data contract via an API Gateway endpoint). You can easily generate a JSON schema for a C# type using NJsonSchema, for example:
    var typesToGenerate = GetTypesToGenerate(); // todo: implement creating the list of types, e.g. using typeof(MyDotnetModels.SomeType).Assembly.GetTypes()
    if (!Directory.Exists("./generated"))
        Directory.CreateDirectory("./generated");

    foreach (var type in typesToGenerate)
    {
        var schema = JsonSchema.FromType(type);

        // add required to all non-nullable (value type) props
        foreach (var propertyInfo in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var isNullable = Nullable.GetUnderlyingType(propertyInfo.PropertyType) != null;
            if (!isNullable)
                schema.RequiredProperties.Add(propertyInfo.Name);
        }

        var schemaJson = schema.ToJson();
        File.WriteAllText($"./generated/{type.Name}.schema.json", schemaJson);
    }
    Build that code snippet into a console app (e.g. called JsonSchemaGen) and you can now execute it whenever you build your source code for deployment, and it will generate JSON schema files in the bin/generated folder.
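    For illustration, for a hypothetical type such as `public class Person { public string Name { get; set; } public int? Age { get; set; } }`, the generated schema would look something like this (exact output varies by NJsonSchema version):

    ```json
    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Person",
      "type": "object",
      "required": [
        "Name"
      ],
      "properties": {
        "Name": {
          "type": "string"
        },
        "Age": {
          "type": [
            "integer",
            "null"
          ]
        }
      }
    }
    ```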


    Markdown

    Now that we have JSON schemas, it's easy to generate markdown files using @adobe/jsonschema2md. Simply pass the location of the schema files with any configuration options of your choice, e.g.
    npx -y @adobe/jsonschema2md -h false \
    	-d [path_to_schema_files] \
    	-o markdown -x -
    That will generate a README.md and an .md file for each schema, with human-readable content describing your data contracts.

    Using a makefile to combine it all

    This part is optional, but it's nice to have the commands necessary to perform the above steps checked in to source control and to use the power of "make" to run the necessary steps only when things change, e.g.
    .PHONY: clean build
    schemas: [relative_path_to_dotnet_source]/MyDotnetModels
    	cd JsonSchemaGen && \
    	dotnet build -c Release && \
    	cd bin/Release/net8.0 && ./JsonSchemaGen && \
    	mkdir -p ../../../../schemas && mv generated/* ../../../../schemas/
    markdown: schemas
    	npx -y @adobe/jsonschema2md -h false \
    	-d ./schemas \
    	-o markdown -x -
    build: markdown
    clean:
    	rm -rf schemas markdown
    Summary of the steps:
    • schemas - depends on the dotnet models source code directory, so this runs whenever any file in that directory changes
      • Build the console app (including a reference to models) in release mode
      • Execute the built console app
      • Move the generated files into another folder to make them easier to find
    • markdown - depends on the schemas, so this runs whenever the schemas change
      • Use npx to execute the @adobe/jsonschema2md and output to a directory called 'markdown'
    You can now incorporate "make build" of your documentation makefile into your CI process and store the "markdown" and "schemas" directories as build artifacts, alongside your system build artifacts. They are then ready to be shipped with the system when it's released (e.g. put in a storage account or hosted as a static website).
  • Published on
    When building CI/CD pipelines, it is often the case that you'd like to "tokenize" a configuration file, so that the values in the file can be calculated during the build/release execution process. This can be for all kinds of reasons, such as ingesting environment variables or basing values on the outputs of other CLI tools, such as terraform (e.g. reading IDs of recently created infrastructure). Rather than separating the key/value "tokens" from the "token calculation" logic, I wanted a way to embed Bash scripts directly into my JSON file and then have a script interpret the values into an output file. For example, the JSON file might look like:
            "Key": "verbatim",
            "Value": "just a string"
            "Key": "environment-variable",
            "Value": "$HOME"
            "Key": "shell-script",
            "Value": "$(dotnet --version)"
    With the invokable tokens defined, the below script can be run against the JSON file in order to parse and execute the tokens:
    echo "[" > "$tempFile"
    itemCount=$(jq '. | length' $jsonFile)
    # Read and process the JSON file line by line.
    jq -c '.[]' $jsonFile | while read -r obj; do
        key=$(echo "$obj" | jq -r '.Key')
        value=$(echo "$obj" | jq -r '.Value')
        # Check if value needs command execution
        if [[ "$value" == \$\(* ]]; then
            # Remove $() for command execution
            command=$(echo "$value" | sed 's/^\$\((.*)\)$/\1/')
            newValue=$(eval $command)
        elif [[ "$value" == \$* ]]; then
            # It's an environment variable
            varName=${value:1} # Remove leading $
            # Plain text, no change needed
        # Update the JSON object with the new value
        updatedObj=$(echo "$obj" | jq --arg newValue "$newValue" '.Value = $newValue')
        # Append the updated object to the temp file
        echo "$updatedObj" >> "$tempFile"
        # Add a comma except for the last item
        if [[ $currentIndex -lt $itemCount ]]; then
            echo ',' >> "$tempFile"
    echo "]" >> "$tempFile"
    Running the command looks like
    ./json_exec.sh example.json example.out.json
    And the output then looks as follows:
      "Key": "verbatim",
      "Value": "just a string"
      "Key": "environment-variable",
      "Value": "/home/craig"
      "Key": "shell-script",
      "Value": "8.0.202"
  • Published on
    Imagine you want to create a very generic SpecFlow step definition that can be used to verify that a certain HttpRequestMessage was sent by your system, which uses HttpClient. You want to check that your system calls the expected endpoint, with the expected HTTP method, and that the body data is as expected. The gherkin syntax for the step might be something like:
    Then the system should call 'POST' on the 3rd party 'hello-world' endpoint, with the below data
      | myBodyParam1 | myBodyParam2 |
      | Hello        | World        |
    C# being a strongly typed language, it's actually not that straightforward to make a robust comparison of the JSON that was sent in a request, with a Table that is supplied to SpecFlow. However, I did manage to come up with such a way, which is documented below.
    [Then(@"the system should call '(.*)' on the 3rd party '(.*)' endpoint, with the below data")]
        public void ThenTheSystemShouldCallOnThe3rdPartyEndpointWithTheBelowData(HttpMethod httpMethod, string endpointName,
            Table table)
            var expectedRequest = table.CreateDynamicInstance();
              async message => message.RequestUri!.AbsoluteUri.EndsWith(endpointName) &&
                await FluentVerifier.VerifyFluentAssertion(async () =>
                    await message.Content!.ReadAsStringAsync(),
    There's several parts to the magic, in order:
    1. `table.CreateDynamicInstance` - this extension comes from the SpecFlow.Assist.Dynamic NuGet package, which allows you to create an anonymous type instance from a SpecFlow table.
    2. `_mock.VerifyRequest` - this extension comes from the Moq.Contrib.HttpClient, which isn't strictly necessary but is a nice way to manage your HttpClient's mocked message handler and make assertions on it.
    3. `await FluentVerifier.VerifyFluentAssertion` - uses this trick for making FluentAssertions inside of a Moq Verify call (so you can use equivalency checks rather than equality).
    4. `JsonConvert.DeserializeAnonymousType` - allows you to deserialize JSON to an anonymous type based on a donor "shape" (which we get from the "expected" anonymous type)
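    To make the anonymous-type deserialization and equivalency check concrete in isolation, here's a minimal sketch (illustrative values; assumes Newtonsoft.Json and FluentAssertions are referenced):

    ```csharp
    // Donor "shape" mirroring the SpecFlow table columns
    var shape = new { myBodyParam1 = "", myBodyParam2 = "" };

    // JSON as captured from the outgoing request body
    var json = @"{ ""myBodyParam1"": ""Hello"", ""myBodyParam2"": ""World"" }";

    // Deserialize to the anonymous type and assert equivalency rather than equality
    var actual = JsonConvert.DeserializeAnonymousType(json, shape);
    actual.Should().BeEquivalentTo(new { myBodyParam1 = "Hello", myBodyParam2 = "World" });
    ```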
  • Published on
    Cypress is a great choice of tool for writing e2e UI automation tests for web applications. One of the things you'll invariably want to do when writing tests is stub the dependencies of your system, so that you can isolate the SUT and allow for performant test runs. If you're testing a purely client-side code-base, like React, then the built-in cy.intercept might do the job. This will intercept calls made from the browser to downstream APIs and allows you to stub those calls before they leave the client.

    However, for a Next.js application that includes server-side data fetching (e.g. during SSR), or where you have implemented Next.js APIs (e.g. for "backend for frontend" / "edge" APIs) that you want to include as part of the "system under test", you need another option. The way the Cypress Network Requests documentation reads, it seems like the only choices are mocking in the browser using cy.intercept or spinning up your entire dependency tree - but there is a 3rd option: mocking on the server.

    Mocking server-side calls isn't a new paradigm if you're used to automation testing C# code or have used other UI testing frameworks, so I won't go into major detail on the topic, but I wanted to write this article particularly for Cypress because the way you interact with the external mocks is different in Cypress. To mock a downstream API, you can spin up stub servers that allow you to interact with them remotely, such as WireMock or, in this case, "mockserver". What you need to achieve comprises these steps:
    1. Before the test run - spin up the mock server on the same address/port as the "real" server would run (actually you can change configs too by using a custom environment, but to keep it simple let's just use the default dev setup)
    2. Before the test run - spin up the system under test
    3. Before an "act" (i.e. during "arrange") you want to setup specific stubs on the mock server for your test to use
    4. During an "assert" you might want to verify how the mock server was called
    5. At the end of a test run, stop the mock server to free up the port
    In order to orchestrate spinning up the mock server and the SUT, I'd recommend scripting the test execution - which you can read more about here - below shows an example script to achieve this: test-runner.mjs
    #!/usr/bin/env node
    import { subProcess, subProcessSync } from 'subspawn';
    import waitOn from 'wait-on';
    const cwd = process.cwd();
    // automatically start the mock
    subProcess('test-runner', 'npm run start-mock', true);
    await waitOn({ resources: ['tcp:localhost:5287'], log: true }, undefined);
    // start the SUT
    subProcess('test-runner', 'make run', true);
    await waitOn({ resources: ['http://localhost:3000'] }, undefined);
    // run the tests
    subProcessSync("npm run cy:run", true);
    // automatically stop the mock
    subProcess('test-runner', 'npm run stop-mock', true);
    That suits full test runs and CI builds, but if you're just running one test at a time from your IDE you might want to manually start and stop the mock server from the command line, which you can do by running the "start-mock" and "stop-mock" scripts from the CLI, hence why they have been split out. start-mock.js
    const mockServer = require('mockserver-node');

    mockServer.start_mockserver({ serverPort: 5287, verbose: true })
      .then(() => {
        console.log('Mock server started on port 5287');
      })
      .catch(err => {
        console.error('Failed to start mock server:', err);
      });
    stop-mock.js

    const mockServer = require('mockserver-node');

    mockServer.stop_mockserver({ serverPort: 5287 })
      .then(() => {
        console.log('Mock server stopped');
      })
      .catch(err => {
        console.error('Failed to stop mock server:', err);
      });
    "scripts": {
        "start-mock": "node start-mock.js",
        "stop-mock": "node stop-mock.js",
        "test": "node test-runner.mjs",
        "cy:run": "cypress run"
    With the mock server and SUT running you can now interact with them during your test run, however in Cypress the way to achieve this is using custom tasks. Below shows an example task file that allows you to create and verify mocks against the mock-server: mockServerTasks.js
    const { mockServerClient } = require('mockserver-client');

    const verifyRequest = async ({ method, path, body, times = 1 }) => {
      try {
        await mockServerClient('localhost', 5287).verify({
          method: method,
          path: path,
          body: {
            type: 'JSON',
            json: JSON.stringify(body),
            matchType: 'STRICT'
          }
        }, times);
        return { verified: true };
      } catch (error) {
        console.error('Verification failed:', error);
        return { verified: false, error: error.message };
      }
    };

    const setupResponse = ({ path, body, statusCode }) => {
      return mockServerClient('localhost', 5287).mockSimpleResponse(path, body, statusCode);
    };

    module.exports = { verifyRequest, setupResponse };
    This can then be imported into your Cypress config:
    const { defineConfig } = require("cypress");
    const { verifyRequest, setupResponse } = require('./cypress/tasks/mockServerTasks');

    module.exports = defineConfig({
      e2e: {
        setupNodeEvents(on, config) {
          on('task', { verifyRequest, setupResponse });
        },
        baseUrl: 'http://localhost:3000'
      }
    });
    Finally, with the custom tasks registered with Cypress, you can use the mock server in your tests, e.g.:
    it('correctly calls downstream API', () => {
        // setup API mock
        cy.task('setupResponse', { path: '/example', body: { example: 'response' }, statusCode: 200 }).should('not.be.undefined');

        // submit the form (hypothetical custom command that triggers the behaviour we are testing)
        cy.submitExampleForm();
        cy.contains('Example form submitted successfully!').should('be.visible');

        // verify API call
        cy.task('verifyRequest', {
          method: 'POST',
          path: '/example',
          body: {
            example: 'request'
          },
          times: 1
        }).should((result) => {
          expect(result.verified, result.error).to.be.true;
        });
    });