Blog

  • Published on
    When writing BDD tests (for example using Reqnroll, the successor of SpecFlow) you don't want to "bloat" your tests with irrelevant data, such as GUIDs of IDs - or really, any data that doesn't play a direct role in the success or failure of the test case! For example:
    Given a user registers for an account using the below profile information
    | Email            | Firstname | Surname |
    | user@domain.com  | Hello     | World   |
    When a request is made by "user@domain.com" to retrieve their profile information
    Then the below profile information is returned
    | Email            | Firstname | Surname |
    | user@domain.com  | Hello     | World   |
    
    Under the hood, your user profile DTO might look something like this:
    
    public class UserProfile {
        public Guid Id { get; set; }
        public string Email { get; set; }
        public string Firstname { get; set; }
        public string Surname { get; set; }
        public DateTime LastUpdated { get;set; }
    }
    
    Depending on your API, you might need to "generate" some values for your request object too - I won't cover that here as there are several ways to achieve it (e.g. using AutoFixture to fill in the missing properties, custom builders, merging object instances, etc.). When it comes to assertions - whether you're using FluentAssertions or free alternatives like DeepEqual - you only want to compare the properties for the data that was defined in the test when checking the "expected" object against the "actual" one. Your test step definition might look something like this:
    
    [Then(@"the below profile information is returned")]
    public async Task ThenTheBelowProfileInformationIsReturned(Table table)
    {
        var expected = table.CreateInstance<UserProfile>();
    
        var actual = await _lastResponse.Content.ReadFromJsonAsync<UserProfile>();
    
        actual.WithDeepEqual(expected)
            .IgnoreUnmatchedProperties()
            .IgnoreProperty(p => !table.Header.Contains(p.Name))
            .Assert();
    }
    
    The above will work for collections too when you're comparing the entire set. In some cases you might be checking if a collection "contains" an object that "partially matches" another, for example:
    Given the system already contains several recent users
    And a user registers for an account using the below profile information
    | Email            | Firstname | Surname |
    | user@domain.com  | Hello     | World   |
    When the new users report is generated
    Then the below profiles are included
    | Email            | Firstname | Surname |
    | user@domain.com  | Hello     | World   |
    
    In which case you can combine the partial matcher with a "contains" assertion (or check the entire sequence if it must only contain the expected items - a sketch of that follows the step definition below), e.g.:
    
    [Then(@"the below profiles are included")]
    public async Task ThenTheBelowProfilesAreIncluded(Table table)
    {
        var expected = table.CreateSet<UserProfile>().ToList();
    
        var actual = await _lastResponse.Content.ReadFromJsonAsync<List<UserProfile>>();
    
        actual.ShouldContain(a => expected.Any(e => a.WithDeepEqual(e)
            .IgnoreUnmatchedProperties()
            .IgnoreProperty(p => !table.Header.Contains(p.Name))
            .Compare()));
    }
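
    For the stricter case where the result must only contain the expected profiles, you can assert over the whole sequence instead - a minimal sketch, assuming Shouldly (which provides the ShouldContain used above):

    actual.Count.ShouldBe(expected.Count);
    actual.ShouldAllBe(a => expected.Any(e => a.WithDeepEqual(e)
        .IgnoreUnmatchedProperties()
        .IgnoreProperty(p => !table.Header.Contains(p.Name))
        .Compare()));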
    
    This can be extended to support "inline complex properties". When you have "nested" objects in BDD you have a few options for interrogating the "child" objects, such as splitting out multiple step definitions or somehow "stringifying" the complex type into the parent row. If you opt for the latter and choose JSON as your "stringification" format, you might write a test like the below:
    Given the system already contains several recent users
    And a user registers for an account using the below profile information
    | Email            | Firstname | Surname | Address                                                                     |
    | user@domain.com  | Hello     | World   | { "Street": "123 Test Street", "City": "Testtown", "Postcode": "TE57 7WN" } |
    When the new users report is generated
    Then the below profiles are included
    | Email            | Firstname | Surname | Address                                                                     |
    | user@domain.com  | Hello     | World   | { "Street": "123 Test Street", "City": "Testtown", "Postcode": "TE57 7WN" } |
    
    Using the same "UserProfile" object as before, but with an added "Address" property with the below structure:
    
    public class Address {
        public string Street { get; set; }
        public string City { get; set; }
        public string Postcode { get; set; }
        public bool IsVerified { get; set; }
    }
    
    In this example there's an extra "IsVerified" property that does not take part in the test. In order to A) build the expectation and B) partially match the child object, you can write the below:
    
    [Then(@"the below profile information is returned")]
    public async Task ThenTheBelowProfileInformationIsReturned(Table table)
    {
        var expected = table.CreateSet<UserProfile>().ToList();
        var currentAddressProps = new HashSet<string>();
    
        if (table.ContainsColumn("Address"))
        {
            for (var i = 0; i < table.Rows.Count; i++)
            {
                var row = table.Rows[i];
                if (!string.IsNullOrWhiteSpace(row["Address"]) && row["Address"] != "<null>")
                {
                    using var jsonDoc = JsonDocument.Parse(row["Address"]);
                    // ToPascalCase is an assumed string helper that converts the JSON property names
                    // to match the C# property casing (a sketch of one follows this step definition)
                    foreach (var propName in jsonDoc.RootElement.EnumerateObject().Select(p => p.Name.ToPascalCase()))
                    {
                        currentAddressProps.Add(propName);
                    }
                    
                    expected[i].Address = jsonDoc.Deserialize<Address>();
                }
            }
        }
    
        var actual = await _lastResponse.Content.ReadFromJsonAsync<List<UserProfile>>();
    
        actual.ShouldContain(a => expected.Any(e => a.WithDeepEqual(e)
            .IgnoreUnmatchedProperties()
            .IgnoreProperty(p =>
                p.DeclaringType == typeof(UserProfile) && !table.Header.Contains(p.Name) ||
                p.DeclaringType == typeof(Address) && !currentAddressProps.Contains(p.Name))
            .Compare()));
    }
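
    The "ToPascalCase" call above isn't part of the BCL - a minimal sketch of such a helper, assuming the JSON property names are simple camelCase identifiers, might be:

    public static class StringCasingExtensions
    {
        // naive camelCase -> PascalCase conversion, enough for single-word JSON property names
        public static string ToPascalCase(this string value) =>
            string.IsNullOrEmpty(value) ? value : char.ToUpperInvariant(value[0]) + value[1..];
    }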
    
  • Published on
    A basic template to follow when creating an AWS Lambda function in dotnet/C#: it adds IConfiguration (from files and environment variables) and creates the DI container with the basic AWS/Lambda integration wired up. Function.cs (the below example shows SQS integration - change accordingly):
    
    [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
    
    namespace YourNS;
    
    public class Function()
    {
        private static readonly IServiceProvider ServiceProvider = new Startup().BuildServiceProvider();
    
        public async Task FunctionHandler(SQSEvent evnt, ILambdaContext _)
        {
            using var scope = ServiceProvider.CreateScope();
    
            var processor = scope.ServiceProvider.GetRequiredService<IProcessor>();
            await processor.ProcessAsync(evnt);
        }
    }
    
    The service provider is stored in a static field, which provides the closest thing to a "Singleton" scope for your DI services - when AWS re-uses a Lambda instance you'll get some benefit from keeping your singletons alive. Creating a "scope" at the top of the dependency tree for processing the request adds the ability to use scoped/transient DI services "per request" (for SQS you might also choose to create a scope for each record in the event if you want more granularity - see the sketch at the end of this post). Startup.cs:
    
    public class Startup
    {
        public IConfiguration Configuration { get; }
    
        public Startup()
        {
            var currentDirectory = Directory.GetCurrentDirectory();
            var environment = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT");
            
            var configurationBuilder = new ConfigurationBuilder()
                .SetBasePath(currentDirectory)
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{environment}.json", true)
                .AddEnvironmentVariables();
    
            Configuration = configurationBuilder.Build();
        }
    
        public IServiceProvider BuildServiceProvider()
        {
            var serviceCollection = new ServiceCollection();
            var options = Configuration.GetAWSOptions();
            serviceCollection.AddDefaultAWSOptions(options);
    
            serviceCollection.AddLogging(builder =>
            {
                builder.AddLambdaLogger(new LambdaLoggerOptions(Configuration));
    
                if (!Enum.TryParse<LogLevel>(Environment.GetEnvironmentVariable("AWS_LAMBDA_HANDLER_LOG_LEVEL") ?? "Information", out var absoluteMinimumLevel))
                {
                    absoluteMinimumLevel = LogLevel.Information;
                }
    
                builder.SetMinimumLevel(absoluteMinimumLevel);
            });
    
            serviceCollection.AddTransient<IProcessor, Processor>();
    
            // REST OF YOUR DEPENDENCIES HERE
    
            return serviceCollection.BuildServiceProvider();
        }
    }
    
    This class builds the IConfiguration instance from the various sources (add/remove as appropriate for your use-case). When the "BuildServiceProvider" method is called the DI container is set up, including wiring up the basic AWS/Lambda options and logging. The "IProcessor" is the entrypoint class called by FunctionHandler and becomes the top of the dependency graph for your application logic. For more information about the logger setup and how to configure the log levels in your appsettings.json file, see my other blog post here: https://www.craigwardman.com/blog/dotnet-lambda-logger-levels
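
    As mentioned above, for SQS you might prefer a scope per record rather than per event - a minimal sketch of that variation (where "IRecordProcessor" is a hypothetical interface that handles a single SQSMessage):

    public async Task FunctionHandler(SQSEvent evnt, ILambdaContext _)
    {
        foreach (var record in evnt.Records)
        {
            // a fresh DI scope per SQS record, so scoped services don't leak state between messages
            using var scope = ServiceProvider.CreateScope();

            var processor = scope.ServiceProvider.GetRequiredService<IRecordProcessor>();
            await processor.ProcessAsync(record);
        }
    }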
  • Published on
    When you are using the Microsoft ILogger abstraction in your C#/dotnet code and have an AWS Lambda composition root, the "Amazon.Lambda.Logging.AspNetCore" package is a useful way to wire up the Lambda logging implementation. Start by adding a reference to that NuGet package and then add the below setup to your DI container bootstrapping code:
    
    context.Services.AddLogging(builder =>
    {
        builder.AddLambdaLogger(new LambdaLoggerOptions(configuration));
    
        if (!Enum.TryParse<LogLevel>(Environment.GetEnvironmentVariable("AWS_LAMBDA_HANDLER_LOG_LEVEL") ?? "Information", out var absoluteMinimumLevel))
        {
            absoluteMinimumLevel = LogLevel.Information;
        }
    
        builder.SetMinimumLevel(absoluteMinimumLevel);
    });
    
    You must pass in an IConfiguration instance, built using "ConfigurationBuilder" with sources such as appsettings.json added, and with the below config section present:
    
    {
      "Lambda.Logging": {
        "LogLevel": {
          "Default": "Debug",
          "Microsoft": "Warning"
        }
      }
    }
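
    If you don't already build an IConfiguration elsewhere, a minimal sketch of constructing one (assuming the Microsoft.Extensions.Configuration JSON and environment variable providers are referenced, and that appsettings.json is copied to the output directory):

    // build IConfiguration from appsettings.json plus environment variables
    var configuration = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables()
        .Build();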
    
    It's worth noting that while you can have granular control over the log levels by namespace (plus Default) in the config, there is still an "absolute minimum" enforced by both the LambdaLogger and Microsoft's logging framework which, when not otherwise configured, defaults to "Information". This means that "by default" you can't opt in to any log output below this level, regardless of the configuration. In order to override the "absolute minimum" (to include lower levels of log output), you can set the "AWS_LAMBDA_HANDLER_LOG_LEVEL" environment variable. This is automatically consumed internally by the AWS LambdaLogger and, in the above code, is propagated to the Microsoft logging builder so that both start from the same minimum and then apply the namespace-level filtering on top.
  • Published on
    If you are building integration tests for ASP.NET WebAPIs using Microsoft's WebApplicationFactory and your API is secured with JWT bearer authentication, then at some point you'll probably want to mock the JWT authentication mechanism. Of course, you always have the option of including the "real" JWT issuer in the scope of your tests, but it can get quite tricky to automate the security checks of real IDPs! The first step is to create a class that will handle issuing the "fake" JWT tokens:
    
    public static class MockJwtTokens
    {
        public static string Issuer { get; } = Guid.NewGuid().ToString(); // random issuer
        public static SecurityKey SecurityKey { get; }
        public static SigningCredentials SigningCredentials { get; }
    
        private static readonly JwtSecurityTokenHandler TokenHandler = new();
        private static readonly RandomNumberGenerator Rng = RandomNumberGenerator.Create();
        private static readonly byte[] Key = new byte[32];
    
        static MockJwtTokens()
        {
            Rng.GetBytes(Key);
            SecurityKey = new SymmetricSecurityKey(Key) { KeyId = Guid.NewGuid().ToString() };
            SigningCredentials = new SigningCredentials(SecurityKey, SecurityAlgorithms.HmacSha256);
        }
    
        public static string GenerateJwtToken(IEnumerable<Claim> claims)
        {
            return TokenHandler.WriteToken(new JwtSecurityToken(Issuer, "YOUR-EXPECTED-AUDIENCE", claims, null, DateTime.UtcNow.AddMinutes(20), SigningCredentials));
        }
    }
    
    NOTE: If you validate the "audience" field in your system, then ensure the audience is set as you'd expect. The second step is to replace the configuration of the SUT (system under test) so it trusts your "fake" issuer instead of the real one:
    
    public class WebTestFixture() : WebApplicationFactory<Program>
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            base.ConfigureWebHost(builder);
    
            builder.ConfigureTestServices(services =>
            {
                services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, options =>
                {
                    var config = new OpenIdConnectConfiguration()
                    {
                        Issuer = MockJwtTokens.Issuer
                    };
    
                    config.SigningKeys.Add(MockJwtTokens.SecurityKey);
                    options.Configuration = config;
                });
            });
        }
    }
    
    With that in place, you simply need to add the token with claims of your choice to your fixture's HttpClient. For example, to add a token with an "email" claim:
    
    // extension method for adding an email claim JWT
    public static HttpClient WithUserCredentials(this HttpClient client, string jwtEmail)
    {
        if (string.IsNullOrEmpty(jwtEmail))
        {
            return client;
        }
    
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer",
            MockJwtTokens.GenerateJwtToken([
                new Claim("email", jwtEmail)
            ]));
    
        return client;
    }
    
    
    // example usage
    using var client = fixture.CreateClient().WithUserCredentials(theEmail);
    
    Using extension methods is optional - structure it however you like and include as many claims as are required for your test scenario.
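
    Putting it together, a test might look something like the below - a sketch assuming xUnit and a hypothetical protected "/profile" endpoint:

    public class ProfileTests(WebTestFixture fixture) : IClassFixture<WebTestFixture>
    {
        [Fact]
        public async Task Get_profile_with_valid_token_succeeds()
        {
            using var client = fixture.CreateClient().WithUserCredentials("user@domain.com");

            var response = await client.GetAsync("/profile");

            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }

        [Fact]
        public async Task Get_profile_without_token_is_unauthorized()
        {
            using var client = fixture.CreateClient();

            var response = await client.GetAsync("/profile");

            Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
        }
    }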
  • Published on
    When you're building .NET APIs you're comfortably surrounded by tooling that generates data and operation contract documentation automatically for you. The combination of SwaggerGen and SwaggerUI can automatically present consumers of your API with everything they need to understand the interface and to scaffold clients/DTOs - all based on the generated OpenAPI specification file, without much work from the developer. However, when you're building something that isn't an "API" in the sense of "HTTP Request/Response", such as serverless functions that process incoming messages, you lose an awful lot of that tooling. In some eventing systems (such as Kafka) you have a schema registry, so you can use that to enforce validity and versioning of message data contracts for producers and consumers of that data (for example with an Avro schema). For simpler setups with no schema registry, it's still nice to have automatically generated documentation and schemas based on your source code. The reason I suggest generating your documentation from source is that it ensures correctness - there's only one thing worse than missing documentation, and that's wrong documentation. Also, assuming you're versioning your software, you'll have corresponding versioned documentation that sits alongside it. The two goals of my approach are:
    • Produce JSON schema files for machine consumption
    • Produce Markdown files for human consumption

    JSON Schema

    JSON schemas are a great way to share data contracts in a language-agnostic way, so consumers in any language can scaffold types from your schemas. They can also be used to validate incoming JSON before de-serializing it (a sketch of this follows at the end of this section). Not only that, but they plug into the OpenAPI specification, so you can reference them in your spec file (for example if you're exposing the data contract via an API Gateway endpoint). You can easily generate a JSON schema for a C# type using NJsonSchema, for example:
    
    using System.Reflection;
    using NJsonSchema;
    
    var typesToGenerate = GetTypesToGenerate(); // todo: implement creating the list of types, e.g. using typeof(MyDotnetModels.SomeType).Assembly.GetTypes()
    
    if (!Directory.Exists("./generated"))
    {
        Directory.CreateDirectory("./generated");
    }
    
    foreach (var type in typesToGenerate)
    {
        var schema = JsonSchema.FromType(type);
    
        // add required to all non-nullable props
        foreach (var propertyInfo in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var isNullable = Nullable.GetUnderlyingType(propertyInfo.PropertyType) != null;
            if (!isNullable)
            {
                schema.RequiredProperties.Add(propertyInfo.Name);
            }
        }
        
        var schemaJson = schema.ToJson();
        
        File.WriteAllText($"./generated/{type.Name}.schema.json", schemaJson);
    }
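
    The "GetTypesToGenerate" call is left as a todo above - a minimal sketch, assuming your model types are the public, non-abstract classes in the MyDotnetModels assembly (it can sit below the top-level statements as a local function):

    static IEnumerable<Type> GetTypesToGenerate() =>
        typeof(MyDotnetModels.SomeType).Assembly
            .GetTypes()
            .Where(t => t.IsClass && t.IsPublic && !t.IsAbstract);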
    
    Build that code snippet into a console app (e.g. called JsonSchemaGen) and you can execute it whenever you build your source code for deployment; it will generate the JSON schema files in a "generated" folder under the build output (bin).
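
    As mentioned earlier, the same schemas can be used to validate incoming JSON before de-serializing it on the consumer side - a minimal sketch using NJsonSchema again (the schema file name and "incomingJson" variable are illustrative):

    // load the published schema and check a raw JSON payload against it before deserializing
    var schema = await JsonSchema.FromFileAsync("SomeType.schema.json");
    var errors = schema.Validate(incomingJson);

    if (errors.Count > 0)
    {
        throw new InvalidOperationException($"Payload failed schema validation: {string.Join(", ", errors)}");
    }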

    Markdown

    Now that we have JSON schemas, it's easy to generate markdown files using @adobe/jsonschema2md. Simply pass the location of the schema files along with any configuration options of your choice, e.g.
    
    npx -y @adobe/jsonschema2md -h false \
    	-d [path_to_schema_files] \
    	-o markdown -x -
    
    That will generate a README.md plus an .md file for each schema, with human-readable content describing your data contracts.

    Using a makefile to combine it all

    This part is optional, but it's nice to have the commands necessary to perform the above steps checked in to source control and to use the power of "make" to run the necessary steps only when things change, e.g.
    
    .PHONY: clean build
    
    schemas: [relative_path_to_dotnet_source]/MyDotnetModels
    	cd JsonSchemaGen && \
    	dotnet build -c Release && \
    	cd bin/Release/net8.0 && ./JsonSchemaGen && \
    	mkdir -p ../../../../schemas && mv generated/* ../../../../schemas/
    	
    markdown: schemas
    	npx -y @adobe/jsonschema2md -h false \
    	-d ./schemas \
    	-o markdown -x -
    	
    build: markdown
    	
    clean:
    	rm -rf schemas markdown
    
    Summary of the steps:
    • schemas - depends on the dotnet models source code directory, so this runs whenever any file in that directory changes
      • Build the console app (including a reference to models) in release mode
      • Execute the built console app
      • Move the generated files to another folder to make them easier to find
    • markdown - depends on the schemas, so this runs whenever the schemas change
      • Use npx to execute @adobe/jsonschema2md and output to a directory called 'markdown'
    You can now incorporate "make build" of your documentation makefile into your CI process and store the "markdown" and "schemas" directories as build artifacts, alongside your system build artifacts. They are then ready to be shipped with the system when it's released (e.g. put in a storage account or hosted as a static website).