
Blog (pg. 4)

  • Published on
    To set up the context required by the Redux-related React hooks (e.g. useDispatch, useSelector) you need to have your component nested inside a "Provider" component (from the react-redux package). This isn't always possible, as not all applications are built as a single "app" with components nested under a single root. In my case I am using ReactJS.NET together with a CMS to allow the end user to define any combination of pre-defined "components" on a page.

    It turns out that you don't need all components to be nested inside the same "Provider" instance: as long as the "store" itself is a singleton, you can have many "Provider" instances on the page all sharing the same "store". I wanted an easy way to start wrapping my existing components in a "Provider" without having to change too much about my non-Redux application structure. What I came up with was a simple higher-order component (HOC), written as a function, so that I can wrap a component with the Redux provider at the point it is exported, e.g.
    
    import React from 'react';
    import { Provider } from 'react-redux';
    import { store } from './store';
    
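    // Higher-order component: wraps the given component in a react-redux Provider
    // backed by the shared singleton store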
    export const withReduxStore = Component => ({ ...props }) =>
        (<Provider store={store}><Component {...props} /></Provider>)
    
    This assumes you have a singleton store, for example:
    
    import { createStore, applyMiddleware } from 'redux';
    import thunk from 'redux-thunk';
    import rootReducer from './ducks/rootReducer'
    
    const store = createStore(rootReducer, applyMiddleware(thunk));
    
    export { store };
    
    And now we can update a component that previously didn't have access to the store context to give it access:
    
    import React from 'react';
    import { useDispatch } from 'react-redux';
    import { withReduxStore } from './state/withReduxStore.jsx';
    
    const MyExampleComponent = (props) => {
        const dispatch = useDispatch();
    
        return <>
            <button onClick={() => dispatch({hello: "world"})} type="button">Dispatch Something</button>
        </>
    }
    
    export default withReduxStore(MyExampleComponent); // simply wrap the export with a call to "withReduxStore"
    
  • Published on
    I wouldn't necessarily recommend doing something like this in production code, however I find it useful when writing SpecFlow tests - I want the Gherkin to call out a few key properties of a class, and then generate a "valid" instance (one that passes any validation) which still uses the test data supplied. Imagine the following scenario:
    
    public class Customer
    {
       public string Firstname { get; set; }
       public string Surname { get; set; }
       public string EmailAddress { get; set; }       // validation states that this must be a valid email address
    }
    
    // imagine some kind of "valid instance builder" used in testing
    public static class TestCustomerBuilder
    {
        private static readonly Fixture Fixture = new();

        public static Customer AnyValidInstance()
        {
            return Fixture.Build<Customer>()
                .With(c => c.EmailAddress, Fixture.Create<MailAddress>().Address) // make sure this passes validation by default
                .Create();
        }
    }
    
    Now imagine you're writing some Gherkin that doesn't care about email - you're just testing something to do with Firstname and Surname, so you might write:
    Given the create customer request contains the below details
    | Firstname | Surname |
    | Hello     | World   |
    When the create customer endpoint is called
    Then a new customer is created with the following details
    | Firstname | Surname |
    | Hello     | World   |
    
    It's a contrived example, but you should see the point. When it comes to implementing the step definitions I like to use the built-in helpers from SpecFlow rather than "magic strings" as much as possible (it makes the steps more re-usable), so how about the below:
    
    [Given("the create customer request contains the below details")]
    public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
    {
       _testContext.CustomerRequest = table.CreateInstance<Customer>();
    }
    
    The problem with the above is that the created instance won't be valid, on account of it having no email address. You could code around this by manually setting only certain properties, but that reintroduces the re-usability problem. Enter the "model combiner", which is designed to copy all non-null properties from a source instance onto a destination instance, e.g.:
    
    [Given("the create customer request contains the below details")]
    public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
    {
       var testDataInstance = table.CreateInstance<Customer>();
       var validInstance = TestCustomerBuilder.AnyValidInstance();

       ModelCombiner.Combine(testDataInstance, validInstance);
    
       _testContext.CustomerRequest = validInstance;
    }
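
    To make the merge direction concrete, here is a hedged sketch of what the combined instance ends up holding for the example table above (the actual email value is whatever AutoFixture generated):

    // Sketch: source = data from the Gherkin table, destination = the "valid" builder instance.
    // Non-null source properties overwrite the destination; everything else is left intact.
    var testDataInstance = new Customer { Firstname = "Hello", Surname = "World" }; // EmailAddress is null
    var validInstance = TestCustomerBuilder.AnyValidInstance();

    var combined = ModelCombiner.Combine(testDataInstance, validInstance);

    // combined.Firstname    == "Hello" (from the table)
    // combined.Surname      == "World" (from the table)
    // combined.EmailAddress != null    (the generated, valid address from the builder)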
    
    Now the request contains a "valid" customer but also has our specific data taken from the Gherkin. The model combiner class looks as below (based on an idea I saw here: https://stackoverflow.com/questions/8702603/merging-two-objects-in-c-sharp):
    
    public static class ModelCombiner
    {
    	private static readonly HashSet<Type> SupportedTypes = new();
    
    	private static Mapper Mapper { get; } = new(new MapperConfiguration(expression =>
    	{
    		Setup<Customer>(expression);
    		Setup<SomeOtherType>(expression);
    	}));
    
    	public static T Combine<T>(T source, T destination)
    	{
    		if (!SupportedTypes.Contains(typeof(T)))
    			throw new InvalidOperationException(
    				$"Cannot combined unsupported type {typeof(T).FullName}. Please add it to the setup in {nameof(ModelCombiner)}");
    
    		return Mapper.Map(source, destination);
    	}
    
    	private static void Setup<T>(IProfileExpression expression)
    	{
    		SupportedTypes.Add(typeof(T));
    
    		expression.CreateMap<T, T>()
    			.ForAllMembers(opts => opts
    				.Condition((_, _, srcMember) => srcMember != null));
    	}
    }
    
    Another option I found online that looks worth exploring: https://github.com/kfinley/TypeMerger
  • Published on
    Update 2025! With FluentAssertions no longer free software for commercial use, a lot of projects are adopting Shouldly instead. If you are no longer using FluentAssertions but would still like to compare objects based on configurable equivalency (include/exclude members etc.) you can use DeepEqual, which already supports returning a boolean from ".Compare()" and can therefore be used within a Moq Verify (see the sketch at the end of this post).

    ----- Original article: FluentAssertions adds many helpful ways of comparing data in order to check for "equality" beyond a simple direct comparison (for example checking for equivalence across types, across collections, automatically converting types, ignoring elements of types, using fuzzy matching for dates and more). Making a "fluent assertion" on something will automatically integrate with your test framework, registering a failed test if something doesn't quite match. e.g. to compare an object excluding the DateCreated element:
    
    actual.Should()
    	.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated));
    
    However, sometimes the "actual" value you want to make the assertion on is only available as part of a Moq Verify statement, which only supports matching based on a boolean return type. e.g.
    
    myMock.Verify(m => 
    	m.Method(It.Is<MyData>(actual => 
    		actual == expected)));
    
    As you can see above, replacing "==" with a "Fluent" assertion is not possible out of the box. However, there is a trick you can use, by setting up the below helper method:
    
    public static class FluentVerifier
    {
    	public static bool VerifyFluentAssertion(Action assertion)
    	{
    		using (var assertionScope = new AssertionScope())
    		{
    			assertion();
    
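    			// Discard() returns any failures collected by the scope and clears them,
    			// so disposing the scope won't throw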
    			return !assertionScope.Discard().Any();
    		}
    	}
     
    	public static async Task<bool> VerifyFluentAssertion(Func<Task> assertion)
    	{
    		using (var assertionScope = new AssertionScope())
    		{
    			await assertion();
    
    			return !assertionScope.Discard().Any();
    		}
    	}
    }
    
    Now you can nest the Fluent Assertion inside of the Verify statement as follows:
    
    myMock.Verify(m => 
    	m.Method(It.Is<MyData>(actual => 
    		FluentVerifier.VerifyFluentAssertion(() => 
    			actual.Should()
    				.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated), "")))));
    
    Note however that the lambda passed to Verify is an expression tree, and expression trees can't contain calls to methods with optional parameters, so you must explicitly supply the optional "because"/"becauseArgs" arguments of "BeEquivalentTo" (hence the empty string in the example above).
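
    Referring back to the 2025 update at the top of this post: if you have moved away from FluentAssertions, the same verification can be expressed with DeepEqual instead. The below is only a hedged sketch - ".Compare()" returning a boolean is what makes it usable inside Verify, but treat the exact member-exclusion method name as an assumption to check against the DeepEqual version you're using:

    myMock.Verify(m =>
    	m.Method(It.Is<MyData>(actual =>
    		actual.WithDeepEqual(expected)
    			.IgnoreProperty<MyData>(p => p.DateCreated) // assumed exclusion API - check the DeepEqual docs
    			.Compare())));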
  • Published on
    I have just released the initial version of my new open source project, which is designed to allow one set of integration tests to run either against in-memory fakes or against a "real" repository, switched using only preprocessor directives. This is useful when you have a suite of SpecFlow tests that you want to run quickly locally whilst developing (e.g. with NCrunch) and on a cloud build server (e.g. GitHub), where you don't always want to hit "real" data resources, but want the flexibility of occasionally switching to "real data mode" and running the same set of tests against a real MongoDB or SQL Server (for example). The initial version has one backing store implementation, for MongoDB, but I'm hoping that by making this open source other backing stores can be added over time. You can read more about it on the GitHub page here: TestDataDefinitionFramework
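
    To illustrate the idea (a hedged sketch only - the repository type names here are hypothetical, not the framework's actual API):

    public static class RepositoryFactory
    {
        // A preprocessor symbol (e.g. defined only in the build configuration used for
        // "real data mode") selects the backing store; the tests themselves never change.
        public static ICustomerRepository Create()
        {
    #if REAL_DATA
            // Run the same suite against an actual MongoDB-backed repository
            return new MongoCustomerRepository("mongodb://localhost:27017");
    #else
            // Default: an in-memory fake, fast enough for continuous local runs (e.g. NCrunch)
            return new InMemoryCustomerRepository();
    #endif
        }
    }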
  • Published on
    Since the advent of WSL I spend most of my time using Bash to perform my CLI tasks in Windows. Sometimes, however, I'd like to run a command as though I were running it in CMD (I'm not talking about wslview though). The example that springs to mind is starting a dotnet web application, where I'd like it to bind to the Windows IP/port, not the WSL one. So although I could run "dotnet run" from Bash, I actually want to run "dotnet run" from Windows (with minimal effort, of course). For this I've created a Bash alias called "winrun", which looks as follows:
    
    alias winrun='cmd.exe /c start cmd /k'
    
    So now if I'm in Bash and want to run a dotnet project I just type:
    
    winrun dotnet run