
Blog (pg. 3)

  • Published on
    Recently I ran into a situation where I wanted to proxy the calls made by a client application (i.e. JavaScript AJAX calls) for a web application hosted on another machine. The configuration looked something like this:
    https://www.mywebsite.com -> user machine (javascript) -> https://api.mywebsite.com/api-1/endpoint
    I didn't want to run the "www.mywebsite.com" code on my machine; I wanted to run the deployed website in my browser, but with all calls to "api.mywebsite.com/api-1" routed to my local development environment so I could either debug or mock the API responses as I wanted. The solution comprised three basic elements:
    1. An Nginx reverse proxy running in Docker on my machine
    2. Self signed SSL certificates for TLS termination in Nginx
    3. Running Chrome with custom host resolver rules (this could also be done in /etc/hosts but I only wanted a temporary solution)
    If your API calls don't use HTTPS then you don't need the TLS termination, but in my case I did, so I created some self-signed certificates that I would later trust within Chrome:
    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nginx-selfsigned.key -out nginx-selfsigned.crt
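    The command above prompts for the certificate subject fields interactively. If you'd rather avoid the prompts, the subject can be passed on the command line with -subj (the CN below is just an example value; it isn't critical here, since we'll be trusting the certificate manually in Chrome anyway):
    
    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout nginx-selfsigned.key -out nginx-selfsigned.crt \
        -subj "/CN=api.mywebsite.com"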
    Now create a reverse-proxy Nginx configuration file, like below:
    
    server {
        listen 0.0.0.0:80;
        listen 0.0.0.0:443 ssl;
        server_name host.docker.internal;

        ssl_certificate       /etc/ssl/certs/server.crt;
        ssl_certificate_key   /etc/ssl/certs/server.key;

        # The trailing slashes mean a request for /api-1/endpoint is forwarded
        # upstream as /endpoint (the /api-1 prefix is stripped).
        location /api-1/ {
            proxy_pass http://host.docker.internal:5001/;
        }
    }
    
    This essentially routes the traffic from "https://localhost/api-1" to "http://host-machine:5001/", which is where I can run the development mode API. With those things in place, whenever I want to run the deployed website against my local machine APIs, I can use the commands below:
    
    docker run --rm \
    -p 80:80 -p 443:443 \
    --name nginx-reverse-proxy \
    --add-host=host.docker.internal:host-gateway \
    -v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf \
    -v $(pwd)/nginx-selfsigned.crt:/etc/ssl/certs/server.crt \
    -v $(pwd)/nginx-selfsigned.key:/etc/ssl/certs/server.key \
    nginx &
    google-chrome https://www.mywebsite.com/ --host-resolver-rules="MAP api.mywebsite.com 127.0.0.1"
    
    The first time you run it, Chrome won't trust your SSL certs, but if you open an API URL in a new tab and manually trust it, then the client-side calls will work :)
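    If you'd rather not click through the warning at all, one alternative on Linux (assuming the certutil tool from libnss3-tools is available) is to add the self-signed certificate to the NSS database that Chrome uses, something like:
    
    # Trust the self-signed cert as a peer in the current user's NSS database
    # (the path assumes Chrome's default per-user profile location)
    certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n "nginx-selfsigned" -i nginx-selfsigned.crt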
  • Published on
    Immersed is an application that allows you to connect to your computer from within Virtual Reality and use the full power of your Windows/Mac/Linux machine to run apps etc., but using virtual screens that are in a VR world. This can be useful if you want to get away from the home office environment and into a world where you can fully concentrate on the task at hand, or if you're travelling and can't take your usual multi-screen physical setup with you.
    The Windows and Mac versions of Immersed allow you to create additional "virtual" monitors, so that when you enter VR you see not only your physical display but one or more "virtual" displays too. Unfortunately the Linux version of the desktop agent does not yet support this feature, so if you have a 2-screen setup in the real world, you'll see 2 screens in VR. However, there is a way of adding these screens in Linux using a tool called "xrandr" (works with X11).
    I originally tried the methods documented on the virtual-display-linux GitHub page; however, this doesn't work when using the Cinnamon desktop on Linux Mint, as the additional virtual screen causes Cinnamon to crash on startup. It turns out that you don't actually need to create a virtual screen, as long as you have some unused graphics ports on your machine. For example, on my machine running xrandr -q shows:
    Screen 0: minimum 320 x 200, current 3200 x 1200, maximum 16384 x 16384
    eDP-1 connected (normal left inverted right x axis y axis)
       1920x1080     60.02 +  60.01    59.97    59.96    59.93    48.00  
       ...
    DP-1 disconnected (normal left inverted right x axis y axis)
    HDMI-1 disconnected (normal left inverted right x axis y axis)
    DP-2 disconnected (normal left inverted right x axis y axis)
    HDMI-2 disconnected (normal left inverted right x axis y axis)
    DP-3 disconnected (normal left inverted right x axis y axis)
    HDMI-3 disconnected (normal left inverted right x axis y axis)
    DP-3-1 disconnected (normal left inverted right x axis y axis)
    DP-3-2 connected primary 1600x1200+0+0 (normal left inverted right x axis y axis) 367mm x 275mm
       1600x1200     60.00*+
       ...
    DP-3-3 connected 1600x1200+1600+0 (normal left inverted right x axis y axis) 367mm x 275mm
       1600x1200     60.00*+
       ...
    
    The "eDP-1" adapter is my physical laptop screen (which is connected but with the lid closed is not active). Then my two physical monitors plugged into my docking station are both running under the "DP-3" display port, as "DP-3-2" and "DP-3-3".. This means I have "HDMI-1", "HDMI-2", "DP-3-1" all available to "plug something in". You don't actually need to physically plug something in to use these in VR though, so I can just activate one (or more) of them at my desired resolution and position it wherever I'd like it to appear when I enter VR. In my case I like to split my two physical monitors apart with a 3rd low res, wide monitor (1400x900) that makes working in VR easier. (This was on the Quest 2 headset, see update below for higher resolution headsets). For example:
    Real world:
    /--------\ /--------\
    | DP-3-2 | | DP-3-3 |
    \--------/ \--------/
    
    Virtual world:
    /--------\ /--------\ /--------\
    | DP-3-2 | | HDMI-1 | | DP-3-3 |
    \--------/ \--------/ \--------/
    
    To achieve this, I've written a shell script which will add the new display settings before starting up the Immersed Agent, and will then reset the settings when the process finishes:
    
    #!/bin/sh
    # Enable the unused HDMI-1 output and place it between the two physical monitors
    xrandr --addmode HDMI-1 1400x900
    xrandr --output DP-3-2 --pos 0x0 --output HDMI-1 --mode 1400x900 --right-of DP-3-2 --output DP-3-3 --right-of HDMI-1
    # Run the Immersed agent, then restore the original two-monitor layout when it exits
    ~/.local/bin/Immersed/Immersed-x86_64.AppImage
    xrandr --output HDMI-1 --off --output DP-3-3 --right-of DP-3-2
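    
    One note on --addmode: it only works if a mode named "1400x900" is already known to the X server. If it isn't, you can create one first from a cvt modeline, something like the below (the timing numbers come from running cvt 1400 900 60, so regenerate them on your own machine to be safe):
    
    # Register a 1400x900 @ 60Hz mode, then attach it to the unused output
    xrandr --newmode "1400x900" 103.50 1400 1480 1624 1848 900 903 913 934 -hsync +vsync
    xrandr --addmode HDMI-1 1400x900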
    
    Update 2023: Since upgrading to the Meta Quest 3 I can now work in 1920x1080 on all my monitors, as they're clearly visible even at this resolution. This means I no longer need to put my "low res" monitor in the middle, and I actually prefer to keep my "primary" monitor in the middle (I've also switched away from using the HDMI port, as DP-3-1 was available). So the new configuration looks like the below:
    Real world:
    /--------\ /--------\
    | DP-3-2 | | DP-3-3 |
    \--------/ \--------/
    
    Virtual world:
    /--------\ /--------\ /--------\
    | DP-3-1 | | DP-3-2 | | DP-3-3 |
    \--------/ \--------/ \--------/
    
    See the updated shell script below:
    
    #!/bin/sh
    # Enable the unused DP-3-1 output to the left of the primary monitor, run Immersed, then turn it off again
    xrandr --addmode DP-3-1 1920x1080
    xrandr --output DP-3-1 --mode 1920x1080 --left-of DP-3-2
    ~/.local/bin/Immersed/Immersed-x86_64.AppImage
    xrandr --output DP-3-1 --off
    
  • Published on
    In order to set up a context when using React hooks related to a Redux store (e.g. useDispatch, useSelector) you need to have your component nested inside a "Provider" component (from the react-redux package). This isn't always possible, as not all applications are built as a single "app" with components nested under a single root. In my case I am using ReactJs.Net together with a CMS to allow the end user to define any combination of a number of pre-defined "components" on a page.
    It turns out that you don't need all components to be nested inside the same "Provider" instance; as long as the "store" itself is a singleton, you can have many "Provider" instances on the page all sharing the same "store". I wanted an easy way to start wrapping my existing components in a "Provider" component without having to change too much about my non-Redux application structure. What I came up with was a simple higher-order component (HOC), defined as a function, so that whenever I want to wrap a component with the Redux provider I simply wrap its export with a call to the HOC, e.g.
    
    import React from 'react';
    import { Provider } from 'react-redux';
    import { store } from './store';
    
    // HOC that wraps a component in a react-redux "Provider" backed by the shared singleton store
    export const withReduxStore = Component => ({ ...props }) =>
        (<Provider store={store}><Component {...props} /></Provider>);
    
    This assumes you have a singleton store, for example:
    
    import { createStore, applyMiddleware } from 'redux';
    import thunk from 'redux-thunk';
    import rootReducer from './ducks/rootReducer'
    
    // Single shared (singleton) store instance, used by every Provider on the page
    const store = createStore(rootReducer, applyMiddleware(thunk));
    
    export { store };
    
    And now, to take a component that previously didn't have access to the store context and give it access:
    
    import React from 'react';
    import { useDispatch } from 'react-redux';
    import { withReduxStore } from './state/withReduxStore.jsx';
    
    const MyExampleComponent = (props) => {
        const dispatch = useDispatch();
    
        return <>
            <button onClick={() => dispatch({ type: 'SAY_HELLO', payload: 'world' })} type="button">Dispatch Something</button>
        </>;
    }
    
    export default withReduxStore(MyExampleComponent); // <-- simply wrap the export with a call to "withReduxStore"
    
  • Published on
    I wouldn't necessarily recommend doing something like this in production code; however, I find it useful when writing SpecFlow tests - I want the Gherkin to call out a few key properties of a class, and then I want to generate a "valid" instance (one that passes any validation) but using the test data supplied. Imagine the following scenario:
    
    using System.Net.Mail;
    using AutoFixture;
    
    public class Customer
    {
        public string Firstname { get; set; }
        public string Surname { get; set; }
        public string EmailAddress { get; set; }       // validation states that this must be a valid email address
    }
    
    // imagine some kind of "valid instance builder" used in testing
    public static class TestCustomerBuilder
    {
        private static readonly Fixture Fixture = new();
    
        public static Customer AnyValidInstance()
        {
            return Fixture.Build<Customer>()
                .With(c => c.EmailAddress, Fixture.Create<MailAddress>().Address) // make sure this passes validation by default
                .Create();
        }
    }
    
    Now imagine you're writing some Gherkin that doesn't care about email - you're just testing something to do with Firstname and Surname, so you might write:
    Given the create customer request contains the below details
    | Firstname | Surname |
    | Hello     | World   |
    When the create customer endpoint is called
    Then a new customer is created with the following details
    | Firstname | Surname |
    | Hello     | World   |
    
    It's a contrived example, but you should see the point. When it comes to implementing the step definitions, I like to use the built-in helpers from SpecFlow rather than "magic strings" as much as possible (as it makes the steps more re-usable), so how about the below:
    
    [Given("the create customer request contains the below details")]
    public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
    {
       _testContext.CustomerRequest = table.CreateInstance<Customer>();
    }
    
    The problem with the above is that the created instance won't be valid, on account of it having no email address. You could code around this by manually setting only certain properties, but that introduces the re-usability problem again. Enter the "model combiner", which is designed to copy all non-null properties from a source instance to a destination instance, e.g.:
    
    [Given("the create customer request contains the below details")]
    public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
    {
       var testDataInstance  = table.CreateInstance<Customer>();
       var validInstance = TestCustomerBuilder.AnyValidInstance();
    
       ModelCombiner.Combine(testDataInstance, validInstance);
    
       _testContext.CustomerRequest = validInstance;
    }
    
    Now the request contains a "valid" customer but also has our specific data taken from the Gherkin. The model combiner class looks as below (based on an idea seen here: https://stackoverflow.com/questions/8702603/merging-two-objects-in-c-sharp):
    
    using System;
    using System.Collections.Generic;
    using AutoMapper;
    
    public static class ModelCombiner
    {
    	private static readonly HashSet<Type> SupportedTypes = new();
    
    	private static Mapper Mapper { get; } = new(new MapperConfiguration(expression =>
    	{
    		Setup<Customer>(expression);
    		Setup<SomeOtherType>(expression);
    	}));
    
    	public static T Combine<T>(T source, T destination)
    	{
    		if (!SupportedTypes.Contains(typeof(T)))
    			throw new InvalidOperationException(
    				$"Cannot combine unsupported type {typeof(T).FullName}. Please add it to the setup in {nameof(ModelCombiner)}");
    
    		return Mapper.Map(source, destination);
    	}
    
    	private static void Setup<T>(IProfileExpression expression)
    	{
    		SupportedTypes.Add(typeof(T));
    
    		// Map each property of T onto itself, but only copy values that are not null
    		expression.CreateMap<T, T>()
    			.ForAllMembers(opts => opts
    				.Condition((_, _, srcMember) => srcMember != null));
    	}
    }
    
    Another option I found online that looks worth exploring: https://github.com/kfinley/TypeMerger
  • Published on
    FluentAssertions adds many helpful ways of comparing data in order to check for "equality" beyond a simple direct comparison (for example, checking for equivalence across types and across collections, automatically converting types, ignoring elements of types, using fuzzy matching for dates, and more). Making a "fluent assertion" on something will automatically integrate with your test framework, registering a failed test if something doesn't quite match. For example, to compare an object while excluding the DateCreated element:
    
    actual.Should()
    	.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated));
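    
    The fuzzy date matching mentioned above can be combined with the same equivalency options. As a rough sketch (the precision argument to BeCloseTo differs between FluentAssertions versions, so adjust to suit your version):
    
    actual.Should()
    	.BeEquivalentTo(expected, cfg => cfg
    		.Using<DateTime>(ctx => ctx.Subject.Should().BeCloseTo(ctx.Expectation, TimeSpan.FromSeconds(1)))
    		.WhenTypeIs<DateTime>());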
    
    However, sometimes the "actual" value you want to make the assertion on is only available as part of a Moq Verify statement, which only supports matching based on a boolean return type, e.g.:
    
    myMock.Verify(m => 
    	m.Method(It.Is<MyData>(actual => 
    		actual == expected)));
    
    As you can see above, replacing "==" with a "fluent" assertion is not possible out of the box. However, there is a trick you can use by setting up the helper method below:
    
    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using FluentAssertions.Execution;
    
    public static class FluentVerifier
    {
    	public static bool VerifyFluentAssertion(Action assertion)
    	{
    		using (var assertionScope = new AssertionScope())
    		{
    			assertion();
    
    			// No captured failures means the nested assertion passed
    			return !assertionScope.Discard().Any();
    		}
    	}
    
    	public static async Task<bool> VerifyFluentAssertion(Func<Task> assertion)
    	{
    		using (var assertionScope = new AssertionScope())
    		{
    			await assertion();
    
    			return !assertionScope.Discard().Any();
    		}
    	}
    }
    
    Now you can nest the Fluent Assertion inside of the Verify statement as follows:
    
    myMock.Verify(m => 
    	m.Method(It.Is<MyData>(actual => 
    		FluentVerifier.VerifyFluentAssertion(() => 
    			actual.Should()
    				.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated), "")))));
    
    Note, however, that because the lambda passed to Verify is an expression tree, and expression trees can't contain calls that use optional arguments, you must explicitly supply the optional "because" argument of "BeEquivalentTo" (the trailing empty string above) rather than relying on its default value.