Blog (pg. 7)

  • Published on
    Following on from my previous post on setting up Rancher/K8S on RancherOS in a VM on Windows for local development, a common task is to set up container services within the cluster and then access those services from your local Windows machine (e.g. while developing in Visual Studio). In most cases this is straightforward: either expose ports directly using a service, or use Ingress to route host headers to the correct internal service. Kafka, however, is more complex because of the way the brokers address themselves when the initial connection is received and the broker list is sent back. In a nutshell, the default Kafka setup from the Catalog Apps in Rancher binds the brokers to their pod IP; when the broker list is sent to Windows, Windows cannot reach those IPs (unless you want to set up some kind of NAT).
    After some Googling and help from the following posts:
    https://rmoff.net/2018/08/02/kafka-listeners-explained/
    https://github.com/helm/charts/issues/6670
    I came up with the following instructions.
    STEP 1 (install Kafka in cluster): Install Kafka from the Rancher catalogue
    1. your-dev-cluster > default > Catalog Apps > Launch
    2. find and select "Kafka"
    3. switch off the "Topics UI Layer 7 Loadbalancer" (near the bottom) - don't need it in dev.
    4. click "Launch"
    5. .. Wait until all the kafka services are running ..
    6. You can now verify that the Landoop UI is running and can see the brokers by visiting the endpoint it has produced, e.g. http://rancherdev.yourdomain:30188 <-- random port, check what it says!!
    Kafka is now available in the cluster, but not from Windows. Continue with step 2.
    STEP 2 (expose Kafka externally): Change the Kafka startup command for multi-port listening
    1. your-dev-cluster > default > workloads > kafka-kafka
    2. Three dots, click "Edit"
    3. Click "show advanced options"
    4. Under Command > Entrypoint - paste the following:
      sh -exc 'export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
        export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092,EXT://rancherdev.yourdomain.com:$((9093 + ${KAFKA_BROKER_ID})) && \
        export KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,EXT:PLAINTEXT && \
        export KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT && \
        exec /etc/confluent/docker/run'
    5. Click "Upgrade"
    Add service discovery for the new ports
    1. your-dev-cluster > default > Service Discovery
    2. Click "View/Edit YAML" on kafka-kafka..
    3. Use the following lines for section "spec > ports" (assuming you have 3 instances of Kafka)
      ports:
        - name: broker
          port: 9092
          protocol: TCP
          targetPort: 9092
        - name: broker-ext0
          port: 9093
          protocol: TCP
          targetPort: 9093
        - name: broker-ext1
          port: 9094
          protocol: TCP
          targetPort: 9094
        - name: broker-ext2
          port: 9095
          protocol: TCP
          targetPort: 9095
      
    Configure nginx to use TCP ConfigMap
    1. your-dev-cluster > system > workloads > nginx-ingress-controller
    2. Three dots > edit
    3. Environment variables:
    4. "Add from Source" > "Config Map" > "tcp-services"
    5. Click "Upgrade"
    Expose the ports using the Ingress TCP ConfigMap
    1. your-dev-cluster > system > resources > config maps > ns: ingress-nginx > tcp-services
    2. Three dots, click "Edit"
    3. Add the following entries:
              - key = 9093
              - value = kafka/kafka-kafka:9093
              - key = 9094
              - value = kafka/kafka-kafka:9094
              - key = 9095
              - value = kafka/kafka-kafka:9095
      
    Redeploy the Kafka services
    1. your-dev-cluster > default > workloads > tick all and click 'redeploy'
    Now, from Windows, try telnet to rancherdev.yourdomain.com on ports 9093/9094/9095, or even better, from WSL bash install kafkacat and run: kafkacat -b rancherdev.yourdomain.com:9093 -L
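    If you want to sanity-check the connection from .NET as well (e.g. from the Visual Studio project you are developing against), something like the sketch below works. This is my own snippet, not part of the original setup, and assumes the Confluent.Kafka NuGet package; the hostname and port come from the listener configuration above.

      using System;
      using Confluent.Kafka;

      class KafkaConnectivityCheck
      {
          static void Main()
          {
              // Bootstrap against one of the externally advertised EXT listeners configured above.
              var config = new AdminClientConfig
              {
                  BootstrapServers = "rancherdev.yourdomain.com:9093"
              };

              using var admin = new AdminClientBuilder(config).Build();

              // The metadata response should list the brokers as rancherdev.yourdomain.com:9093/9094/9095
              // (the EXT advertised listeners), not the unreachable pod IPs.
              var metadata = admin.GetMetadata(TimeSpan.FromSeconds(10));

              foreach (var broker in metadata.Brokers)
              {
                  Console.WriteLine($"Broker {broker.BrokerId}: {broker.Host}:{broker.Port}");
              }
          }
      }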
  • Published on
    Little cheat sheet for setting up a single-node Kubernetes/Rancher on a developer machine using Hyper-V, without tying it to the DHCP IP address that was issued at the time of creation.
    Setup Rancher on RancherOS
    1. Download the RancherOS Hyper-V ISO image from the GitHub repo
    2. Set up a Hyper-V VM with the bootable ISO set as the boot device (with internet connectivity - I used 4 vCPUs, 16GB RAM and a 500GB vHDD)
    3. Boot the VM and allow Linux to boot
    4. Type the following command (uses a password to avoid SSH keys):
      sudo ros install -d /dev/sda --append "rancher.password=yourpassword"
      
    5. Reboot and skip the CD boot step (i.e. boot from the hard disk)
    6. Log in with "rancher" and "yourpassword" - at this point you may wish to get the IP and switch to another SSH client, such as PuTTY, and log in from there.
    7. Create an SSL certificate for your "rancherdev" domain - from your rancher home directory
      docker run -v $PWD/certs:/certs -e SSL_SUBJECT="rancherdev.yourdomain.com" paulczar/omgwtfssl
      
    8. Optionally, you can now delete this container/image from Docker
    9. Run the following command to start Rancher in a Docker container (with persistent storage and custom SSL certificate)
      docker run -d -v /mnt/docker/mysql:/var/lib/mysql -v $PWD/rancher:/var/lib/rancher -v $PWD/certs/cert.pem:/etc/rancher/ssl/cert.pem -v $PWD/certs/key.pem:/etc/rancher/ssl/key.pem -v $PWD/certs/ca.pem:/etc/rancher/ssl/cacerts.pem --restart=unless-stopped -p 8080:80 -p 8443:443 rancher/rancher
      
    10. In order to internally resolve the custom rancherdev domain in RancherOS, add a loopback record for it to the hosts file
      echo "127.0.0.1 rancherdev.yourdomain.com" | sudo tee -a /etc/hosts > /dev/null
      
    11. Rancher should now be running on the VM's public IP (run "ifconfig" to get your VM IP if you don't have it already)
    12. On your host OS (e.g. Windows) add this IP to the hosts file against "rancherdev.yourdomain.com" (c:\windows\system32\drivers\etc\hosts)
    13. Browse to https://rancherdev.yourdomain.com:8443 in your web browser
    14. Follow the wizard to set up the password/server name etc. for Rancher
    Create a new Kubernetes cluster using Rancher
    1. In the Rancher browser UI - select to add a new cluster
    2. Choose "Custom" and use all the defaults, no cloud provider, [I disabled recurring etcd snapshots in the advanced options since this is a dev setup] - click Next
    3. In the next screen, choose all the Node Roles (etcd, Control Plane, Worker) - expand Advanced options and set the public and internal address to be 127.0.0.1 to ensure the node can survive an external IP change (or another copy running)
    4. Copy the generated Docker command to the clipboard and press Done - it should look something like this:
      sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.6 --server https://rancherdev.yourdomain.com:8443 --token XXX --ca-checksum XXX --node-name my-dev-node --address 127.0.0.1 --internal-address 127.0.0.1 --etcd --controlplane --worker
    5. Paste and run the command in the RancherOS shell
    6. Rancher should then provision the Kubernetes cluster
    NB. Any links generated by the Rancher UI to containers you install will use "127.0.0.1" as the URL, which is of course wrong from your host OS. You will need to manually enter the URL as rancherdev.yourdomain.com.
    Surviving an IP Change
    If you fire up the VM for the first time on another machine, or your DHCP lease recycles and your external IP changes, follow these steps to get up and running again:
    1. Run the VM as normal in Hyper-V
    2. Login via the Hyper-V console with rancher/yourpassword
    3. Get the IP address of the running RancherOS
      ifconfig
    4. Update your Windows hosts file (c:\windows\system32\drivers\etc\hosts) with an entry for rancherdev.yourdomain.com pointing to the VM IP
    5. Browse to the rancher URL and give it some time to come back online
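    Purely as an optional check (my own addition, not part of the original steps), you can confirm Rancher is reachable again from the host with a few lines of C#. Note that the omgwtfssl certificate from step 7 is self-signed, so for a quick dev-only check the validation callback below simply accepts it.

      using System;
      using System.Net.Http;
      using System.Threading.Tasks;

      class RancherReachabilityCheck
      {
          static async Task Main()
          {
              // Dev-only: accept the self-signed certificate rather than trusting its CA on the host.
              var handler = new HttpClientHandler
              {
                  ServerCertificateCustomValidationCallback = (request, cert, chain, errors) => true
              };

              using var client = new HttpClient(handler);

              // Assumes the Windows hosts file entry for rancherdev.yourdomain.com has been updated.
              var response = await client.GetAsync("https://rancherdev.yourdomain.com:8443");
              Console.WriteLine($"Rancher responded: {(int)response.StatusCode} {response.StatusCode}");
          }
      }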
  • Published on
    I have used Redis caching with the StackExchange.Redis client in .NET across various projects and each time I find myself solving the same problems. The main problem, aside from abstracting the client and solving a few other issues (see below), is usually that my JSON data is bigger than Redis would like and it starts to perform badly or throws errors because the "qs" is full. I know there are other serialisation formats to try which might save some space, but my preference is to continue with JSON. I have created a GitHub repository called ChunkingRedisClient, which wraps up this boilerplate functionality in a central place. You can also install the current build as a NuGet package. Below is the write-up from the README: ---
    # Chunking Redis Client
    A library which wraps the StackExchange.Redis client, specifically using JSON serialisation, and adds functionality such as chunked reading/writing and sliding expiration.
    
    The purpose of this library is to create a re-usable library of code (NB. which I need to put into a NuGet package) for wrapping the StackExchange.RedisClient and solving the issues I usually need to solve.
    
    Those being:
    
    * IoC wrappers/abstractions
       - Just take your dependency on "IRedisClient<TKey, TItem>"
       - By default you should configure your DI container to inject the provided RedisClient<TKey, TItem>
       - Since IoC is used throughout you also need to configure:
         ~ IRedisWriter<TKey, TItem> -> JsonRedisWriter or ChunkedJsonRedisWriter
         ~ IRedisReader<TKey, TItem> -> JsonRedisReader or ChunkedJsonRedisReader
         ~ IRedisDeleter<TKey, TItem> -> JsonRedisDeleter or ChunkedJsonRedisDeleter
         (note: for any one combination of TKey, TItem, ensure the decision to chunk or not is consistent - a registration sketch follows this list)
         ~ IKeygen<TKey> to an object specific implementation, like GuidKeygen
         ~ For chunking, locking is required:
                 IRedisLockFactory -> RedisLockFactory
                 To override the default of InMemoryRedisLock, call RedisLockFactory.Use<IRedisLock>() <-- your class here
         
    * Strongly typed access to the cache
      - Use any C# object as your TKey and TItem, given that:
          ~ Your TKey is unique by GetHashCode(), or implement your own Keygen
          ~ Your TItem is serialisable by Newtonsoft.Json
          
    * Implementing the StackExchange Connection Multiplexer
      - This is handled by the RedisDatabaseFactory
      - Not using the usual "Lazy<ConnectionMultiplexer>" approach, as I want to support one multiplexer per connection string (if your app is dealing with more than one cache)
      - The multiplexers are stored in a concurrent dictionary where the connection string is the key
      - The multiplexer begins connecting asynchronously on first use
        
    * Sliding expiration of cache keys
      - Pass in the optional timespan to read methods if you want to use sliding expiration
      - This updates the expiry when you read the item, so that keys which are still in use for read purposes live longer
      
    * Chunked JSON data
      - This solves a performance issue whereby Redis does not perform well with large payloads.
      - Sometimes you may also have had errors from the server when the queue is full.
      - The default chunk size is 10KB which can be configured in the ChunkedJsonRedisWriter
      - The JSON data is streamed from Newtonsoft into a buffer. Every time the buffer is full it is written to Redis under the main cache key with a suffix of "chunkIndex"
      - The main cache key is then written to contain the count of chunks, which is used by the reader and deleter (there is a rough sketch of this idea below, after the list)
      
    * Generating keys for objects
      - I don't like using bytes for keys as they are not human readable, so I like to generate unique strings
      - There is no non-intrusive way of providing a type-agnostic generic keygen, therefore you must write your own. If you write something for a CLR type, consider contributing it to the project!
      - Since we know Guids are unique, I have demonstrated the ability to create custom keygens.
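    To show how the registrations from the IoC bullet above might hang together, here is a rough wiring-up sketch. The interface and class names come from the list above; the container (Microsoft.Extensions.DependencyInjection), the Order type, the namespace and the exact generic signatures are my assumptions, so check the package for the real shapes.

    using System;
    using ChunkingRedisClient;                       // assumed namespace of the package
    using Microsoft.Extensions.DependencyInjection;  // any IoC container would do

    public record Order(Guid Id, string Description); // hypothetical cached item

    public static class OrderCacheRegistration
    {
        public static IServiceCollection AddOrderCache(this IServiceCollection services)
        {
            // One consistent decision per (TKey, TItem) pair: Orders are chunked everywhere.
            services.AddSingleton<IRedisClient<Guid, Order>, RedisClient<Guid, Order>>();
            services.AddSingleton<IRedisWriter<Guid, Order>, ChunkedJsonRedisWriter<Guid, Order>>();
            services.AddSingleton<IRedisReader<Guid, Order>, ChunkedJsonRedisReader<Guid, Order>>();
            services.AddSingleton<IRedisDeleter<Guid, Order>, ChunkedJsonRedisDeleter<Guid, Order>>();

            // Key generation and, because chunking is used, locking.
            services.AddSingleton<IKeygen<Guid>, GuidKeygen>();
            services.AddSingleton<IRedisLockFactory, RedisLockFactory>();

            return services;
        }
    }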
    
    
    The code can be extended to support other serialisation types (TODO), distributed locks (TODO), different ways of generating keys or whatever you need it to do.
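    To make the chunked-write idea above concrete, here is a standalone illustration of the algorithm as described: split the serialised JSON into fixed-size chunks, store each chunk under the main key plus a chunk-index suffix, and store the chunk count under the main key. This is my own sketch of the idea, not the library's actual code (which streams from Newtonsoft into a buffer), and the exact key format may differ.

    using System;
    using System.Collections.Generic;

    public static class ChunkingSketch
    {
        // Returns the key/value pairs that a chunked write would send to Redis.
        public static IDictionary<string, string> ToChunkedEntries(string cacheKey, string json, int chunkSize = 10 * 1024)
        {
            var entries = new Dictionary<string, string>();
            var chunkCount = 0;

            for (var offset = 0; offset < json.Length; offset += chunkSize)
            {
                var length = Math.Min(chunkSize, json.Length - offset);
                // Each chunk lives under the main cache key with a chunk-index suffix.
                entries[$"{cacheKey}:{chunkCount}"] = json.Substring(offset, length);
                chunkCount++;
            }

            // The main key holds the chunk count, which the reader and deleter use.
            entries[cacheKey] = chunkCount.ToString();
            return entries;
        }
    }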
    
  • Published on
    In DDD most objects can be categorised as either value types or entities. Value types are objects with no single identifier, just a collection of related properties; entities are objects where the ID is the ultimate identifier and all other properties are attributes of that entity. For me, the desired behaviour for equality comparisons is that entities are "equal" when they have the same ID, and value types are equal when their "composite key" matches - i.e. all the properties of the object. To model this I have created a base class for enforcing value equality and a more specialised base for an entity:
    
    public abstract class ValueEqualityObject<T> : IEquatable<T>
    {
        public sealed override bool Equals(object obj)
        {
            if (obj is null)
                return false;
    
            if (ReferenceEquals(obj, this))
                return true;
    
            if (GetType() != obj.GetType())
                return false;
    
            return Equals((T)obj);
        }
    
        public sealed override int GetHashCode()
        {
            return TupleBasedHashCode();
        }
    
        public abstract bool Equals(T other);
    
        protected abstract int TupleBasedHashCode();
    }
    
    public abstract class Entity<TId> : ValueEqualityObject<Entity<TId>>
    {
        protected Entity(TId id)
        {
            Id = id;
        }

        public TId Id { get; }

        protected override int TupleBasedHashCode()
        {
            return Id.GetHashCode();
        }

        public override bool Equals(Entity<TId> other)
        {
            return other != null
                && other.Id.Equals(Id);
        }
    }
    
    Now for each domain type I can choose which base to inherit from. For entities I simply define the ID; for value types I am prompted to define a TupleBasedHashCode and Equals method. The TupleBasedHashCode is a reminder to myself of my preferred strategy for GetHashCode, which is to use the built-in Tuple implementation :)
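    As an illustration (my own example, not from the original post), a small value type built on this base might look like the following, with the tuple doing the hash-code work:

    public class Money : ValueEqualityObject<Money>
    {
        public Money(string currency, decimal amount)
        {
            Currency = currency;
            Amount = amount;
        }

        public string Currency { get; }

        public decimal Amount { get; }

        public override bool Equals(Money other)
        {
            return other != null
                && other.Currency == Currency
                && other.Amount == Amount;
        }

        protected override int TupleBasedHashCode()
        {
            // The built-in tuple implementation combines the component hash codes.
            return (Currency, Amount).GetHashCode();
        }
    }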
  • Published on
    The Lodash memoize function caches a function call and can vary the cache items based on the parameters. By default the "cache key" is the first parameter, but often it's useful to vary by all parameters. Here is a simple wrapper that uses a custom resolver to always cache based on all the args passed to the function. With this code in place, simply import this file instead of the lodash version in your consuming code.
    
    import _memoize from 'lodash-es/memoize';
    
    export default function memoize(func)
    {
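        // Build the cache key from ALL arguments (lodash's default resolver only uses the first one).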
        const resolver = (...args) => JSON.stringify(args);
    
        return _memoize(func, resolver);
    }