Waldo Codes

Pragmatic insights on software craftsmanship and other topics.

Over the last few years, our team at work has been busy breaking our monolithic application into microservices. At times in the past, we had entertained the thought of using Docker in development. However, due to the reasons outlined in a previous post, we chose to launch processes as needed and debug project by project. A few months back, I re-evaluated and decided that the bottlenecks to productivity were holding us back. We had reached a point where it was often necessary to have 3 or 4 instances of Visual Studio running to debug through the various microservices. When debugging, we had to manually determine which microservices to run. While this approach kept the memory footprint low, it was tedious. Our local machines had installations of RabbitMQ, Elasticsearch, GrayLog, etc. that needed to be maintained. All these little friction points added up, and so it was time to try Docker again.

Visual Studio has built-in integrations with Docker. Setting up a Visual Studio project or solution with Docker and Docker Compose is relatively simple. However, when you want to pull multiple projects into Docker Compose, it requires a bit more planning. For Docker Compose to work in Visual Studio, all the projects and their Docker files must reside in one solution. This was the first challenge, as our applications are stored per project, per repository. Each application has its own Visual Studio solution (.sln) file, containing multiple sub-projects, and its own Git repository. The simplest way to make this work was to create a new Docker Compose solution that referenced all the application microservice projects.

The Setup

To illustrate, here is what the folder structure of our Git repositories looks like on disk:

C:\Repos
  – AMS.AxIntegration
  – AMS.Companies
  – AMS.Contracts
  – AMS.Documents
  – AMS.Shipments

To this structure, we added a new repository: AMS.Docker. Within the AMS.Docker folder, there is one Visual Studio solution that references all the existing Visual Studio projects. Each .NET Core project has its own Docker file. Visual Studio provides integration to add the file and fill it with boilerplate code that will run your application in a container.

Visual Studio adds some flags to the project file that define how it interacts with Docker. In debug mode, on the first run of the application, Docker pulls the base image and builds the container using the Dockerfile defined in the project. Be aware, however, that Docker uses settings from launchSettings.json; the ports specified there are where the app is exposed. Also note that Visual Studio uses Fast mode: the image is built once and cached, and the project files are mounted into the container so that changes are reflected immediately without rebuilding the image. Depending on the complexity of the application being containerized, it may not work correctly without some modifications to the Dockerfile.

Here is a typical Dockerfile for one of our APIs:

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Add IP & Network Tools
RUN apt-get update
RUN apt-get install iputils-ping -y
RUN apt-get install net-tools -y
RUN apt-get install dnsutils -y

# Build stage
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["AMS.Companies.API/AMS.Companies.API.csproj", "AMS.Companies.API/"]
COPY ["AMS.Companies.Domain/AMS.Companies.Domain.csproj", "AMS.Companies.Domain/"]
COPY ["AMS.Companies.Infrastructure/AMS.Companies.Infrastructure.csproj", "AMS.Companies.Infrastructure/"]
RUN dotnet restore "AMS.Companies.API/AMS.Companies.API.csproj"
COPY . .
WORKDIR "/src/AMS.Companies.API"
RUN dotnet build "AMS.Companies.API.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "AMS.Companies.API.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AMS.Companies.API.dll"]

Docker Compose

Docker Compose is a tool that lets you define and manage multi-container applications declaratively. It launches an entire environment, including all the containers, networking, and third-party dependencies, from a single configuration file. Before setting up Docker Compose, make sure each Dockerfile builds and runs its application in isolation.

Here is the Docker Compose file that ties together the microservices that compose our application:

version: '3.4'
name: ams

services:

  ams-ax-integration:
    image: ams-ax-integration-api
    container_name: ams-ax-integration
    build:
      context: .
      dockerfile: ../AMS.AxIntegration/source/AMS.AxIntegration.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5004:80"
    networks:
      - default

  ams-companies:
    image: ams-companies-api
    container_name: ams-companies
    build:
      context: .
      dockerfile: ../AMS.Companies/source/AMS.Companies.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5005:80"
    networks:
      - default

  ams-contracts:
    image: ams-contracts-api
    container_name: ams-contracts
    build:
      context: .
      dockerfile: ../AMS.Contracts/source/AMS.Contracts.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      # NOTE: Couldn't get this env override to work. TODO: figure it out; for now it lives in app settings.
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5006:80"
    networks:
      - default

  ams-documents:
    image: ams-documents-api
    container_name: ams-documents
    build:
      context: .
      dockerfile: ../AMS.Documents/source/AMS.Documents.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5007:80"
    networks:
      - default
 
  ams-shipments:
    image: ams-shipments-api
    container_name: ams-shipments
    build:
      context: .
      dockerfile: ../AMS.Shipments/source/AMS.Shipments.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5008:80"
    networks:
      - default

  ams-sqlserver:
    image: mcr.microsoft.com/mssql/server
    container_name: ams-sqlserver
    restart: always
    ports:
      - "1435:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Sql0nLinux?!
    volumes:
      - sql-data:/var/opt/mssql
    networks:
      - default

  ams-elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.1
    container_name: ams-elastic
    environment:
      - node.name=es01
      # No cluster here (just 1 ES instance)
      #- cluster.name=es-docker-cluster
      #- discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - default

  ams-rabbitmq:
    image: rabbitmq:3-management
    container_name: ams-rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - "5672:5672"      # AMQP port
      - "15673:15672"    # Management plugin port
    networks:
      - default

  # Elasticsearch - For Graylog
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
     
  # MongoDB - For Graylog
  mongodb:
    image: mongo:4.4
    container_name: mongodb
    volumes:
      - mongodb_data:/data/db
    networks:
      - default
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.runCommand('ping')"]  # The health check command
      interval: 30s  # The interval between health checks
      timeout: 10s  # The timeout for each health check
      retries: 5 

  # Graylog
  graylog:
    image: graylog/graylog:4.0
    container_name: graylog
    environment:
      - GRAYLOG_HTTP_EXTERNAL_URI=http://localhost:9000/
      - GRAYLOG_ROOT_PASSWORD=admin
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
        mongodb:
          condition: service_healthy
        elasticsearch:
          condition: service_healthy
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    volumes:
      - graylog_data:/usr/share/graylog/data
    networks:
      - default

  ams-seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "5341:80"
      
# Ubuntu 20.04 does not have networking set up.
# The following driver options fix things up so we can access our local network.
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.default_bridge: "true"
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
      com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
      com.docker.network.bridge.name: "docker0"
      com.docker.network.driver.mtu: "1500"

volumes:
  data01:
    driver: local
  sql-data:
  mongodb_data:
  graylog_data:
  elasticsearch_data:

From within Visual Studio, this Docker Compose project can be booted like any other VS project. With VS handling the mounting of newly compiled code into the containers, things stay relatively responsive. One drawback is that currently there is no support for 'Edit and Continue' within a debug session. This is a small tradeoff when considering the big picture.

Overall, this setup has been much easier to manage as a developer. It simplifies the setup of a new development environment and helps eliminate inconsistencies in local environments. If you haven't tried setting up Docker for development, you might be surprised at how effective it can be.

My reading list is full of books that have inspired me in one way or another. Lately I've had the unique privilege of reading a book written by my uncle, and it has now joined that list. The book's title is “The Art of the Question: How to Expand Business Ideas, Strengthen Relationships, and Lead Any Conversation”. The title is a decent summary, but I'll expand on that.

After reading through the book, I'd categorize it as a “Business & Self-Help” book, similar to Covey's “7 Habits of Highly Effective People” and Carnegie's “How to Win Friends and Influence People”. The big difference is that this book's focus is very narrow: it's all about questions. I use questions often in conversation, but I had never considered all the facets of their use. This book provides that valuable insight.

Building relationships is a thread that runs throughout the book. Relationships are important in both personal life and business, and questions are one of the primary ways we show interest in the lives of others. The book cautions us to examine our motives when asking questions. It looks at personality differences as well; by building on common ground and genuine respect, these differences can often be set aside. Aspects of trust, emotion, and the feeling of safety in relationships are also examined.

The book is targeted toward boardrooms and business leaders. It zeroes in on how to have productive meetings, meetings where things don't dissolve into chaos. These tips are helpful for any sort of meeting. When reading, I like to ask: how does this lesson apply to me? Over the years as a software engineer, I've embraced Domain-Driven Design (DDD). I've found that DDD tends to provide greater value to the business than other techniques. Why? Because DDD focuses first on understanding the business's core knowledge domain. The process of building great software for a business is all about understanding. Developers need to understand what to build, and that is all about asking the right questions.

Asking the right questions helps you arrive at the right solution. Breaking down a business domain to codify it often results in lots of questions. I've worked on projects where the right questions brought process improvements to light. Process change is often cheaper and more impactful than improved software, and the right line of questioning pulls out the non-technical, common-sense solutions. These moments are delightful and a win for the business, as software isn't cheap to create and maintain.

Life itself is a process of continual improvement. We try things, we fail, we learn, and hopefully we get better. Reading lets us build on the wisdom of others; it provides insight that might otherwise take many lifetimes to gain. It's not always easy to self-reflect and target areas for improvement, but it's always worthwhile. The Art of the Question will give you a gentle nudge in that direction.

Hey! Where’s the context?

If you're not familiar with Hangfire, it's a background job runner. Long-running tasks are enqueued to a temporary storage medium, then dequeued and processed. If you've ever used Hangfire, you may have discovered that your background code loses access to the app context. This post will outline the issue and one possible solution.

Let's first take a look at how Hangfire works. It serializes the parameters passed to the function you want to run on a background thread. As jobs are run, the job parameters are deserialized and passed back into your function.
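To make that concrete, here is a minimal, hypothetical example (IInvoiceService and orderId are made up for illustration) of the kind of call Hangfire serializes:

using Hangfire;

public interface IInvoiceService {
  void SendInvoice(int orderId);  // hypothetical service used only for illustration
}

public static class EnqueueExample {
  public static void Queue(int orderId) {
    // The method call expression and its argument are serialized into job storage.
    // A Hangfire worker later deserializes them and invokes SendInvoice on a background thread.
    BackgroundJob.Enqueue<IInvoiceService>(svc => svc.SendInvoice(orderId));
  }
}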

Hangfire runs your code on a thread outside the context of your application. Ambient contexts such as HttpContext are unavailable, which is why you can't access the app's context from within a background job.

One simple solution is to pass values from the context into your job as a parameter. This is simple, but if you have a large application with several background tasks it may not be a good solution. Do you want to add another parameter to all Hangfire jobs?

There is another strategy we can use to get context data into Hangfire jobs. Hangfire, like MediatR or the .NET web stack, has a pipeline. Using the filters provided in the Hangfire pipeline, job data can be enhanced inline.

Filters

Pipeline actions are abstracted to just two interfaces: IClientFilter and IServerFilter. When jobs are enqueued, filters of type IClientFilter are invoked.

/// <summary> Client filter runs before the Hangfire job runs. </summary>
public class ClientExampleFilter : IClientFilter {
  private readonly IUserService _userService;

  public ClientExampleFilter(IServiceScopeFactory serviceScopeFactory) {
    var scope = serviceScopeFactory.CreateScope();
    _userService = scope.ServiceProvider.GetService<IUserService>();
  }

  public void OnCreated(CreatedContext filterContext) {}

  public void OnCreating(CreatingContext filterContext) {
    if (filterContext == null) {
      throw new ArgumentNullException(nameof(filterContext));
    }

    var user = filterContext.GetJobParameter<string>("User");

    if (string.IsNullOrEmpty(user)) {
      var userVal = _userService.GetCurrentUser();
      filterContext.SetJobParameter("User", userVal);
    }
  }
}

Filter values are pushed into jobs as parameters when enqueued and picked back up when the job runs. Job filters can inherit from JobFilterAttribute and be scoped per job, or they can be registered globally.

The interface for IServerFilter matches that of IClientFilter. The only difference is when the filter runs: IServerFilter implementations run when the job is invoked. Look at the client example and then imagine the IServerFilter code ;^)
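If imagination fails, here is a rough sketch (not from the Hangfire docs) of a matching server-side filter that reads back the job parameter set by the client filter:

/// <summary> Server filter runs around job execution on the Hangfire server. </summary>
public class ServerExampleFilter : IServerFilter {
  public void OnPerforming(PerformingContext filterContext) {
    if (filterContext == null) {
      throw new ArgumentNullException(nameof(filterContext));
    }

    // Read back the value the client filter stored when the job was enqueued.
    var user = filterContext.GetJobParameter<string>("User");

    // Do something useful with the value here: log it, stash it in a scoped service, etc.
  }

  public void OnPerformed(PerformedContext filterContext) {}
}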

The Hangfire documentation gives an example of filters; it's worth a read. However, it does not show how to handle dependency injection. In the example above, note that an IServiceScopeFactory gets injected. A new scope is created from that factory, and services are resolved off that scope. This is needed because of how dependency resolution works in the Hangfire pipeline; resolving scoped services directly will result in an error.

Note: Services are scoped to the lifetime of the Hangfire job. If you have nested background jobs, the nested jobs hitting the filter will have a null context.

Activators

Hangfire has another concept called an activator. Activators, like filters, may be set per job or registered globally. They give access to the job's context, parameters, the method invoked, and the DI scope used for the job. We can use the activator to pull values out of the job's parameters and do something with them. Below is an example where a job parameter is pulled from the current job and used to set a property on a scoped service.

/// <summary>
/// Handles DI injection activation.
/// </summary>
public class ExampleJobActivator : AspNetCoreJobActivator {
  public ExampleJobActivator([NotNull] IServiceScopeFactory serviceScopeFactory)
      : base(serviceScopeFactory) {}

  public override JobActivatorScope BeginScope(JobActivatorContext context) {
    var scope = base.BeginScope(context);

    var user = context.GetJobParameter<string>("User");

    var userProvider = (IUserProvider) scope.Resolve(typeof(IUserProvider));
    userProvider.CurrentUser = user;

    return scope;
  }
}

Here you can see we are overriding AspNetCoreJobActivator. If you aren’t using .net core, you may need to override JobActivator. If that is the case, you may also need to override JobActivatorScope.

Configuring Hangfire

In ASP.NET Core, Hangfire is configured with a set of extension methods that hang off IServiceCollection. This familiar builder pattern makes setup simple. I was unable to find an example that showed how to get access to the DI container. Thankfully, it's simple: the AddHangfire() method has an overload that includes a 'provider' parameter. This gives access to the DI container, allowing IServiceScopeFactory to be injected into the filter and activator instances.

/// <summary>
/// Hangfire Service Config
/// </summary>
/// <param name="services"></param>
public void ConfigureHangfireServices(IServiceCollection services) {
  services.AddHangfire(
      (provider, config) =>
          config
              .UseFilter(new ClientExampleFilter(
                  (IServiceScopeFactory)
                      provider.GetService(typeof(IServiceScopeFactory))))
              .UseActivator(new ExampleJobActivator(
                  (IServiceScopeFactory)
                      provider.GetService(typeof(IServiceScopeFactory))))
              .UseSqlServerStorage(
                  Configuration.GetConnectionString("HangfireDb"),
                  new SqlServerStorageOptions{
                      CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
                      UseRecommendedIsolationLevel = true,
                      DisableGlobalLocks = true}));

  services.AddHangfireServer();
}

This example is basic. Give it some thought and see if carrying some context data through to Hangfire could improve your codebase.

UnitsNet is a popular framework for unit of measure conversion in applications. One nice aspect of the framework is that it handles unit serialization to JSON. The documentation gives an example of a unit of measure that’s been serialized to JSON.

{ "weight": { "unit": "MassUnit.Kilogram", "value": 90.0 } }

Thus, an array of values would be represented as the following.

{ "weights": [ { "unit": "MassUnit.Kilogram", "value": 90.0 }, { "unit": "MassUnit.Kilogram", "value": 90.1 }, { "unit": "MassUnit.Kilogram", "value": 90.2 } ] }

For most applications this will work well. However, in scenarios where you need to send thousands of unit values in a JSON response, the default serialization is verbose. Looking around for a solution, I came across the idea of serializing the quantity and unit of measure as a single string. Here is an example of what that'd look like for the same scenarios shown above.

{ "weight": "90.0|kg" }

Here is the array of values in the simplified representation.

{ "weights": ["90.0|kg","90.1|kg","90.2|kg"] }

It is easy to see the savings in the size of the JSON going over the wire with this simplified representation. The example project is available on GitHub. The code is relatively simple, but I'll outline a few things that may not be so obvious:

  • The serialization uses the abbreviation for a unit. If a unit has two abbreviations that are the same (think dictionary key violation), then this serializer will fail.
  • Swagger UI will show the full model for a UnitsNet unit unless you override this with a custom OpenApiSchema.
  • Swagger OpenApiSchemas need to be provided per unit type.
  • This code does not handle collections of mixed unit types.

If you're using UnitsNet and need a lighter serializer, pull the project and have a look.

Working with legacy code is often challenging: a lack of tests, odd formatting, old technology, and dead code. A good developer will want to improve the code through refactoring. Refactoring often involves removing or replacing curious bits of code, and this is where problems arise. In the timeless words of G.K. Chesterton:

“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

It’s good to approach refactoring legacy code with a little humility. It’s likely that the software has been functioning as desired for over a decade. While the cleanliness of the code may be in question, its purpose is sound. Be careful when pulling bits of code that appear to have no execution path. These bits need diligent examination before removal.

One approach for refactoring legacy code is to add tests before refactoring. This provides confidence that no critical bits get removed from the code. Writing good tests over legacy code involves a level of deep thinking. By the time you have tests that cover the existing functionality, you'll also have an idea of how to write a cleaner version. The other benefit is that you'll be able to reuse your test cases for the new code you write.
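As a sketch of the idea, a characterization test simply pins down what the code does today before you touch it. The LegacyPriceCalculator class and its expected values below are hypothetical stand-ins:

using Xunit;

public class LegacyPriceCalculatorTests {
  // Characterization tests: capture the current behavior of the legacy code before refactoring.
  // LegacyPriceCalculator is a hypothetical legacy class; the expected values come from running it as-is.
  [Theory]
  [InlineData(100.00, 0.0, 100.00)]
  [InlineData(100.00, 5.0, 95.00)]
  public void Calculate_matches_existing_behavior(double price, double discountPercent, double expected) {
    var calculator = new LegacyPriceCalculator();

    var actual = calculator.Calculate(price, discountPercent);

    Assert.Equal(expected, actual);
  }
}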

If it is impossible to write tests for the code, be rigorous. Take the time needed to understand what's going on. Read the code carefully and see whether the comments appear relevant. Run the application and exercise the functionality for the bits that need refactoring. Use the debugger and breakpoints; often there is no substitute for stepping through odd bits of code. Seek out anyone in the organization who may know something about the functionality. Determine the use cases and write your new code with tests that fulfill those use cases. As you discover new use cases, add a representative test case.

There is an immense joy that comes from deleting nasty bits of code. Be sure you know its intent, or there could be a long path of suffering ahead.

Open Source Software (OSS) is pervasive in today's technology stacks. Most companies take advantage of OSS packages at some level in their architecture. Some OSS is packaged with commercial support, like Elasticsearch. Other OSS projects are small libraries hosted on GitHub and delivered through a package manager like NuGet or NPM. Either way, most OSS projects are licensed to allow reuse of code. A book could be written on the assorted flavors of OSS licenses. It would be very dry, perfect bedtime reading for lawyers. You can get a taste for this by looking at the licenses listed at the Open Source Initiative. OSS is by its very nature flexible, and that's great.

When OSS licensing makes the news, the story usually goes something like this: a large commercial software company has licensed some code as an OSS product. Another commercial company uses that code and breaks the terms of the license agreement. Corporate lawyers start salivating and a legal war breaks out.

I've got a different story to relate. No legal lines are crossed in this story. In general, this is more a matter of principles.

A few years back I was using proprietary software from ESRI. A reverse proxy was needed on the server for authenticating client applications. At the time, ESRI provided a proxy for the .NET Full Framework and Java. Our project was using .NET Core, which was entirely unsupported. Several users had asked ESRI on GitHub to create a new proxy for .NET Core.

The pain and complexity of all this is detailed in the Github threads below.

https://github.com/Esri/resource-proxy/issues/444
https://github.com/Esri/resource-proxy/issues/465
https://github.com/Esri/resource-proxy/issues/464

The TL;DR version of the above events: the old proxy code was garbage, and ESRI kept asking for PRs to fix it. No support for .NET Core was on the ESRI roadmap. Not at this time. Sorry, you are on your own.

Finally, I cracked and wrote my own proxy that would plug into .NET Core. I put it out on GitHub and shared the link. The masses rejoiced. Or at least the 20 other people stuck using ESRI tech with .NET Core did. I wrote a blog post about it at the time.

This is a cautionary tale. Thousands have benefited from the code. In considering the implications of all this, it seems that ESRI benefited above all. ESRI is a company making over a billion dollars a year, and OSS code bailed them out. Their customers continue to pay huge licensing fees, while the OSS developers plugging the holes in crummy commercial software aren't getting paid.

GitHub Sponsors aims to address the issue of OSS developer compensation. For something like this to work, it may require a shift in the mindset of the OSS community. Code isn't free; someone's time and energy is being consumed to solve issues. Some are fortunate enough that an employer will pick up time spent on OSS projects; others aren't.

This experience has influenced how I think about OSS code. I'm much less interested in contributing code for commercial products with OSS tentacles.

At my day job, we are revamping our document generation processes. We need to generate complex business documents containing lots of text. One of the requirements was to allow a power user to add in tokens that get replaced with data. In C# we do this with string interpolation. Formatted strings are created using a special syntax {variable}:

string thing1 = "Thing 1";
string thing2 = "Thing 2";
var result = $"{thing1} and {thing2}";

In the above example the text inside the {} is replaced with a variable value, resulting in the text “Thing 1 and Thing 2”.

What we needed was similar functionality, but at the level of an object. Each object property of a string data type could contain a templated string, and the templated strings would then get filled with values from other object properties.

Why? If you are still having a hard time imagining why you'd want or need to do this, read the following.

StackOverflow: C# Reflection: replace all occurrence of property with value in text

Now, imagine you store your data in a database and serialize it to JSON. You hydrate a C# object from the JSON and end up with an object that looks like this.

public class Contract {
  public string Customer { get; set; }
  public DateTime PurchaseDate { get; set; }
  public string Product { get; set; }
  public string Quantity { get; set; }
  public string FinePrint { get; set; }
}

Your contract will always contain a section of fine print. The fine print needs the values from several fields in your object. This would be an easy thing to solve with string interpolation.

$"This contract is between {Customer} and business X. Your purchase of {Quantity} – {Product} shall be delivered 20 years from the date of purchase – sucker"

Let's expand on this hypothetical. Now imagine that your object looks like this.

public class Contract {
  public CustomerModel Customer { get; set; }
  public DateTime PurchaseDate { get; set; }
  public ProductModel Product { get; set; }
  public PriceModel Price { get; set; }
  public QuantityModel Quantity { get; set; }
  public string FinePrint { get; set; }
  public List<string> Terms { get; set; }
}

Notice that we now have nested objects inside the Contract. Imagine that you don't control the fine print, the terms or anything else. You don't know which of these fields will need a value from any other. The users of your software want to control what data gets included in the fine print.

With ObjectTextTokens we can give the users control of text templating. All that's required is for them to know the object property structure and a simple syntax. For templating, we'll replace text between @ symbols, and for object and property access we'll dot into things: @object.property@.

“This contract is between @customer.name@ and business X. Your purchase of @quantity.total@ – @product.name@ shall be delivered right away. The price at time of delivery will not exceed the agreed upon price of @price.total@”

Originally, I solved this issue with a tiny chunk of JavaScript on the client. Later I realized we needed values from several calculated fields server-side. It's a pretty easy problem when you don't have to worry about types.

export function tokenator(object: Object) {
    return objectTokenatorIterator(object, object);
}

function objectTokenatorIterator(inputObj: Object, lookupObj: Object) {
    if (!inputObj) {
        return;
    }
    Object.keys(inputObj).forEach((k, i) => {
        if (typeof inputObj[k] === "object") {
            return objectTokenatorIterator(inputObj[k], inputObj);
        } else {
            let fieldContents = inputObj[k] as string;
            let matchedTokens = fieldContents.toString().match(/(@\w*@|@\w*\.\w.*@)/g);
            if (matchedTokens && matchedTokens.length > 0) {
                matchedTokens.forEach(t => {
                    let fieldPath = t.replace("@", '').replace("@", "");
                    let content = getNestedObjProperty(fieldPath, lookupObj);
                    fieldContents = fieldContents.replace(t, content);
                });
                inputObj[k] = fieldContents;
            }
        }
    });
    return inputObj;
}

function getNestedObjProperty(path, obj) {
    return path.split('.').reduce(function (prev, curr) {
        return prev ? prev[curr] : null
    }, obj || self)
}

Moving this code over to the server side provided a bit more of a challenge. If this sounds like what you've been looking for, head on over to Github or download the NuGet package and check it out.
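For a rough idea of what the server-side version involves, here is a simplified C# sketch (not the published package code) that uses reflection to walk an object's string properties and resolve each @path@ token against the root object:

using System;
using System.Reflection;
using System.Text.RegularExpressions;

public static class ObjectTokenReplacer {
  // Walks the public string properties of an object graph and replaces
  // @path.to.property@ tokens with values resolved from the root object.
  public static void ReplaceTokens(object root) => ReplaceTokens(root, root);

  private static void ReplaceTokens(object current, object root) {
    if (current == null) {
      return;
    }

    foreach (var prop in current.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance)) {
      if (prop.GetIndexParameters().Length > 0) {
        continue;  // skip indexers
      }

      if (prop.PropertyType == typeof(string) && prop.CanWrite) {
        var text = (string)prop.GetValue(current);
        if (string.IsNullOrEmpty(text)) {
          continue;
        }

        var replaced = Regex.Replace(text, @"@([\w\.]+)@", match => {
          var value = Resolve(match.Groups[1].Value, root);
          return value?.ToString() ?? match.Value;  // leave unresolved tokens in place
        });

        prop.SetValue(current, replaced);
      } else if (prop.PropertyType.IsClass && prop.PropertyType != typeof(string)) {
        // Recurse into nested objects, e.g. Contract.Customer.
        ReplaceTokens(prop.GetValue(current), root);
      }
    }
  }

  private static object Resolve(string path, object obj) {
    foreach (var segment in path.Split('.')) {
      if (obj == null) {
        return null;
      }

      var property = obj.GetType().GetProperty(
          segment, BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
      if (property == null) {
        return null;
      }

      obj = property.GetValue(obj);
    }

    return obj;
  }
}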

For a few years I've been refactoring a database-centric application. The codebase is very large. The core application is ASP.NET Web Forms, originally written in a time when all you needed for an app was third-party user controls and a database. Our team has broken four microservices and a new UI project out from the original ASP.NET app. This is where today's story begins.

If you work on a monolithic application with one project/solution file, you can open your project and get to work. You don't have to think about which projects need to be running for the application to work. Moving to microservices solves some issues and creates a few new ones. Debugging becomes a pain. How do you best debug microservices? Searching online, I've pulled up some Stack Overflow posts asking this same question.

Debug multiple MicroServices in Visual Studio
Local Development Experience when Working with MicroServices

A list of conclusions:
  • Use Docker
  • Write unit tests and integration tests
  • Log everything, then read the log files

If all our projects lived in .NET Core, Docker might have worked for us. Because the core application is Web Forms, it requires a large Windows Server Docker image, and the only way to run the app in that container is with a full IIS instance. Debugging requires manually attaching to the running process each time. Not a very slick process. Docker Compose looks great, so once we have everything moved to .NET Core it might be a better solution.

We write unit tests and integration tests, but sometimes you need to connect some UI and debug some code. Log files are great, but they don't replace the debugger.

If the codebase you are working on is well architected, you may be able to debug microservices in isolation. Our codebase has enough seams that we don't need to run all the microservices at once, yet we can rarely do a debug session with only one running.

The obvious answer is to open all the projects in Visual Studio and start debugging. How good are the specs on your dev machine? One Visual Studio instance can take over a gig of RAM. The difference between starting one of our projects in Visual Studio vs. dotnet run was 1.3 GB vs. 450 MB.

For our situation, a hybrid approach works best: some way to select which code to debug in Visual Studio and which to run in the background to support debugging.

Our team created a small console application to launch our projects. The first few iterations were cumbersome, but with a few tweaks it grew into a decent yet simple solution. If you're interested, it's out on GitHub. The launcher is targeted at Visual Studio, but the dev environment is configurable. I haven't tried it, but I'm guessing configuring it for Visual Studio Code should work as well.
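The launcher itself is on GitHub, but the core idea is simple enough to sketch (the project paths below are hypothetical): run the services you are not actively debugging as plain dotnet run processes, and keep Visual Studio open only for the code you care about.

using System.Diagnostics;

// Hypothetical paths to the services that should run in the background
// while the interesting ones are debugged in Visual Studio.
var backgroundProjects = new[] {
  @"C:\Repos\AMS.Companies\source\AMS.Companies.API",
  @"C:\Repos\AMS.Documents\source\AMS.Documents.API",
};

foreach (var projectDir in backgroundProjects) {
  // Each service gets its own console window and uses a fraction of the memory a VS instance would.
  Process.Start(new ProcessStartInfo {
    FileName = "dotnet",
    Arguments = "run --no-launch-profile",
    WorkingDirectory = projectDir,
    UseShellExecute = true,
  });
}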

For now this is meeting our needs. As we continue to refactor, we'll continue to evaluate how it's working. If you're in a similar boat with debugging microservices, give it a try. Or if you have a better solution, leave a comment and clue me in.

Over the last few years, low code software development has risen in prominence. Forbes published an article detailing the virtues of low code systems. Investors have poured major money into low code platforms, large corporations are embracing them, and CTOs and managers are adding “Low Code” to their buzzword vocabulary lists, right next to terms like “Agile” and “Digital Enablement”. It's all the rage, but does it live up to the hype?

Let's start with the basic forensic principle of following the money. Who stands to gain the most from low code development? The purveyors of low code development systems. If you want to see how many different systems are available, go do a Google search; I don't want to give the charlatans free advertising. How much does a company pushing a low code platform care about your bottom line? Are they invested in the success of your company?

Let’s consider two companies wanting to buy low code systems.

(A) A company with an existing development team that wants faster project turnaround. Development teams are expensive, and they see this as a way to lower cost and go fast, perhaps with fewer developers using low code systems.

(B) A company without internal development capability. They see that their business needs custom software. They don't want to hire developers because of the cost. They want IT and all the other techy people in the organization to build the applications.

Technical sales will market low code systems to the upper echelon of an organization. This is a smart move on the part of salespeople: target the uninformed. However, remember this.

The opinion of 10,000 men is of no value if none of them know anything about the subject. -Marcus Aurelius

Low code systems aren’t of much interest to skilled professional developers. Unless of course you're a developer working for a company that sells a low code platform. Let’s take a look at how these systems are marketed.

Low code systems marketing points:

  • Faster development
  • Citizen developers
  • Lower cost
  • Reusability
  • Drag and Drop Coding
  • Extensible

Let’s now look a little closer at the bullet points keeping company (A) and (B) in mind.

Faster development: We live in a time when our applications update continuously and change rapidly. Many things have enabled software to be developed at this pace. I'll draw your attention to just a few:

  • CI/CD Pipelines
  • Package Managers (NuGet, NPM, PIP) etc.
  • Agile Development Processes
  • Unit/Integration/UI testing
  • Incredible IDE’s

Regardless of the size of your development team, they can take advantage of any of the above technologies to increase their velocity. If your development team isn't moving fast enough, perhaps they need training. Or you need to replace a few developers.

Citizen developers: This term means redirecting productive employees to coding. They will need to spend time learning how to code in a low code environment. They may not be typing code, but they’ll still have to learn many programming concepts. There is a hitch though. Not everyone can code. But a coder can come from anywhere.

‘In the past, I have made no secret of my disdain for Chef Gusteau's famous motto: Anyone can cook. But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere.’ – Anton Ego – Ratatouille

A company is rolling the dice when it comes to citizen developers. You don't know what you're getting. There are a few other issues too. How do you keep your citizen developers from deleting every record in the database? Etc. Eventually a company will realize that citizen development isn't working. At that point they'll start looking for a low code contractor. Yes, buy a low code system, then go look for the very small pool of developers who specialize in that system. Scarcity will drive up the price.

One of the traits that makes a good developer valuable is that they are looking for the best ways to solve a problem. Many solutions aren’t technical, but rather business process. Often the collaboration of a subject matter expert and developer yields great results. The type of thinking required to make good software can also be used to improve other areas of the business. Do citizen developers have these skills? Maybe?

Lower Cost: The immediate question is: lower cost over what timespan? Lower initial cost? Perhaps, but what about the long-term cost? Once code gets locked into a proprietary system, you have other things to think about. What if the low code platform provider goes out of business? What is the initial cost? What about yearly service fees? What's the cost to find that small pool of specialized contractors who know how to use the low code system? Remember the old adage: you get what you pay for.

Low code systems have more layers of abstraction, and this can result in slower code. Perhaps these are small inefficiencies, but if you're a large business, slow apps burn time, and time is money.

Reusable Code: I’m not sure why this is even touted as a feature. Pretty much all coding paradigms build on reusability.

Drag and Drop Coding: Drag and drop coding is an interesting feature. Drag and drop code interfaces are being used to teach children the fundamentals of logic. This is a great use case, and as of right now the only valid one.

Extensibility: Not everything will fit into the well-defined box of low code widgets. When you can't build what you need to in your low code environment, you have to write code. This is where things get ugly. You have to write code that conforms to the interfaces of the low code platform. Hopefully you can do what you need to without jumping through dozens of layers. Most low code developers will likely be pretty lost at this point. Get out your checkbook again.

Summary: Let's get back to the two companies that I mentioned earlier. What will happen if company (A) buys a low code environment? It's likely that any decent developer will leave; development resumes don't thrive on low code. The developers left at the company (the ones who made things slow) will continue to go slow with low code. Thus company (A) will have achieved one of its goals: fewer developers.

If company (B) purchases a low code system, they'll soon find themselves dealing with an uprising in IT. To combat the uprising, they'll be forced into finding contractors to code things for them. They'll have the disadvantage of locked-in technologies. Their dreams will never be realized.

Alternative to Low Code: The alternative to low code systems is to embrace the development process. If you can, hire internal developers. If that's not an option, look to any number of reputable code consulting companies.

A company's problem-solving ability may well define its future value. If you are looking for some stocks to short, look on the web at any low code provider's testimonials section. That's a good list to bet against.

Recently I loaded a NuGet package into my Visual Studio project, only to be greeted with the following error.

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first

A little searching yielded a post detailing PowerShell security over on Hanselman's blog. This post got me headed in the right direction to solve the issue.

The NuGet package I tried to install was running an unsigned PowerShell script used to add a few code files into the project. Note that before you do the following, you need to trust the author of the script and understand the implications of lowering the PowerShell security model. If you are comfortable that the script being run is not malicious, proceed.

  • Close Visual Studio
  • Open PowerShell as an administrator
  • Run Set-ExecutionPolicy Unrestricted
  • Open Visual Studio
  • Attempt the NuGet install again
  • Back in PowerShell, run Set-ExecutionPolicy AllSigned

Note: Do not leave the execution policy unrestricted! Also, you will need to leave PowerShell running while you do this, or you'll get the same error.
