Waldo Codes

Pragmatic insights on software craftsmanship and other topics.

At my day job, we are revamping our document generation processes. We need to generate complex business documents containing lots of text. One of the requirements was to allow a power user to add tokens that get replaced with data. In C#, we do this with string interpolation. Formatted strings are created using a special syntax, {variable}:

string thing1 = "Thing 1";
string thing2 = "Thing 2";
var result = $"{thing1} and {thing2}";

In the above example the text inside the {} is replaced with a variable value, resulting in the text “Thing 1 and Thing 2”.

What we needed was similar functionality, but at the level of an object. Each object property of a string data type could contain a templated string. The templated strings would then be filled with values from other object properties.

Why? If you are still having a hard time imagining why you'd want or need to do this, read the following.

StackOverflow: C# Reflection: replace all occurrence of property with value in text

Now, imagine you store your data in a database and serialize it to JSON. You hydrate a C# object from the JSON and end up with an object that looks like this.

public class Contract {
    public string Customer { get; set; }
    public DateTime PurchaseDate { get; set; }
    public string Product { get; set; }
    public string Quantity { get; set; }
    public string FinePrint { get; set; }
}

Your contract will always contain a section of fine print. The fine print needs the values from several fields in your object. This would be an easy thing to solve with string interpolation.

$"This contract is between {Customer} and business X. Your purchase of {Quantity} – {Product} shall be delivered 20 years from the date of purchase – sucker"

Let's expand on this hypothetical. Now imagine that your object looks like this.

public class Contract {
    public CustomerModel Customer { get; set; }
    public DateTime PurchaseDate { get; set; }
    public ProductModel Product { get; set; }
    public PriceModel Price { get; set; }
    public QuantityModel Quantity { get; set; }
    public string FinePrint { get; set; }
    public List<string> Terms { get; set; }
}

Notice that we now have nested objects inside the Contract. Imagine that you don't control the fine print, the terms or anything else. You don't know which of these fields will need a value from any other. The users of your software want to control what data gets included in the fine print.

With ObjectTextTokens we can give users control of text templating. All that's required is for them to know the object property structure and a simple syntax. For templating, we'll replace text between @ symbols. For object and property access we'll dot into things: @object.property@.

“This contract is between @customer.name@ and business X. Your purchase of @quantity.total@ – @product.name@ shall be delivered right away. The price at time of delivery will not exceed the agreed upon price of @price.total@”

Originally, I solved this issue with a tiny chunk of JavaScript on the client. Later I realized we needed values from several fields calculated server side. It's a pretty easy problem when you don't have to worry about types.

export function tokenator(object: Object) {
    return objectTokenatorIterator(object, object);
}

function objectTokenatorIterator(inputObj: Object, lookupObj: Object) {
    if (!inputObj) {
        return;
    }
    Object.keys(inputObj).forEach((k) => {
        if (typeof inputObj[k] === "object") {
            return objectTokenatorIterator(inputObj[k], inputObj);
        } else {
            let fieldContents = inputObj[k] as string;
            let matchedTokens = fieldContents.toString().match(/(@\w*@|@\w*\.\w.*@)/g);
            if (matchedTokens && matchedTokens.length > 0) {
                matchedTokens.forEach(t => {
                    let fieldPath = t.replace("@", "").replace("@", "");
                    let content = getNestedObjProperty(fieldPath, lookupObj);
                    fieldContents = fieldContents.replace(t, content);
                });
                inputObj[k] = fieldContents;
            }
        }
    });
    return inputObj;
}

function getNestedObjProperty(path, obj) {
    return path.split('.').reduce(function (prev, curr) {
        return prev ? prev[curr] : null;
    }, obj);
}
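To see the whole flow end to end, here is a minimal, self-contained sketch of the same idea. The contract object, its property names, and the token values are made up for illustration:

```typescript
// Minimal sketch of the tokenator idea: walk dotted paths and replace @tokens@.
function getByPath(path: string, obj: any): any {
    return path.split(".").reduce((prev, curr) => (prev ? prev[curr] : null), obj);
}

function fillTokens(text: string, lookup: any): string {
    const tokens = text.match(/@[\w.]+@/g) || [];
    let result = text;
    for (const t of tokens) {
        // Strip the surrounding @ symbols to get the property path.
        const value = getByPath(t.slice(1, -1), lookup);
        result = result.replace(t, String(value));
    }
    return result;
}

// Hypothetical contract object with a templated string property.
const contract = {
    customer: { name: "Jane Doe" },
    quantity: { total: 3 },
    finePrint: "This contract is between @customer.name@ for @quantity.total@ units.",
};

console.log(fillTokens(contract.finePrint, contract));
// "This contract is between Jane Doe for 3 units."
```

The real library iterates every string property of the object rather than a single field, but the lookup-and-replace core is the same.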

Moving this code over to the server side provided a bit more of a challenge. If this sounds like what you've been looking for, head on over to GitHub or download the NuGet package and check it out.

For a few years I've been refactoring a database-centric application. The codebase is very large. The core application is ASP.NET Web Forms, originally written in the time when all you needed for an app was third-party user controls and a database. Our team has broken four microservices and a new UI project out of the original ASP.NET app. This is where today's story begins.

If you work on a monolithic application with one project/solution file, you can open your project and get to work. You don't have to think about what projects you need to have running for the application to work. Moving to microservices solves some issues and it creates a few. Debugging becomes a pain. How do you best debug microservices? Searching online, I've pulled up some Stack Overflow posts asking this same question.

Debug multiple MicroServices in Visual Studio
Local Development Experience when Working with MicroServices

A list of conclusions:

  • Use Docker
  • Write unit tests and integration tests
  • Log everything, then read the log files

If all our projects lived in .NET Core, Docker might have worked for us. The core application being Web Forms requires a large Windows Server Docker image. The only way to run the app in that container is with a full IIS instance. Debugging requires attaching to the running process manually each time. Not a very slick process. Docker Compose looks great, so once we have everything moved to .NET Core that might be a better solution.

We write unit tests and integration tests, but sometimes you need to connect some UI and debug some code. Log files are great, but they don't replace the debugger.

If the codebase you are working on is well architected, you may be able to debug microservices in isolation. Our codebase has enough seams that we don't need to run all the microservices at once. Yet we can rarely do a debug session with only one running.

The obvious answer is to open all the projects in Visual Studio and start debugging. How good are the specs on your dev machine? One Visual Studio instance can take over a gigabyte of RAM. The difference between starting one of our projects in Visual Studio vs. dotnet run was 1.3 GB vs. 450 MB.

For our situation a hybrid approach would be best. Some way to select which code to debug in Visual Studio, and which to run in the background to support debugging.

Our team created a small console application to launch our projects. The first few iterations were cumbersome. With a few tweaks it grew into a decent yet simple solution. If you're interested, it's out on GitHub. The launcher is targeted at Visual Studio, but the dev environment is configurable. I haven't tried it, but I'm guessing configuring it for Visual Studio Code should work as well.

For now this is meeting our needs. As we continue to refactor we'll continue to evaluate how this is working. If you're in a similar boat with debugging microservices, give it a try. Or if you have a better solution leave a comment and clue me in.

Over the last few years, low code software development has risen in prominence. Forbes published an article detailing the virtues of low code systems. Major money has been invested in low code platforms by investors. Large corporations are embracing low code systems. CTOs and managers are adding “Low Code” to their buzzword vocabulary lists, right next to terms like “Agile” and “Digital Enablement”. It's all the rage, but does it live up to the hype?

Let’s start with the basic forensic principle of following the money. Who stands to gain the most from low code development? The purveyors of low code development systems. If you want to see how many different systems are available, go do a Google search. I don’t want to give the charlatans free advertising. How much does a company pushing a low code platform care about your bottom line? Are they invested in the success of your company?

Let’s consider two companies wanting to buy low code systems.

(A) A company with an existing development team that wants faster project turnaround. Development teams are expensive. They see this as a way to lower costs and go fast: perhaps fewer developers using low code systems.

(B) A company without internal development capability. They see that their business needs custom software. They don't want to hire developers because of the cost. They want IT and all the other techy people in the organization to build the applications.

Technical sales will market low code systems to the upper echelon of an organization. This is a smart move on the part of the salespeople: target the uninformed. However, remember this.

The opinion of 10,000 men is of no value if none of them know anything about the subject. -Marcus Aurelius

Low code systems aren’t of much interest to skilled professional developers. Unless of course you're a developer working for a company that sells a low code platform. Let’s take a look at how these systems are marketed.

Low code systems marketing points:

  • Faster development
  • Citizen developers
  • Lower cost
  • Reusability
  • Drag and Drop Coding
  • Extensible

Let’s now look a little closer at the bullet points keeping company (A) and (B) in mind.

Faster development: We live in a time when our applications update continuously and change rapidly. Many things have enabled software to develop at a rapid pace. I’ll draw your attention to just a few.

  • CI/CD Pipelines
  • Package Managers (NuGet, NPM, PIP) etc.
  • Agile Development Processes
  • Unit/Integration/UI testing
  • Incredible IDEs

Regardless of the size of your development team, they can take advantage of any of the above technologies to increase their velocity. If your development team isn’t moving fast enough, perhaps they need training. Or you need to replace a few developers.

Citizen developers: This term means redirecting productive employees to coding. They will need to spend time learning how to code in a low code environment. They may not be typing code, but they’ll still have to learn many programming concepts. There is a hitch though: not everyone can code. But a coder can come from anywhere.

‘In the past, I have made no secret of my disdain for Chef Gusteau's famous motto: Anyone can cook. But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere.’ – Anton Ego – Ratatouille

A company is rolling the dice when it comes to citizen developers. You don't know what you're getting. There are a few other issues too. How do you keep your citizen developers from deleting every record in the database? Etc. Eventually a company will realize that citizen development isn't working. At that point they'll start looking for a low code contractor. Yes, buy a low code system, then go looking for the very small pool of developers who specialize in that system. Scarcity will drive up the price.

One of the traits that makes a good developer valuable is that they are looking for the best ways to solve a problem. Many solutions aren’t technical, but rather business process. Often the collaboration of a subject matter expert and developer yields great results. The type of thinking required to make good software can also be used to improve other areas of the business. Do citizen developers have these skills? Maybe?

Lower Cost: The immediate question is lower cost over what timespan? Lower initial cost? Perhaps, but what about the long-term cost? Once code gets locked into a proprietary system, you have other things to think about. What if the low code platform provider goes out of business? What is the initial cost? What about yearly service fees? What's the cost to find that small pool of specialized contractors who know how to use the low code system? Remember the old adage: you get what you pay for.

Low code systems have more layers of abstraction, which can result in slower code. The inefficiencies may be small, but if you’re a large business, slow apps burn time, and time is money.

Reusable Code: I’m not sure why this is even touted as a feature. Pretty much all coding paradigms build on reusability.

Drag and Drop Coding: Drag and drop coding is an interesting feature. Drag and drop code interfaces are being used to teach children the fundamentals of logic. This is a great use case, and as of right now the only valid one.

Extensibility: Not everything will fit into the well-defined box of low code widgets. When you can't build what you need to in your low code environment, you have to write code. This is where things get ugly. You have to write code that conforms to the interfaces of the low code platform. Hopefully you can do what you need to without jumping through dozens of layers. Most low code developers will likely be pretty lost at this point. Get out your checkbook again.

Summary: Let’s get back to the two companies that I mentioned earlier. What will happen if company (A) buys a low code environment? It’s likely that any decent developer will leave. Development resumes don’t thrive on low code. The developers left at the company (the ones who made things slow) will continue to go slow with low code. Thus company (A) will have achieved one of its goals: fewer developers.

If company (B) purchases a low code system they’ll soon find themselves dealing with an uprising in IT. To combat the uprising they’ll be forced into finding contractors to code things for them. They’ll have the disadvantage of locked in technologies. Their dreams will never be realized.

Alternative to Low Code: The alternative to low code systems is to embrace the development process. If you can, hire internal developers. If that’s not an option, look to any number of reputable code consulting companies.

A company’s problem solving ability may well define its future value. If you are looking for some stocks to short, look on the web at any low code provider’s testimonials section. That’s a good list to bet against.

Recently I loaded a NuGet package into my Visual Studio project, only to be greeted with the following error.

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first

A little searching yielded a post detailing PowerShell security over on Hanselman's blog. This post got me headed in the right direction to solve the issue.

The NuGet package I tried to install was running an unsigned PowerShell script used to add a few code files into the project. Note that before you do the following you need to trust the author of the script. Understand the implications involved in lowering the PowerShell security model. If you are comfortable that the script being run is not malicious, proceed.

  • Close Visual Studio
  • Open PowerShell as an administrator
  • Run: Set-ExecutionPolicy Unrestricted
  • Open Visual Studio
  • Attempt the NuGet install again
  • In PowerShell, run: Set-ExecutionPolicy AllSigned

Note: Do not leave the execution policy unrestricted! Also, you will need to leave PowerShell running while you do this, or you'll get the same error.

Learning to code can feel overwhelming. In this post I'll outline a few strategies for success. If you have ever learned to play a musical instrument you would understand the need to practice. Coding is the same way. You will learn best by writing code.

I often need to learn new things. My favorite technique is immersion. Jump in, and work on small goals. Overcoming small obstacles gives me motivation to keep learning. You may find it works well for you too. Let's look at an example of how that works.

To learn a new programming language (Python, say), I first pick a goal: something easy enough to understand, yet complex enough to drive curiosity. Often I choose to write a simple command line game, for example Hangman. Do not choose complex goals while learning. If you do, your mental energy will be wasted thinking through the details of how to reach your goal. Keep it simple.

Once you have your goal, break it down into small parts and focus on one part at a time. Write down a list of what you need to make a Hangman game.

Hangman Component List

  • Draw a little stick person a part at a time
  • Word list to randomly choose from
  • Get user input
  • Display output to the user
  • Limit user letter guesses to the number of stick person parts
  • Winning: user guesses the word before using all guesses
  • Losing: more letter guesses than the number of parts in the stick person

At this point you can pick one little thing to focus on. Perhaps start by figuring out how to code up a list of words. Next figure out how to make a function to randomly return one word. Then look into how to get input from a user.

As you work on each part of your goal, you will be forced to research things.

  • How do you make a list of words in Python?
  • How do you randomly select an item from a list in Python?
  • How do you get user input from the command line?
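Those first research questions have short answers in Python. Here is a minimal sketch of the word-list and random-selection pieces; the words themselves are just placeholders:

```python
import random

# A small word list to draw from (placeholder words).
WORDS = ["python", "hangman", "wizard"]

def pick_word(words):
    """Randomly select one word from the list."""
    return random.choice(words)

word = pick_word(WORDS)
print(word)

# Getting user input would be the next small win:
# guess = input("Guess a letter: ")
```

Each of these is a tiny, verifiable step, which is exactly the point of breaking the goal down.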

This method focuses mental energy on accomplishing small wins. During the process of researching, jot down things that you are curious about. Go back and read more on those things. If you are learning something entirely new, this will quickly become a large list. Focus on the small easy wins. How do you eat an elephant? One bite at a time!

Focusing on small steps helps to avoid becoming overwhelmed. Researching and reading on topics you don't understand will quickly build your knowledge. Wisdom will come as you continue to write code. Get started, then stick with it!

Several years ago, I read Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin also known as “Uncle Bob”. If you haven't read it, you might consider picking up a copy. It inspired me with a desire to write cleaner higher-quality code. The book is not technical, rather it focuses on the processes and mindset necessary to create clean code.

Uncle Bob has also created a video series that goes beyond the material covered in his book. The videos dive into technical aspects of how to write clean code. I started watching the video series from the beginning. The format is entertaining and the material is thought-provoking.

So far I've watched four episodes and I've been thoroughly entertained and inspired. Episode 1 talks about how we write code for humans, not just machines. The second episode talks about class and function names. The third and fourth episodes dig into functions.

If you're looking for an entertaining way to get knowledge this may keep your attention. I look forward to carving out time to watch more Clean Code videos!

Dependency Injection (DI) is a very useful pattern. It makes a class's dependencies obvious. If dependencies are not provided, the class can't be created. When testing, it allows the internal dependencies of a class to be swapped out for mock objects. It is a best practice when trying to write decoupled testable code.

When using the DI pattern in a codebase, you wire together all the dependencies in the application root. To avoid manually wiring up dependencies, you may turn to a DI container framework such as Autofac or Ninject. DI frameworks handle all the heavy lifting of wire-up.

At this point you may be wondering what this has to do with creating a published code API. When you consume a complex API as a user, you don't want to have to wire up all the internal details. Here is a contrived example.

// This will drive an API consumer crazy
public class TheAPIConsumersClass {
    Airport WireUpTheShinyNewAirportAPI() {
        var runwayLights = new List<BrightLight>() { new BrightLight() };
        var radar = new RadarUnit();
        var controlTower = new ControlTower(radar);
        var airport = new Airport(runwayLights, controlTower);
        return airport;
    }
}

// All your user really wants to do is something like this...
public class TheAPIConsumersClass {
    Airport WireUpTheShinyNewAirportAPI() {
        var airport = new Airport();
        return airport;
    }
}
If you don't hide the wire-up details from users, you may wish you had when they start calling with questions. There are a few things that could be done here. One is to create a factory class for users to call. For a very complex dependency graph, this might be your best option. Here is another approach: mark the constructor with dependency injection as internal, and have a public constructor that calls the internal constructor and provides the required dependencies. This way you get the benefit of DI for testing, while still providing a simple constructor for consumers.

public class MyApi {
    private readonly Thing1 _thing1;
    private readonly Thing2 _thing2;

    public MyApi() : this(new Thing1(), new Thing2()) { }

    internal MyApi(Thing1 thing1, Thing2 thing2) {
        _thing1 = thing1;
        _thing2 = thing2;
    }
}
Overall, when creating an API consumed by others be kind. Don't require them to do the complex dependency graph wire-up. Provide top level classes that hide details and make the library simple to use. Keep your DI, just don't make it public. In this case you can have your cake and eat it too.

AutoMapper is a useful tool. It saves serious time in mapping object properties between different classes. Objects are mapped when moving between layers of your application, which helps maintain clean separation between the layers.

Like any other framework that has been around for a while, it's undergone some changes. For several years AutoMapper shipped with a static API. Jimmy Bogard (the creator of AutoMapper) has a few posts on his blog that give the details.

AutoMapper and IOC

More recently the static API has been removed from AutoMapper.

Removing the static API from AutoMapper

In our codebase we moved to the newer version of AutoMapper with the IoC-friendly API. It brought a few issues to light.

Where does the Map function live? The obvious answer is you would have a line in either a controller or a command. In this case moving from the static API to a mapping service isn't a huge leap. It only requires injecting a mapper instance.

// Static API: maps from domain object to DTO
OrderDto dto = Mapper.Map<Order, OrderDto>(order);

// Instance API: the same mapping through an injected IMapper
OrderDto dto = _mapper.Map<Order, OrderDto>(order);

Backing up a little, how would you do the mapping without AutoMapper? A simple map function would do the trick.

public static OrderDto Map(Order obj) {
    // ... all the mapping
    return orderDto;
}

That map function only requires the objects being mapped. No external dependencies. It's pass-through code for unit tests. That function can live almost anywhere in the codebase, though I'd argue there are more and less correct places you'd expect to find it.

In our codebase I developed a pattern with generics to handle adding and updating data for CRUD functions. Two interfaces define the mappings: IMapTo, used for adds, and IMapOnto, used for updates. If a model implements either or both of these interfaces, I can wire up a generic command that does all the hard work. Code reuse at its best.
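The shape of that pattern might look something like this. This is my sketch of the two interfaces, not the actual code from our codebase, so treat the exact signatures as hypothetical:

```csharp
// Hypothetical sketch of the two mapping interfaces described above.
// A model opts into the generic add/update commands by implementing one or both.
public interface IMapTo<TDestination>
{
    // Add: map this model to a brand new destination object.
    TDestination MapTo();
}

public interface IMapOnto<TDestination>
{
    // Update: map this model onto an existing destination object.
    TDestination MapOnTo(TDestination state);
}
```

The generic command then only needs a type constraint on one of these interfaces to handle any model that implements it.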

IoC in my Models: No thanks. While the static interface made this plausible, the IoC interface makes it a little painful. The mapper instance could be passed into the model function via parameter injection, but that felt messy and required injection at the top level. Instead, I've created a static factory that returns my mapping instance.

public Order MapOnTo(Order state) {
    var mapper = new MapperFactory().CreateMapper<OrderMappingProfile>();
    return mapper.Map(this, state);
}

For now this works fine, but serves as an example of why the static mapper had its place in the API. RIP static mapper :(

Other Options: It is also worth mentioning that there are other mapping frameworks you might want to look at. The most interesting that I've seen to date is Mapster.

For the past year and a half, I've been working on a large GIS web application. At its backbone is ESRI technology. Here is a brief overview of our tech stack.

ArcGIS Server hosts service endpoints that are the API for our client application. The application is built with ASP.NET MVC and houses an Angular SPA. The MVC part of the application hosts a proxy. The proxy enables secure communication with the ArcGIS Server.

The ESRI Resource Proxy was written a while back. It's packaged in a .ashx file with several methods going beyond a hundred lines of code. While there is no doubt that this is a marvel of ingenuity, its glory days were likely over 10 years ago. You don't have to read far before realizing that some very basic refactoring would do this proxy a world of good.

Early in the development of our system, I had forked and modified the proxy. We had some issues in our environment that the proxy couldn't handle. With OAuth2 flows, the proxy assumed that the token endpoint would allow anonymous traffic. This is a generally reasonable assumption, but it is not true in our environment: when ArcGIS Server used Integrated Windows Authentication (IWA) to secure the token endpoint, the proxy did not pass credentials. I forked the code and set about fixing it.

In the process I discovered several other issues. One such issue was the code calling an authorization endpoint with incorrect parameters. The code doesn't have unit tests. Reading the issues section on GitHub makes one lose confidence in all 1250 lines of proxy code. Now that I've presented a picture of the code quality, let's get into the more bizarre.

ESRI products aren't open source. The license costs are rather high. If you build an application with ArcGIS Server and client-side code, as of now you need the proxy. It is not an optional chunk of the stack.

ESRI has taken the position on GitHub that the proxy is a community-supported effort. Furthermore, they have stated that they will not be creating a proxy for .NET Core. Not much point to customer support when you have a monopoly.

We migrated our code from the full framework to ASP.NET Core. In the process I did something the ESRI team should have done: I wrote a resource proxy for .NET Core. You can find it on GitHub and as a NuGet package.

The .NET Core proxy includes a built-in memory cache. If you are running a load balancer, fork the project and replace the cache provider, or submit a PR to make the cache provider configurable. :) The code has unit tests and uses the new HttpClientFactory in .NET Core.

If you can manage getting away from ESRI products do it!

Happy Coding!

(Update): ESRI has since updated their GitHub proxy page to outline new approaches, though they don't give enough information to really understand what any of the approaches mean.

Auth0 is a great solution for authentication. Swagger-UI is great for kicking the tires on your API. If you're using .NET you can pull in Swashbuckle, which is a .NET wrapper of Swagger. In development I use Swagger often, and I found the Authorize step tedious. I would use another API client like Postman to call the Auth0 API. Executing an implicit grant flow in Auth0 yielded an auth token, which I copied to the clipboard. Then I'd click the Authorize button in Swagger, type Bearer, and paste in my token. Exhausting! In this post I will show you a little trick that will make life simpler.

The Authorize button in the top right corner of the Swagger page is configurable. The sad part is that currently Swagger-UI 3.17.6 doesn't play well with Auth0. The short story is that Swagger does not support passing an audience parameter. Here is a GitHub issue with the details.

Given the situation with Swagger-UI, I thought of forking Swashbuckle and patching things up. This seemed tedious, and I tend to fork only as a last resort. I settled on a pragmatic but not all that clever solution. I realized Swashbuckle would let me replace the version of Swagger-UI it comes packaged with. That would let me add a little hack to create a cleaner authorization workflow.

In the image below you can see an extra button in the UI: [Get Auth Token]. This button hits the API endpoint, which redirects to Auth0. The user logs in and is redirected back to the Swagger-UI endpoint. The token is in the URL; it gets extracted and shown in a prompt for the user to copy to the clipboard. The user then clicks the Swagger Authorize button. When the Swagger Auth dialog appears, they paste the clipboard contents into it. This is much quicker!
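The extract step amounts to a few lines of client-side script. Here is a rough sketch of pulling the token out of the redirect URL's hash fragment; the parameter names follow the standard OAuth2 implicit flow, and the example hash value is made up:

```javascript
// Parse an access token out of an OAuth2 implicit-flow redirect hash.
// Example hash: "#access_token=abc123&token_type=Bearer&expires_in=7200"
function extractToken(hash) {
    const params = new URLSearchParams(hash.replace(/^#/, ""));
    return params.get("access_token");
}

const token = extractToken("#access_token=abc123&token_type=Bearer");
console.log("Bearer " + token); // the string pasted into Swagger's Authorize dialog
```

In the real page you would pass `window.location.hash` instead of a literal string.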

The secret to getting this working is that Swashbuckle allows you to specify a new index file. Download the Swagger-UI source from GitHub and keep the following files. Set the index file's build action to embedded resource in Visual Studio.

  • favicon-16x16.png
  • favicon-32x32.png
  • index.html
  • oauth2-redirect.html
  • swagger-ui.css
  • swagger-ui-bundle.js
  • swagger-ui-standalone-preset.js

Replace the body of the code in index.html with the code body of the index file from this Gist. Note: the rest of the code you'll need to wire this in should also be in the Gist. If you're using Swashbuckle, override the default index with your modified file by setting the IndexStream in the config.

c.IndexStream = () => GetType().GetTypeInfo().Assembly.GetManifestResourceStream("Project.API.Swagger.index.html");

Hopefully, someday something similar to this will be supported natively in Swagger.

Happy Coding!
