Waldo Codes

Pragmatic insights on software craftsmanship and other topics.

My reading list is full of books that have inspired me in some way. Lately I’ve had the unique privilege of reading a book written by my uncle, and it has earned a place on that list. The book is “The Art of the Question: How to Expand Business Ideas, Strengthen Relationships, and Lead Any Conversation”. The title is a decent summary, but I’ll expand on it.

After reading through the book, I can categorize it as a “Business & Self Help” book. It sits alongside titles like Covey’s “The 7 Habits of Highly Effective People” and Carnegie’s “How to Win Friends and Influence People”. The big difference is that this book’s focus is very narrow: it's all about questions. I use questions often in conversation, but I had never considered all the facets of question use. This book provides that valuable insight.

Building relationships is a thread that runs throughout the book. Relationships are important in personal life and business. Questions are one of the primary ways we show interest in the lives of others. The book cautions us to examine our motives when asking questions, and it looks at personality differences as well. By building on common ground and genuine respect, these differences can often be set aside. Aspects of trust, emotion, and the feeling of safety in relationships are also examined.

The book is targeted toward boardrooms and business leaders. It zeroes in on how to have productive meetings: meetings where things don't dissolve into chaos. These tips are helpful for any sort of meeting. When reading, I like to ask: how does this lesson apply to me? Over the years as a software engineer, I've embraced Domain-Driven Design (DDD). I've found that DDD tends to provide greater value to the business than other techniques. Why? Because DDD focuses first on understanding the business's core knowledge domain. The process of building great software for a business is all about understanding. Developers need to understand what to build, and that is all about asking the right questions.

Asking the right questions helps you arrive at the right solution. Breaking down a business domain to codify it often results in lots of questions. I've worked on projects where the right questions brought process improvements to light. Process change is often cheaper and more impactful than improved software. The right line of questioning pulls out the non-technical, common-sense solutions. These moments are delightful and a win for the business, since software isn’t cheap to create and maintain.

Life itself is a process of continual improvement. We try things, we fail, we learn, and hopefully we get better. Reading lets us build on the wisdom of others. It provides insight that otherwise might take many lifetimes to gain. It’s not always easy to self-reflect and target areas for improvement, but it’s always worthwhile. The Art of the Question will give you a gentle nudge in that direction.

Hey! Where’s the context?

If you're not familiar with Hangfire, it's a background job runner. Long-running tasks are enqueued to a temporary storage medium, then dequeued and processed. If you've ever used Hangfire, you may have found that your background code lost access to the app context. This post will outline the issue and one possible solution.

Let's first take a look at how Hangfire works. It serializes the parameters passed to the function you ask it to run on a background thread. As jobs are run, the job parameters are deserialized and passed back into your function.
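For example, enqueueing a job captures the method call and its arguments (IEmailService and SendReceipt here are hypothetical stand-ins for your own service):

// Hangfire serializes this call and the value 42 into job storage;
// a worker later deserializes both and invokes the method.
BackgroundJob.Enqueue<IEmailService>(svc => svc.SendReceipt(42));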

Hangfire runs your code on a thread outside the context of your application, so ambient contexts such as HttpContext are unavailable. This is why you can't access the app's context from within a background job.

One simple solution is to pass values from the context into your job as a parameter. This works, but in a large application with many background tasks it may not be a good fit. Do you really want to add another parameter to every Hangfire job?

There is another strategy we can use to get context data into Hangfire jobs. Hangfire, like MediatR or the ASP.NET web stack, has a pipeline. Using the filters provided in the Hangfire pipeline, job data can be enhanced inline.

Filters

Pipeline actions are abstracted to just two interfaces: IClientFilter and IServerFilter. When jobs are enqueued, filters of type IClientFilter are invoked.

/// <summary> Client filter runs when the Hangfire job is enqueued. </summary>
public class ClientExampleFilter : IClientFilter {
  private readonly IUserService _userService;

  public ClientExampleFilter(IServiceScopeFactory serviceScopeFactory) {
    // Create a scope and resolve services off it; resolving scoped
    // services directly would result in an error.
    var scope = serviceScopeFactory.CreateScope();
    _userService = scope.ServiceProvider.GetService<IUserService>();
  }

  public void OnCreated(CreatedContext filterContext) {}

  public void OnCreating(CreatingContext filterContext) {
    if (filterContext == null) {
      throw new ArgumentNullException(nameof(filterContext));
    }

    // Stash the current user as a job parameter so it travels with the job.
    var user = filterContext.GetJobParameter<string>("User");

    if (string.IsNullOrEmpty(user)) {
      var userVal = _userService.GetCurrentUser();
      filterContext.SetJobParameter("User", userVal);
    }
  }
}

Filter values are pushed into jobs as parameters when enqueued and picked back up when run. Job filters can inherit from JobFilterAttribute and be scoped per job, or they can be registered globally.
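Both styles are sketched below; LogEverythingAttribute stands in for whatever filter you write:

// Per job: decorate the method when the filter inherits JobFilterAttribute.
[LogEverything]
public void SendWelcomeEmail(string user) { /* ... */ }

// Globally: every job passes through the filter.
GlobalJobFilters.Filters.Add(new LogEverythingAttribute());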

The interface for IServerFilter matches that of IClientFilter; the only difference is when the filter runs. IServerFilter implementations run when the job is invoked. Look at the client example and then imagine the IServerFilter code ;^)
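If your imagination needs a prompt, a minimal sketch of a server filter might look like this:

/// <summary> Server filter runs on the worker when the job is performed. </summary>
public class ServerExampleFilter : IServerFilter {
  public void OnPerforming(PerformingContext filterContext) {
    // Read back the parameter the client filter stored at enqueue time.
    var user = filterContext.GetJobParameter<string>("User");
    // ... hand the value to whatever the job needs.
  }

  public void OnPerformed(PerformedContext filterContext) {}
}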

The Hangfire documentation gives an example of filters; it’s worth a read. However, it does not give an example of how to handle dependency injection. In the example above, note that IServiceScopeFactory gets injected. A new scope is created from that factory, and services are resolved off that scope. This needs to be done because of how dependency resolution works in the Hangfire pipeline. Resolving scoped services directly will result in an error.

Note: Services are scoped to the lifetime of the Hangfire job. If you have nested background jobs, the nested jobs hitting the filter will have a null context.

Activators

Hangfire has another concept called an activator. Activators, like filters, may be set per job or registered globally. They give access to the job's context, parameters, invoked method, and the DI scope used for the job. We can use the activator to pull values out of the job's parameters and do something with them. Below is an example where a job parameter is pulled from the current job and used to set a property on a scoped service.

/// <summary>
/// Handles DI activation for Hangfire jobs.
/// </summary>
public class ExampleJobActivator : AspNetCoreJobActivator {
  public ExampleJobActivator([NotNull] IServiceScopeFactory serviceScopeFactory)
      : base(serviceScopeFactory) {}

  public override JobActivatorScope BeginScope(JobActivatorContext context) {
    var scope = base.BeginScope(context);

    // Pull the value the client filter stored with the job...
    var user = context.GetJobParameter<string>("User");

    // ...and push it into a scoped service for the job to consume.
    var userProvider = (IUserProvider)scope.Resolve(typeof(IUserProvider));
    userProvider.CurrentUser = user;

    return scope;
  }
}

Here you can see we are overriding AspNetCoreJobActivator. If you aren’t using .NET Core, you may need to override JobActivator instead. If that is the case, you may also need to override JobActivatorScope.

Configuring Hangfire

In ASP.NET Core, Hangfire is configured with a set of extension methods that dangle off IServiceCollection. This familiar builder pattern makes setup simple. I was unable to find an example that showed how to get access to the DI container. Thankfully, it’s simple: the AddHangfire() method has an overload whose callback includes a provider parameter. This gives access to the DI container, allowing IServiceScopeFactory to be injected into the filter and activator instances.

/// <summary>
/// Hangfire Service Config
/// </summary>
/// <param name="services"></param>
public void ConfigureHangfireServices(IServiceCollection services) {
  services.AddHangfire(
      (provider, config) => {
        var scopeFactory = provider.GetRequiredService<IServiceScopeFactory>();
        config
            .UseFilter(new ClientExampleFilter(scopeFactory))
            .UseActivator(new ExampleJobActivator(scopeFactory))
            .UseSqlServerStorage(
                Configuration.GetConnectionString("HangfireDb"),
                new SqlServerStorageOptions {
                    CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
                    UseRecommendedIsolationLevel = true,
                    DisableGlobalLocks = true });
      });

  services.AddHangfireServer();
}

This example is basic. Give it some thought and see if carrying some context data through to Hangfire could improve your codebase.

UnitsNet is a popular framework for unit of measure conversion in applications. One nice aspect of the framework is that it handles unit serialization to JSON. The documentation gives an example of a unit of measure that’s been serialized to JSON.

{ "weight": { "unit": "MassUnit.Kilogram", "value": 90.0 } }

Thus, an array of values would be represented as the following.

{ "weights": [ { "unit": "MassUnit.Kilogram", "value": 90.0 }, { "unit": "MassUnit.Kilogram", "value": 90.1 }, { "unit": "MassUnit.Kilogram", "value": 90.2 } ] }

For most applications this will work well. However, in scenarios where you need to send thousands of unit values in a JSON response, the default serialization is verbose. Looking around for a solution, I came across the idea of serializing the quantity and unit of measure as a single string. Here is what that looks like for the same scenarios shown above.

{ "weight": "90.0|kg" }

Here is the same array of values in the simplified representation.

{ "weights": ["90.0|kg","90.1|kg","90.2|kg"] }

It is easy to see the savings in the size of the JSON going over the wire with this simplified representation. The example project is available on GitHub. The code is relatively simple, but I’ll outline a few things that may not be so obvious:

  • The serialization uses the abbreviation for a unit. If a unit has two abbreviations that are the same (think dictionary key violation), this serializer will fail.
  • Swagger UI will show the full model for a UnitsNet unit unless you override this with a custom OpenApiSchema.
  • Swagger OpenApiSchemas need to be provided per unit type.
  • This code does not handle collections of mixed unit types.

If you're using UnitsNet and need a lighter serializer, pull the project and have a look.

Working with legacy code is often challenging: lack of tests, odd formatting, old technology, and dead code. A good developer will want to improve the code through refactoring. Refactoring often involves removing or replacing curious bits of code, and this is where problems arise. In the timeless words of G.K. Chesterton:

“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ‘If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.’”

It’s good to approach refactoring legacy code with a little humility. It’s likely that the software has been functioning as desired for over a decade. While the cleanliness of the code may be in question, its purpose is sound. Be careful when pulling bits of code that appear to have no execution path. These bits need diligent examination before removal.

One approach for refactoring legacy code is to add tests before refactoring. This will provide confidence that no critical bits got removed from the code. Writing good tests over legacy code involves a level of deep-thinking. By the time you have tests that cover the existing functionality, you’ll also have an idea of how to write a cleaner version. The other benefit is that you’ll be able to reuse your test cases for the new code you write.
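As a rough illustration, a characterization test can be as simple as the following; the class, method, and expected value are hypothetical, and the expected value is simply whatever the legacy code returns today:

public class LegacyPricingTests {
  [Fact]
  public void Calculate_MatchesTodaysBehavior() {
    var calculator = new LegacyPriceCalculator();

    // The expected value was captured by running the existing code,
    // not derived from a spec. If a refactor changes it, we want to know.
    var result = calculator.Calculate(quantity: 5, unitPrice: 19.99m);

    Assert.Equal(99.95m, result);
  }
}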

If it is impossible to write tests for the code, be rigorous. Take the time needed to understand what’s going on. Read the code carefully and see if the comments appear relevant. Run the application and exercise the functionality in the bits that need refactoring. Use the debugger and breakpoints; often there is no substitute for stepping through odd bits of code. Seek out anyone in the organization who may know something about the functionality. Determine the use cases and write your new code with tests that fulfil the use cases. As you discover new use cases, add a representative test case.

There is an immense joy that comes from deleting nasty bits of code. Be sure you know its intent, or there could be a long path of suffering ahead.

Open Source Software (OSS) is pervasive in today's technology stacks. Most companies take advantage of OSS packages at some level in their architecture. Some OSS comes packaged with commercial support, like Elasticsearch. Other OSS projects are small libraries hosted on GitHub and delivered through a package manager like NuGet or npm. Either way, most OSS projects are licensed to allow reuse of code. A book could be written on the assorted flavors of OSS licenses; it would be very dry, perfect bedtime reading for lawyers. You can get a taste of this by looking at the licenses listed at the Open Source Initiative. OSS is by its very nature flexible, and that's great.

When OSS licensing makes the news, the story usually goes something like this. A large commercial software company has licensed some code as an OSS product. Another commercial company uses that code and breaks the terms of the license agreement. Corporate lawyers start salivating and a legal war breaks out.

I've got a different story to relate. No legal lines are crossed in this story. In general, this is more a matter of principles.

A few years back I was using proprietary software from ESRI. A reverse proxy was needed on the server for authenticating client applications. At the time, ESRI provided a proxy for the full .NET Framework and Java. Our project used .NET Core, which was entirely unsupported. Several users had asked ESRI on GitHub to create a new proxy for .NET Core.

The pain and complexity of all this is detailed in the GitHub threads below.

https://github.com/Esri/resource-proxy/issues/444
https://github.com/Esri/resource-proxy/issues/465
https://github.com/Esri/resource-proxy/issues/464

The TL;DR version of the above events: the old proxy code was garbage, and ESRI kept asking for PRs to fix it. Support for .NET Core was not in the ESRI roadmap. Not at this time. Sorry, you are on your own.

Finally, I cracked and wrote my own proxy that would plug into .NET Core. I put it out on GitHub and shared the link. The masses rejoiced. Or at least the 20 other people stuck using ESRI tech with .NET Core did. I wrote a blog post about it at the time.

This is a cautionary tale. Thousands have benefited from the code. Considering the implications of all this, it seems that ESRI benefited above all. ESRI is a company making over a billion dollars a year, and OSS code bailed them out. Their customers continue to pay huge licensing fees. Meanwhile, the OSS developers plugging the holes in crummy commercial software aren't getting paid.

GitHub Sponsors aims to address the issue of OSS developer compensation. For something like this to work, it may require a shift in the mindset of the OSS community. Code isn't free; someone's time and energy is being consumed to solve issues. Some are fortunate enough that an employer will pick up time spent on OSS projects; others aren't.

This experience has influenced how I think about OSS code. I'm much less interested in contributing code for commercial products with OSS tentacles.

At my day job, we are revamping our document generation processes. We need to generate complex business documents containing lots of text. One of the requirements was to allow a power user to add tokens that get replaced with data. In C# we do this with string interpolation. Formatted strings are created using a special syntax, {variable}:

string thing1 = "Thing 1";
string thing2 = "Thing 2";
var result = $"{thing1} and {thing2}";

In the above example the text inside the {} is replaced with a variable value, resulting in the text “Thing 1 and Thing 2”.

What we needed was similar functionality, but at the level of an object. Each object property of a string data type could contain a templated string. The templated strings would then get filled with values from other object properties.

Why? If you are still having a hard time imagining why you'd want or need to do this, read the following.

StackOverflow: C# Reflection: replace all occurrence of property with value in text

Now, imagine you store your data in a database and serialize it to JSON. You hydrate a C# object from the JSON and end up with an object that looks like this.

public class Contract {
    public string Customer { get; set; }
    public DateTime PurchaseDate { get; set; }
    public string Product { get; set; }
    public string Quantity { get; set; }
    public string FinePrint { get; set; }
}

Your contract will always contain a section of fine print. The fine print needs the values from several fields in your object. This would be an easy thing to solve with string interpolation.

$"This contract is between {Customer} and business X. Your purchase of {Quantity} – {Product} shall be delivered 20 years from the date of purchase – sucker"

Let's expand on this hypothetical. Now imagine that your object looks like this.

public class Contract {
    public CustomerModel Customer { get; set; }
    public DateTime PurchaseDate { get; set; }
    public ProductModel Product { get; set; }
    public PriceModel Price { get; set; }
    public QuantityModel Quantity { get; set; }
    public string FinePrint { get; set; }
    public List<string> Terms { get; set; }
}

Notice that we now have nested objects inside the Contract. Imagine that you don't control the fine print, the terms or anything else. You don't know which of these fields will need a value from any other. The users of your software want to control what data gets included in the fine print.

With ObjectTextTokens we can give users control of text templating. All that's required is for them to know the object property structure and a simple syntax. For templating, we'll replace text between @ symbols. For object and property access, we'll dot into things: @object.property@.

“This contract is between @customer.name@ and business X. Your purchase of @quantity.total@ – @product.name@ shall be delivered right away. The price at time of delivery will not exceed the agreed upon price of @price.total@”

Originally, I solved this issue with a tiny chunk of JavaScript on the client. Later I realized we needed values from several calculated fields server side. It's a pretty easy problem when you don't have to worry about types.

export function tokenator(object: Object) {
    return objectTokenatorIterator(object, object);
}

function objectTokenatorIterator(inputObj: Object, lookupObj: Object) {
    if (!inputObj) {
        return;
    }
    Object.keys(inputObj).forEach((k) => {
        if (typeof inputObj[k] === "object") {
            return objectTokenatorIterator(inputObj[k], inputObj);
        } else {
            let fieldContents = inputObj[k] as string;
            let matchedTokens = fieldContents.toString().match(/(@\w*@|@\w*\.\w.*@)/g);
            if (matchedTokens && matchedTokens.length > 0) {
                matchedTokens.forEach(t => {
                    let fieldPath = t.replace("@", "").replace("@", "");
                    let content = getNestedObjProperty(fieldPath, lookupObj);
                    fieldContents = fieldContents.replace(t, content);
                });
                inputObj[k] = fieldContents;
            }
        }
    });
    return inputObj;
}

function getNestedObjProperty(path: string, obj: Object) {
    return path.split(".").reduce((prev, curr) => (prev ? prev[curr] : null), obj);
}

Moving this code over to the server side provided a bit more of a challenge; a rough sketch of the idea is below. If this sounds like what you've been looking for, head on over to GitHub or download the NuGet package and check it out.
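This reflection-based sketch shows the concept only; it is not the ObjectTextTokens implementation, and it skips indexers and collection items:

using System;
using System.Reflection;
using System.Text.RegularExpressions;

public static class Tokenator {
  private static readonly Regex TokenPattern = new Regex(@"@([\w\.]+)@");

  public static void Fill(object root) => Fill(root, root);

  private static void Fill(object current, object root) {
    foreach (var prop in current.GetType().GetProperties()) {
      if (prop.GetIndexParameters().Length != 0) continue;   // skip indexers

      if (prop.PropertyType == typeof(string)) {
        var text = (string)prop.GetValue(current);
        if (text == null) continue;
        // Replace each @object.property@ token with the resolved value;
        // unresolved tokens are left in place.
        var replaced = TokenPattern.Replace(text,
            m => Resolve(m.Groups[1].Value, root) ?? m.Value);
        prop.SetValue(current, replaced);
      } else if (prop.PropertyType.IsClass) {
        var child = prop.GetValue(current);
        if (child != null) Fill(child, root);
      }
    }
  }

  // Walks a path like "customer.name" one segment at a time, case-insensitively.
  private static string Resolve(string path, object obj) {
    foreach (var segment in path.Split('.')) {
      var prop = obj?.GetType().GetProperty(segment,
          BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
      obj = prop?.GetValue(obj);
    }
    return obj?.ToString();
  }
}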

For a few years I've been refactoring a database-centric application. The codebase is very large. The core application is ASP.NET Web Forms, originally written in a time when all you needed for an app was third-party user controls and a database. Our team has broken four microservices and a new UI project out from the original ASP.NET app. This is where today's story begins.

If you work on a monolithic application with one project/solution file, you can open your project and get to work. You don't have to think about what projects you need to have running for the application to work. Moving to microservices solves some issues and it creates a few. Debugging becomes a pain. How do you best debug microservices? Searching online, I've pulled up some Stack Overflow posts asking this same question.

Debug multiple MicroServices in Visual Studio
Local Development Experience when Working with MicroServices

A list of conclusions:

  • Use Docker
  • Write unit tests and integration tests
  • Log everything, then read the log files

If all our projects lived in .NET Core, Docker might have worked for us. The core application being Web Forms requires a large Windows Server Docker image, and the only way to run the app in that container is with a full IIS instance. Debugging requires manually attaching to the running process each time. Not a very slick process. Docker Compose looks great, so once we have everything moved to .NET Core, that might be a better solution.

We write unit tests and integration tests, but sometimes you need to connect some UI and debug some code. Log files are great, but they don't replace the debugger.

If the codebase you are working on is well architected, you may be able to debug microservices in isolation. Our codebase has enough seams that we don't need to run all the microservices at once. Yet we can rarely do a debug session with only one running.

The obvious answer is to open all the projects in Visual Studio and start debugging. How good are the specs on your dev machine? One Visual Studio instance can take over a gig of RAM. The difference between starting one of our projects in Visual Studio vs. dotnet run was 1.3 GB vs. 450 MB.

For our situation a hybrid approach works best: some way to select which code to debug in Visual Studio, and which to run in the background to support debugging.

Our team created a small console application to launch our projects. The first few iterations were cumbersome, but with a few tweaks it grew into a decent yet simple solution. If you're interested, it's out on GitHub; the core idea is sketched below. The launcher is targeted at Visual Studio, but the dev environment is configurable. I haven't tried it, but I'm guessing configuring it for Visual Studio Code should work as well.
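A trimmed-down sketch of the idea; the project paths are hypothetical, and the real launcher on GitHub is configurable:

using System.Diagnostics;

// Start the supporting services with `dotnet run` so only the project
// under debug needs a full Visual Studio instance.
var backgroundProjects = new[] {
  @"..\src\Orders.Api",        // hypothetical project paths
  @"..\src\Inventory.Api",
};

foreach (var project in backgroundProjects) {
  Process.Start(new ProcessStartInfo {
    FileName = "dotnet",
    Arguments = "run",
    WorkingDirectory = project,
    UseShellExecute = true,    // give each service its own console window
  });
}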

For now this is meeting our needs. As we continue to refactor we'll continue to evaluate how this is working. If you're in a similar boat with debugging microservices, give it a try. Or if you have a better solution leave a comment and clue me in.

Over the last few years, low code software development has risen in prominence. Forbes published an article detailing the virtues of low code systems. Investors have poured major money into low code platforms. Large corporations are embracing low code systems. CTOs and managers are adding “Low Code” to their buzzword vocabulary lists, right next to terms like “Agile” and “Digital Enablement”. It's all the rage, but does it live up to the hype?

Let’s start with the basic forensic principle of following the money. Who stands to gain the most from low code development? The purveyors of low code development systems. If you want to see how many different systems are available, go do a Google search; I don’t want to give the charlatans free advertising. How much does a company pushing a low code platform care about your bottom line? Are they invested in the success of your company?

Let’s consider two companies wanting to buy low code systems.

(A) A company with an existing development team that wants faster project turnaround. Development teams are expensive, and they see this as a way to lower cost and go fast. Perhaps fewer developers using low code systems.

(B) A company without internal development capability. They see that their business needs custom software. They don't want to hire developers because of the cost. They want IT and all the other techy people in the organization to build the applications.

Technical sales will market low code systems to the upper echelon of an organization. This is a smart move on the part of salespeople: target the uninformed. However, remember this.

The opinion of 10,000 men is of no value if none of them know anything about the subject. -Marcus Aurelius

Low code systems aren’t of much interest to skilled professional developers. Unless of course you're a developer working for a company that sells a low code platform. Let’s take a look at how these systems are marketed.

Low code systems marketing points:

  • Faster development
  • Citizen developers
  • Lower cost
  • Reusability
  • Drag and Drop Coding
  • Extensible

Let’s now look a little closer at the bullet points keeping company (A) and (B) in mind.

Faster development: We live in a time when applications update rapidly and continuously. Many things have enabled software development to move at this pace. I’ll draw your attention to just a few.

  • CI/CD Pipelines
  • Package Managers (NuGet, NPM, PIP) etc.
  • Agile Development Processes
  • Unit/Integration/UI testing
  • Incredible IDEs

Regardless of the size of your development team, they can take advantage of any of the above technologies to increase their velocity. If your development team isn’t moving fast enough, perhaps they need training. Or you need to replace a few developers.

Citizen developers: This term means redirecting productive employees to coding. They will need to spend time learning how to code in a low code environment. They may not be typing code, but they’ll still have to learn many programming concepts. There is a hitch, though: not everyone can code. But a coder can come from anywhere.

‘In the past, I have made no secret of my disdain for Chef Gusteau's famous motto: Anyone can cook. But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere.’ – Anton Ego – Ratatouille

A company is rolling the dice when it comes to citizen developers: you don't know what you're getting. There are a few other issues too. How do you keep your citizen developers from deleting every record in the database? Eventually a company will realize that citizen development isn't working. At that point they'll start looking for a low code contractor. Yes, buy a low code system, then go look for the very small pool of developers who specialize in that system. Scarcity will drive price.

One of the traits that makes a good developer valuable is that they are looking for the best ways to solve a problem. Many solutions aren’t technical, but rather business process. Often the collaboration of a subject matter expert and developer yields great results. The type of thinking required to make good software can also be used to improve other areas of the business. Do citizen developers have these skills? Maybe?

Lower Cost: The immediate question is: lower cost over what timespan? Lower initial cost? Perhaps, but what about the long-term cost? Once code gets locked into a proprietary system, you have other things to think about. What if the low code platform provider goes out of business? What is the initial cost? What about yearly service fees? What's the cost of finding that small pool of specialized contractors who know how to use the low code system? Remember the old adage: you get what you pay for.

Low code systems have more layers of abstraction, which can result in slower code. These may be small inefficiencies, but if you’re a large business, slow apps burn time, and time is money.

Reusable Code: I’m not sure why this is even touted as a feature. Pretty much all coding paradigms build on reusability.

Drag and Drop Coding: This is an interesting feature. Drag and drop interfaces are being used to teach children the fundamentals of logic. That is a great use case, and as of right now, the only valid one.

Extensibility: Not everything will fit into the well-defined box of low code widgets. When you can't build what you need to in your low code environment, you have to write code. This is where things get ugly. You have to write code that conforms to the interfaces of the low code platform. Hopefully you can do what you need to without jumping through dozens of layers. Most low code developers will likely be pretty lost at this point. Get out your checkbook again.

Summary

Let’s get back to the two companies that I mentioned earlier. What will happen if company (A) buys a low code environment? It’s likely that any decent developer will leave; development resumes don’t thrive on low code. The developers left at the company (the ones who made things slow) will continue to go slow with low code. Thus company (A) will have achieved one of its goals: fewer developers.

If company (B) purchases a low code system, they’ll soon find themselves dealing with an uprising in IT. To combat the uprising, they’ll be forced into finding contractors to code things for them. They’ll have the disadvantage of locked-in technologies. Their dreams will never be realized.

Alternative to Low Code: The alternative to low code systems is to embrace the development process. If you can, hire internal developers. If that’s not an option, look to any number of reputable code consulting companies.

A company’s problem solving ability may well define its future value. If you are looking for some stocks to short, look on the web at any low code provider’s testimonials section. That’s a good list to bet against.

Recently I loaded a NuGet package into my Visual Studio project, only to be greeted with the following error.

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first

A little searching yielded a post detailing PowerShell security over on Hanselman's blog. This post got me headed in the right direction to solve the issue.

The NuGet package I tried to install ran an unsigned PowerShell script that adds a few code files to the project. Note that before you do the following, you need to trust the author of the script. Understand the implications of lowering the PowerShell security model. If you are comfortable that the script being run is not malicious, proceed.

  • Close Visual Studio
  • Open PowerShell as an administrator
  • Run Set-ExecutionPolicy Unrestricted
  • Open Visual Studio
  • Attempt the NuGet install again
  • Back in the same PowerShell window, run Set-ExecutionPolicy AllSigned

Note: Do not leave the execution policy unrestricted! Also, you will need to leave PowerShell running while you do this, or you'll get the same error.

Learning to code can feel overwhelming. In this post I'll outline a few strategies for success. If you have ever learned to play a musical instrument, you understand the need to practice. Coding is the same way: you will learn best by writing code.

I often need to learn new things. My favorite technique is immersion. Jump in, and work on small goals. Overcoming small obstacles gives me motivation to keep learning. You may find it works well for you too. Let's look at an example of how that works.

To learn a new programming language (Python, for example), I first pick a goal: something easy enough to understand, yet complex enough to drive curiosity. Often I choose to write a simple command line game, such as Hangman. Do not choose complex goals while learning; if you do, your mental energy will be wasted thinking through the details of how to reach the goal. Keep it simple.

Once you have your goal, break it down into small parts and focus on one part at a time. Write down a list of what you need to make a Hangman game.

Hangman Component List

  • Draw a little stick person a part at a time
  • Word list to randomly choose from
  • Get user input
  • Display output to the user
  • Limit user letter guesses to the number of stick person parts
  • Winning: user guesses the word before using all guesses
  • Losing: more letter guesses than the number of parts in the stick person

At this point you can pick one little thing to focus on. Perhaps start by figuring out how to code up a list of words. Next figure out how to make a function to randomly return one word. Then look into how to get input from a user.

As you work on each part of your goal, you will be forced to research things:

  • How do you make a list of words in Python?
  • How do you randomly select an item from a list in Python?
  • How do you get user input from the command line?

This method focuses mental energy on accomplishing small wins. As you research, jot down things you are curious about, then go back and read more on them. If you are learning something entirely new, this will quickly become a large list. Focus on the small, easy wins. How do you eat an elephant? One piece at a time!

Focusing on small steps helps to avoid becoming overwhelmed. Researching and reading on topics you don't understand will quickly build your knowledge. Wisdom will come as you continue to write code. Get started, then stick with it!
