Using Docker in Development

Over the last few years, our team at work has been busy breaking our monolithic application into microservices. We had entertained the thought of using Docker in development before but, for the reasons outlined in a previous post, chose instead to launch processes as needed and debug project by project. A few months back, I re-evaluated and decided that the bottlenecks to productivity were holding us back. We had reached a point where debugging through the various microservices often required 3 or 4 instances of Visual Studio, and we had to manually determine which microservices to run for each session. While this approach kept the memory footprint low, it was tedious. On top of that, our local machines carried installations of RabbitMQ, Elasticsearch, Graylog, etc. that needed to be maintained. All these little friction points added up, and so it was time to try Docker again.

Visual Studio has built-in integrations with Docker, and setting up a single Visual Studio project or solution with Docker and Docker Compose is relatively simple. Pulling multiple projects into Docker Compose, however, requires a bit more planning. For Docker Compose to work in Visual Studio, all the projects and their Dockerfiles must reside in one solution. This was the first challenge, as our applications live one per repository: each application has its own Visual Studio solution (.sln) file containing multiple sub-projects, and its own Git repository. The simplest way to make this work was to create a new Docker Compose solution that referenced all the application microservice projects.

The Setup

To illustrate, here is what the folder structure of our Git repositories looks like on disk:

C:\Repos
├── AMS.AxIntegration
├── AMS.Companies
├── AMS.Contracts
├── AMS.Documents
└── AMS.Shipments

To this structure, we added a new repository: AMS.Docker. Within the AMS.Docker folder, there is one Visual Studio solution that references all the existing Visual Studio projects. Each .NET Core project has its own Dockerfile; Visual Studio provides integration to add the file and fill it with boilerplate that will run your application in a container.
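The aggregating solution can also be created from the command line with the dotnet CLI. This is just a sketch; the project paths mirror the layout the compose file uses, and you can equally add the projects through the Visual Studio UI:

```shell
# From within C:\Repos, create the aggregating repository and solution,
# then add each microservice's API project to it.
mkdir AMS.Docker && cd AMS.Docker
dotnet new sln -n AMS.Docker
dotnet sln AMS.Docker.sln add ../AMS.Companies/source/AMS.Companies.API/AMS.Companies.API.csproj
dotnet sln AMS.Docker.sln add ../AMS.Contracts/source/AMS.Contracts.API/AMS.Contracts.API.csproj
# ...repeat for the remaining microservice projects.
```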

Visual Studio adds some properties to the project file that define how it interacts with Docker. In debug mode, on the first run of the application, Docker pulls the base image and builds the container using the Dockerfile defined in the project. However, be aware that Docker picks up settings from launchSettings.json: the ports specified there are where the app launches. Visual Studio also debugs containers in Fast Mode: the image is built fully one time and then cached, and the project's build output is mounted into the container so that debugging changes are reflected immediately without a rebuild of the Docker image. Depending on the complexity of the application being containerized, it's possible the application won't work correctly without some modifications to the Dockerfile.
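For reference, the Docker launch profile that Visual Studio adds to Properties/launchSettings.json looks roughly like this (the values below are illustrative, not ours); the port settings in this profile determine where the containerized app is exposed:

```json
{
  "profiles": {
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
      "httpPort": 5005,
      "useSSL": false,
      "publishAllPorts": true
    }
  }
}
```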

Here is a typical Dockerfile for one of our APIs:

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Add IP & network tools
RUN apt-get update && apt-get install -y \
    iputils-ping \
    net-tools \
    dnsutils

# Build stage
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["AMS.Companies.API/AMS.Companies.API.csproj", "AMS.Companies.API/"]
COPY ["AMS.Companies.Domain/AMS.Companies.Domain.csproj", "AMS.Companies.Domain/"]
COPY ["AMS.Companies.Infrastructure/AMS.Companies.Infrastructure.csproj", "AMS.Companies.Infrastructure/"]
RUN dotnet restore "AMS.Companies.API/AMS.Companies.API.csproj"
COPY . .
WORKDIR "/src/AMS.Companies.API"
RUN dotnet build "AMS.Companies.API.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "AMS.Companies.API.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AMS.Companies.API.dll"]

Docker Compose

Docker Compose is a tool that lets you define and manage multi-container applications declaratively: an entire environment, including all the containers, networking, and third-party dependencies, is described in a single configuration file and launched together. Before setting up Docker Compose, make sure each Dockerfile builds and runs its application in isolation.
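A quick way to verify a single Dockerfile in isolation (the image name, port, and paths here are illustrative):

```shell
# Build the image from the application's source directory,
# pointing at the API project's Dockerfile:
docker build -t ams-companies-api -f AMS.Companies.API/Dockerfile .

# Run it, mapping the container's port 80 to a local port, and check
# that the app responds before wiring it into Compose:
docker run --rm -p 5005:80 ams-companies-api
```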

Here is the Docker Compose file that ties together the microservices that make up our application:

version: '3.4'
name: ams

services:

  ams-ax-integration:
    image: ams-ax-integration-api
    container_name: ams-ax-integration
    build:
      context: .
      dockerfile: ../AMS.AxIntegration/source/AMS.AxIntegration.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5004:80"
    networks:
      - default

  ams-companies:
    image: ams-companies-api
    container_name: ams-companies
    build:
      context: .
      dockerfile: ../AMS.Companies/source/AMS.Companies.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5005:80"
    networks:
      - default

  ams-contracts:
    image: ams-contracts-api
    container_name: ams-contracts
    build:
      context: .
      dockerfile: ../AMS.Contracts/source/AMS.Contracts.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      # NOTE: This env override isn't working yet (TODO: figure out why); for now the value also lives in appsettings.
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5006:80"
    networks:
      - default

  ams-documents:
    image: ams-documents-api
    container_name: ams-documents
    build:
      context: .
      dockerfile: ../AMS.Documents/source/AMS.Documents.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5007:80"
    networks:
      - default
 
  ams-shipments:
    image: ams-shipments-api
    container_name: ams-shipments
    build:
      context: .
      dockerfile: ../AMS.Shipments/source/AMS.Shipments.API/Dockerfile
      # Pull these args in from .env file https://towardsdatascience.com/a-complete-guide-to-using-environment-variables-and-files-with-docker-and-compose-4549c21dc6af
      # You will need to add a .env file locally to pull in the Azure Feed and Nuget PAT variables
      args:
        AZURE_FEED: ${AZURE_FEED}
        NUGET_PAT: ${NUGET_PAT}
    environment:
      - ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT}
      - RabbitMq__Host=rabbitmq://rabbitmq
      - ConnectionStrings__Hangfire=Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;
    depends_on:
      - ams-sqlserver
      - ams-elastic
      - ams-rabbitmq
      - ams-seq
    ports:
      - "5008:80"
    networks:
      - default

  ams-sqlserver:
    image: mcr.microsoft.com/mssql/server
    container_name: ams-sqlserver
    restart: always
    ports:
      - "1435:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Sql0nLinux?!
    volumes:
      - sql-data:/var/opt/mssql
    networks:
      - default

  ams-elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.1
    container_name: ams-elastic
    environment:
      - node.name=es01
      # No cluster here (just 1 ES instance)
      #- cluster.name=es-docker-cluster
      #- discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - default

  ams-rabbitmq:
    image: rabbitmq:3-management
    container_name: ams-rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - "5672:5672"      # AMQP port
      - "15673:15672"    # Management plugin port
    networks:
      - default

  # Elasticsearch - For Graylog
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
     
  # MongoDB - For Graylog
  mongodb:
    image: mongo:4.4
    container_name: mongodb
    volumes:
      - mongodb_data:/data/db
    networks:
      - default
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.runCommand('ping')"]  # The health check command
      interval: 30s  # The interval between health checks
      timeout: 10s  # The timeout for each health check
      retries: 5 

  # Graylog
  graylog:
    image: graylog/graylog:4.0
    container_name: graylog
    environment:
      # The Graylog image expects a password secret (pepper, at least 16 characters)
      # and the SHA-256 hash of the admin password ("admin" below).
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://localhost:9000/
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      mongodb:
        condition: service_healthy
      elasticsearch:
        condition: service_healthy
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    volumes:
      - graylog_data:/usr/share/graylog/data
    networks:
      - default

  ams-seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "5341:80"
      
# On Ubuntu 20.04, container networking did not work out of the box.
# The following driver options fix things so containers can reach our local network.
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.default_bridge: "true"
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
      com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
      com.docker.network.bridge.name: "docker0"
      com.docker.network.driver.mtu: "1500"

volumes:
  data01:
    driver: local
  sql-data:
  mongodb_data:
  graylog_data:
  elasticsearch_data:
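The build args above come from a .env file, which Docker Compose reads automatically from the directory containing the compose file. As a sketch (the values below are placeholders, not real credentials), the file and a quick shell check of the substitution look like this:

```shell
# Create a .env file next to docker-compose.yml; Compose substitutes
# ${AZURE_FEED}, ${NUGET_PAT}, and ${ASPNETCORE_ENVIRONMENT} from it.
cat > .env <<'EOF'
ASPNETCORE_ENVIRONMENT=Development
AZURE_FEED=https://pkgs.dev.azure.com/example-org/_packaging/example-feed/nuget/v3/index.json
NUGET_PAT=placeholder-personal-access-token
EOF

# Shells do not read .env automatically; to sanity-check the values,
# export them and echo one back:
set -a
. ./.env
set +a
echo "$ASPNETCORE_ENVIRONMENT"   # prints: Development
```

With the file in place, `docker compose config` prints the fully substituted compose file, which is a handy way to verify the values before building.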

From within Visual Studio, this Docker Compose project can be booted like any other VS project. With VS handling the mounting of newly compiled code into the containers, things stay relatively responsive. One drawback is that there is currently no support for 'Edit and Continue' within a debug session, but that is a small trade-off when considering the big picture.
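One note on the environment overrides used in the compose file: ASP.NET Core's configuration system maps a double underscore in an environment variable name to the `:` hierarchy separator, so `ConnectionStrings__Hangfire` overrides the `ConnectionStrings:Hangfire` value that would otherwise come from appsettings.json, roughly equivalent to this fragment:

```json
{
  "ConnectionStrings": {
    "Hangfire": "Server=ams-sqlserver;Database=Hangfire;User ID=sa;Password=Sql0nLinux?!;Encrypt=False;"
  }
}
```

For the override to take effect, the environment-variable configuration provider must be registered after the JSON providers, which is the default behavior of `WebApplication.CreateBuilder` and `Host.CreateDefaultBuilder`.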

Overall, this setup has been much easier to manage as a developer. It simplifies the setup of a new development environment and helps eliminate inconsistencies in local environments. If you haven't tried setting up Docker for development, you might be surprised at how effective it can be.