Debugging Microservices with Docker Compose and HAProxy
When working on a system recently, we had a problem: depending on the area we were working in, we had to remember to run all the different components it needed. Running one service could mean also running our authentication service, a MongoDB instance, and another service or two.
That’s a lot to remember, and a lot that I don’t really care about - I only want to work on one of those.
To get around this problem I looked into making use of Docker Compose and HAProxy. The idea being that we spin up one Docker Compose which includes all components, the HAProxy container can then be used to route requests to a local debug instance if it’s running, or fall back to a container if it isn’t.
This might not suit everyone, but given all our services were already containerized, it seemed like a good fit for us. That said, if you know of another (better?) way of getting around this problem, please let me know!
As always, the sample can be found on GitHub.
For this I’ve already set up a basic Website and API in .NET Core - the implementation of these doesn’t matter for this post.
What does matter is that each project has a Dockerfile exposing it on port 80, which looks like this:
FROM microsoft/aspnetcore:2.0.0
WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MicroWeb.dll"]
For the first step, I’m creating a Docker Compose file that will build our two components (if you have something published, you can pull the image instead), pass in a base address for the API as an environment variable, and expose the previously mentioned port 80.
version: '3.3'
services:
  web:
    build: ./MicroWeb
    expose:
      - "80"
    environment:
      - baseAddress=http://docker.for.mac.localhost:8020/
  api:
    build: ./MicroApi
    expose:
      - "80"
In this example you can see the base address for the API is being passed in as docker.for.mac.localhost:8020 - obviously this is Mac specific. Annoyingly, at the time of writing, Docker provides different DNS names for Windows and Mac, so use docker.for.win.localhost in its place if you’re on Windows.
This is frustrating because it means you have to maintain two copies of both the Docker Compose and HAProxy config if you want this to be available on multiple environments. From what I’ve found, no equivalent has been provided for Linux - though I could be wrong.
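(Worth noting: newer Docker releases have since added a cross-platform name, host.docker.internal, which works on both Mac and Windows out of the box, and on recent Linux versions can be mapped via extra_hosts - this wasn’t available when this was set up, so treat it as a sketch:)

```yaml
services:
  web:
    build: ./MicroWeb
    # host.docker.internal resolves to the host on Docker for Mac/Windows (18.03+);
    # this extra_hosts entry maps it on Linux too (Docker 20.10+).
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - baseAddress=http://host.docker.internal:8020/
```

With that, a single compose file and HAProxy config could serve all three platforms.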
Now we need to add HAProxy - we can pull the image for this one. We’ll link the API and Web containers for use in the config, and we’ll bind to some ports to use - I’m using 9200 and 9201.
I’m going to use 9200 for the Web tier of my application, and 9201 for the API tier.
The last bit is to add the config file as a volume - we’ll create this next. Our Docker Compose now looks like this:
version: '3.3'
services:
  haproxy:
    image: haproxy
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    links:
      - web
      - api
    ports:
      - "9200:9200"
      - "9201:9201"
  web:
    build: ./MicroWeb
    expose:
      - "80"
    environment:
      - baseAddress=http://docker.for.mac.localhost:9201/
  api:
    build: ./MicroApi
    expose:
      - "80"
I’ve changed the API base address to match our plans for the HAProxy config, and as you can see we’re looking for a config file in the same folder as the compose file, so let’s create that and get things set up.
The sections of the config we’re interested in for this post are the frontend and backend sections.
We want a frontend for each layer of the application - one for the web, one for the API, and so on - and a backend for each individual service.
The frontend defines rules for routing to a backend based on the URL. So if I had two APIs - values and users, for example - I could have addresses beginning /values routed to the values service, and /users to the users service.
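As a sketch, that hypothetical values/users split might look like this (the backend names here are invented for illustration):

```
frontend api_frontend
  bind *:9201
  acl is_values path_beg -i /values
  acl is_users  path_beg -i /users
  use_backend values_backend if is_values
  use_backend users_backend  if is_users
```

Each acl matches on how the URL path begins, and the matching request is handed to the corresponding backend.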
So for our example, I’ve set up two frontends, each binding to one of the ports mentioned above and providing URL-based routing to our services.
acl defines a rule and gives it a name. I’m using path_beg to base the rules on the start of the URL path.
use_backend defines which backend to use when a named rule matches.
frontend web_frontend
  bind *:9200
  acl has_test_uri path_beg -i /webtest
  use_backend web_backend if has_test_uri

frontend api_frontend
  bind *:9201
  acl has_test_uri path_beg -i /apitest
  use_backend api_backend if has_test_uri
We also have two backends, one for the Web service and one for the API service. These tell the proxy where to look for a server that can handle the frontend requests.
The first server entry will be our local dev instance (using the previously mentioned Docker DNS).
The second will be the container for that service, along with backup to indicate that it should only be used as a fallback if the local dev instance is unavailable.
backend web_backend
  server server-local docker.for.mac.localhost:5020 check
  server server-docker web:80 check backup

backend api_backend
  server server-local docker.for.mac.localhost:8020 check
  server server-docker api:80 check backup
When you add more components to your application, you simply add more backends for them and add them to the appropriate frontend - or to a new frontend section if you’re adding a new layer, such as if you wanted MongoDB to run this way as well.
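For example, adding a hypothetical orders service (the name and local port 8030 are made up here for illustration) would mean one new acl line in the API frontend plus a new backend:

```
# added to api_frontend
acl has_orders_uri path_beg -i /orders
use_backend orders_backend if has_orders_uri

backend orders_backend
  server server-local docker.for.mac.localhost:8030 check
  server server-docker orders:80 check backup
```

The same local-first, container-as-backup pattern carries over unchanged.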
Now we have the Docker Compose and HAProxy config all set up, we can give it a try.
Our Dockerfiles are set to copy the publish directory into the container and run from that, so in both the web and api folders, run dotnet publish -c Release -o publish to publish the code (or use the tooling in Visual Studio).
First, navigate to the folder containing the compose file and run docker-compose build to build the containers with the latest published code. Then simply run docker-compose up to spin up the containers.
Now if we navigate to localhost:9200/webtest/home we can see our page load, including the response from our API. Success!
The point of all this is to easily switch what’s being used, so if you start debugging the API application on the expected port 8020 and put a breakpoint in the API controller, you can refresh the page and see the breakpoint get hit.
Obviously the same applies to debugging the Web application on port 5020.
My current problem with this is that when switching back from debugging to the container, the first request fails before the fallback takes place, which isn’t ideal. I’m planning on looking into tuning the health checks, or something similar, to work around this in the future.
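One option that might soften this is tightening the health-check timings and letting HAProxy retry a failed connection against the other server. This is a sketch, not something I’ve settled on - the timing values are guesses:

```
backend api_backend
  # retry failed connections, and allow the retry to go to a different server
  retries 3
  option redispatch
  # check every 2s; mark down after 2 failures, up after 1 success
  server server-local docker.for.mac.localhost:8020 check inter 2s fall 2 rise 1
  server server-docker api:80 check backup
```

With option redispatch, a request whose connection to the dead local server fails can be re-sent to the backup container instead of erroring back to the client.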
While this is a basic example, you can see how this can expand to cover more services and prove useful, so hopefully it’s of use!