This is a simple guide to running Dradis + Echo in a self-hosted Docker environment.
As a quick reminder, Echo is our self-hosted, open-source, privacy-first LLM integration for Dradis: it’s aware of your entire project context and lets you define custom prompts and commands so you can build your own workflows.
Dradis + Echo image
Clone Dradis:
git clone https://github.com/dradis/dradis-ce.git
Add Echo to your Gemfile.plugins, pointing it to Echo’s repo:
cd dradis-ce/
echo "gem 'dradis-echo', github: 'dradis/dradis-echo'" \
> Gemfile.plugins
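Afterwards, Gemfile.plugins should contain this single line (note that `>` overwrites the file; use `>>` instead if you already have other plugins in there):

```ruby
# Gemfile.plugins
gem 'dradis-echo', github: 'dradis/dradis-echo'
```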
Bundle and build a custom Docker image:
bundle
docker build --platform=linux/amd64 \
  -t dradis-ce-echo:YYYYMMDD \
  -t dradis-ce-echo:latest \
  .
Ollama container
We have two options: either run the Dradis and Ollama containers independently, or use a compose.yaml to bring the full stack up. I’ll do the latter.
This is my compose.yaml:
services:
  dradis:
    image: dradis-ce-echo:latest
    container_name: app
    networks:
      - internal_backend
      - public_facing
    ports:
      - 3000:80
    volumes:
      - dradis-storage:/dradis/storage
  ollama:
    image: ollama/ollama
    networks:
      - internal_backend
      - public_facing
    volumes:
      - dradis-ollama:/root/.ollama

volumes:
  dradis-ollama:
  dradis-storage:

networks:
  public_facing:
  internal_backend:
    internal: true # Restricts external access for backend containers
The choices I’ve made:
- We’re mapping the local port 3000 to the container’s port 80.
- Putting the Dradis and Ollama data in volumes, so we can update the containers without losing it.
- Defining both an internal (internal_backend) and an external (public_facing) network.
The networking isn’t as important if you’re running the stack on your local machine, but it becomes essential if you want to deploy to the cloud.
Give it a try with:
docker compose up
You’ll see something like:
✔ Container app Created
✔ Container ce-ollama-ollama-1 Created
[...]
ollama-1 | time=2026-02-05T01:26:28.015Z level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[...]
app | * Listening on http://0.0.0.0:3000
app | Use Ctrl-C to stop
We’re cooking.
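With the stack running, you can optionally sanity-check the internal wiring. Ollama listens on port 11434 by default, and its root endpoint replies with “Ollama is running”. This sketch assumes curl is available inside the dradis image; the ollama hostname comes from the compose service name:

```shell
# From inside the dradis container, reach Ollama over the internal_backend network
docker compose exec dradis curl http://ollama:11434/
```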
Pulling the model
In order for Ollama to pull a model, the Ollama container needs to be connected to the internet. Because the data is stored in a Docker volume, we only need to do this once; exec into the running container to download the model:
docker exec -it ce-ollama-ollama-1 ollama pull deepseek-r1:latest
If you see this error:
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/deepseek-r1/manifests/latest": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: server misbehaving
Review your compose.yaml to ensure the ollama container is connected to the public_facing network.
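Alternatively, instead of editing compose.yaml and restarting, you can temporarily attach the running container to the network with docker network connect. The network name is prefixed with your compose project name (I’m assuming ce-ollama here, based on the container names above; adjust to match yours):

```shell
# Temporarily give the Ollama container internet access for the pull
docker network connect ce-ollama_public_facing ce-ollama-ollama-1
docker exec -it ce-ollama-ollama-1 ollama pull deepseek-r1:latest
# Detach it again once the download finishes
docker network disconnect ce-ollama_public_facing ce-ollama-ollama-1
```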
A successful model pull would look like this:
› docker exec -it ce-ollama-ollama-1 ollama pull deepseek-r1:latest
pulling manifest
pulling e6a7edc1a4d7: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 5.2 GB
pulling c5ad996bda6e: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 556 B
pulling 6e4c38e1172f: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 KB
pulling ed8474dc73db: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 179 B
pulling f64cd5418e4b: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
After you pull your models, update the configuration to comment out the - public_facing line in ollama’s service definition:
  ollama:
    image: ollama/ollama
    networks:
      - internal_backend
      # - public_facing
    volumes:
      - dradis-ollama:/root/.ollama
Stop the stack and run docker compose up again.
Configuring
Point your browser to http://127.0.0.1:3000/ and go through the initial setup. At the last step, choose “No, I’m a new user” to get Dradis populated with some sample content:
Then sign in, head over to Tools > Echo and configure it:
For the address, we’re pointing it to the Docker-internal hostname of the Ollama container; the default port is fine.
And for the model, make sure it matches the one you downloaded a few steps above.
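With the compose.yaml above, the settings would look something like this (ollama is the compose service name, which Docker resolves as a hostname, and 11434 is Ollama’s default port; the model name is just an example, use whichever one you pulled):

```
Address: http://ollama:11434
Model:   deepseek-r1:latest
```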
Running Echo
To give it a try, navigate to Issues, pick one, and click on the Echo tab, then choose one of the default prompts and see the engine in action (wait time will depend on your machine specs and the model you’ve chosen):


