Improve Issue write-ups with a flexible prompting engine that uses Liquid Dynamic Content to provide full context about your project and findings, so you can craft relevant prompts to get the most accurate answers. Your data always stays local to uphold data sovereignty.
When designing Echo, we knew we couldn’t sacrifice data sovereignty. For teams working with sensitive findings or those in air-gapped environments, sending data to external LLM providers isn’t an option. That’s why we built around locally-hosted Ollama instances that put you in control. You bring your own LLM, and we provide the context-aware prompting engine through Liquid templating. Your data stays on your infrastructure, and you get responses informed by your full project context, not generic outputs from copy-pasting into external chat interfaces. No external API calls, no 3rd party subscription fees, no data leaving your network.
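As an illustration of the kind of context-aware prompt this enables, here is a sketch of a Liquid template. The variable names (project.name, issue.title, issue.description) are hypothetical placeholders for this example, not necessarily the variables Echo exposes:

```liquid
{% comment %} Sketch only: variable names below are illustrative {% endcomment %}
You are reviewing a security finding from the {{ project.name }} project.

Title: {{ issue.title }}

{{ issue.description }}

Rewrite the description above in clear, client-ready language.
```

Because the template renders with your project's own data before it reaches the model, the prompt carries your full context without any of it leaving your network.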
This is still in beta, but for keen tinkerers and early adopters, you can install this early release for a taste of what's to come. Although it should work, we recommend using it with a non-production instance of Dradis until the official release.
Prerequisites
The add-on requires Dradis CE 4.0 or later, or Dradis Pro.
It uses a local Ollama installation to connect Dradis to your preferred LLMs.
Setup
Run Ollama and pull one of the models:
ollama serve
ollama run deepseek-r1:latest
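Before wiring Dradis up, you can confirm the Ollama API is reachable. By default Ollama listens on port 11434, and its /api/tags endpoint lists the models you have pulled; this quick check assumes a default local install:

```shell
# Check the default Ollama endpoint; /api/tags returns the pulled models as JSON
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is reachable"
else
  echo "Ollama is not reachable on port 11434"
fi
```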
Note: smaller models respond faster but can be less accurate, while larger models are slower but should produce higher-quality results.
The RAM requirement is directly tied to the model’s parameter count. A reliable rule of thumb is:
- 3B-7B Models: At least 8GB of RAM.
- 13B-14B Models: At least 16GB of RAM.
- 30B-34B Models: At least 32GB of RAM.
- 70B Models: At least 64GB of RAM.
Always aim for the recommended amount rather than the minimum for a smoother experience. More about resource requirements.
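To see how much RAM your server actually has before picking a model, a quick check (Linux-specific; on macOS you would use sysctl hw.memsize instead):

```shell
# Convert the MemTotal figure in /proc/meminfo (reported in kB) to GB
awk '/MemTotal/ { printf "%.1f GB total RAM\n", $2 / 1048576 }' /proc/meminfo
```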
If you are using the CE edition, you’ll need to run Redis.
redis-server
You'll also need to update your Action Cable adapter setting to:
adapter: redis
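In a standard Rails app such as Dradis CE, the Action Cable adapter is set in config/cable.yml. The updated section might look like this, assuming Redis is running on its default localhost:6379:

```yaml
# config/cable.yml - sketch assuming a default local Redis
development:
  adapter: redis
  url: redis://localhost:6379/1
```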
Install
Add this to your Gemfile.plugins:
gem 'dradis-echo', github: 'dradis/dradis-echo'
Then run:
bundle install
Lastly, restart your Dradis server, and you should see Echo available in your instance.
Configure
Configure Echo with the Ollama server address and selected model:
- CE: Settings → Configure Integrations
- Pro: Tools → Tool Manager → Configure (in the Echo section)
Usage
Navigate to an Issue and click the Echo tab. From there you’ll be able to Summarize or Reword your Issue content, or generate a cheeky Haiku.
Check out the dradis/dradis-echo repository on GitHub for more.