Workshop: Supercharging Ghidra Reverse Engineering with Local LLMs at Countermeasure 2025

3 min read
clearseclabs
Cyber Security Research & Training

"Reverse engineering workflows are evolving, and local LLMs are reshaping how we analyze binaries, automate tooling, and preserve privacy." – CSL

I’m excited to announce that I’ll be leading a hands-on workshop at Countermeasure 2025:

Supercharging Ghidra: Build Your Own Private Local LLM RE Stack with GhidraMCP, Ollama, and OpenWebUI
📅 Presented by John McIntosh

🔗 Workshop details on the conference site


Workshop Abstract

In this 90-minute session, participants will learn how to build a modular, private RE stack using GhidraMCP, pyghidra-mcp, Ollama, and OpenWebUI. We’ll walk through setting up local LLMs, integrating them with Ghidra, and customizing workflows to suit your threat model and tooling preferences.

Whether you're reverse engineering malware, firmware, or proprietary binaries, this workshop will equip you with a reproducible, offline-first workflow that enhances analysis while keeping sensitive data local.

What You’ll Learn

Part 1: Foundations

  • Why local LLMs matter for RE: privacy, reproducibility, and control
  • Overview of GhidraMCP, Ollama, and OpenWebUI
  • Hardware and OS considerations

Part 2: Stack Setup

  • Installing Ollama and running models locally
  • Configuring OpenWebUI for prompt management
  • Integrating GhidraMCP with Ghidra and local LLMs
  • Testing your MCP server
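As a rough preview of the setup steps above, here is a minimal sketch. The model tag, ports, and Docker flags shown are common defaults, not the workshop's official configuration; adjust them to your hardware and threat model:

```shell
# 1. Install Ollama (Linux install script) and pull a small local model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:8b

# 2. Run OpenWebUI in Docker, letting the container reach the host's Ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main

# 3. Smoke-test the local model endpoint (Ollama listens on 11434 by default)
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen3:8b", "prompt": "Say hello", "stream": false}'
```

If everything is up, OpenWebUI is reachable at http://localhost:3000 and can be pointed at the local Ollama instance from its connection settings.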

Part 3: Workflow Deep Dive

  • Real-world RE tasks enhanced by LLMs (decompilation, annotation, automation)
  • Prompt engineering for binary analysis
  • Extending the stack with custom models and plugins

Hands-on exercises include:

  • GhidraMCP GUI: Rename functions, summarize behavior, and query binaries directly in Ghidra.
  • pyghidra-mcp CLI: Analyze entire projects, run cross-binary queries, and detect reused code or suspicious patterns.
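For the CLI exercises, a session might look like the sketch below. The binary paths are placeholders and the Ghidra location is an assumption for illustration; see the pyghidra-mcp README for the current invocation on your platform:

```shell
# pyghidra needs to know where Ghidra lives (example path)
export GHIDRA_INSTALL_DIR=/opt/ghidra

# Install the CLI
pip install pyghidra-mcp

# Index one or more binaries into a project and expose MCP tools
# to your LLM client (sample paths shown)
pyghidra-mcp ./samples/dropper.bin ./samples/loader.so
```

Once the server is running, an MCP-aware client (such as OpenWebUI with an MCP bridge) can issue cross-binary queries against the indexed project.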

Part 4: Wrap-Up & Q&A

  • Troubleshooting tips
  • Sharing modular configs and prompt libraries
  • Open discussion on future directions

System Requirements

  • A machine capable of running at least an 8B model (Qwen3, Llama, etc.)
  • Modern GPU + 16GB+ RAM recommended
  • Quantized models recommended for laptops or mid-tier GPUs (e.g., RTX 3060, Apple M-series)

If your hardware doesn’t meet these specs, you can still follow along using free-tier remote models. Setup instructions will be provided for both local and remote-friendly options.

We’ll primarily use Docker to run OpenWebUI and Ollama, but non-Docker setups will also be supported.


Join Me at Countermeasure 2025

If you’re ready to modernize your reverse engineering workflows with local, private LLMs, this workshop is for you. You’ll leave with a working RE stack and the confidence to extend it with your own prompts, models, and plugins.

👉 Register for the workshop here


Keep Building Beyond the Workshop

If you want to go deeper into agentic reverse engineering workflows, check out my training:
🔗 Building Agentic RE

#ReverseEngineering #LLMs #Ghidra #Agentic #Automation #Ringzer0 #Countermeasure