Recon Agents is a 14-agent distributed cybersecurity platform for reconnaissance, evidence-driven validation, attack-path correlation, and LLM-powered intelligence synthesis. Built to maximize signal and minimize noise.
Peer Validation
Security Analyst Logs
Expert-style feedback represented as structured review snippets.
This is impressive work. The validation and correlation layer is what most recon tools lack - reducing false positives is a huge win. Curious how you handle edge cases in JS endpoint extraction and dynamic parameters.
Solid architecture. The phased execution with parallel recon and sequential validation is very well thought out. The 86% accuracy metric after pruning is particularly interesting - benchmarking details would be great to review.
The dual-layer design (LangGraph + distributed agents) is a strong approach. Separation of orchestration and execution makes it scalable. Evidence-driven findings are a strong quality marker.
If the validation engine really cuts down noise, this could be a game changer. Most tools flood with low-quality findings - signal over noise is everything.
Using an LLM for classification and prioritization instead of raw scanning is the right direction. AI is strongest for decision support and intelligence summarization.
This feels like a product, not just a script. CLI UX, structured output, distributed mode, and validation logic make it practical for real-world offensive workflows.
Strong project depth across security engineering, distributed systems, and applied AI. This profile maps directly to advanced offensive security and AppSec R&D roles.
Flow
Flow - How It Works
Input to intelligence through a deterministic multi-stage pipeline.
Flow Output
Step 01: Input
User provides target domain and selects scan mode (Recon / Attack).
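The input step above could look like the following minimal CLI sketch; the flag names and `recon-agents` program name are illustrative assumptions, not the project's actual interface:

```python
import argparse

# Hypothetical CLI entry point for the input step; flag and program
# names are illustrative, not the project's actual interface.
parser = argparse.ArgumentParser(prog="recon-agents")
parser.add_argument("target", help="target domain, e.g. example.com")
parser.add_argument("--mode", choices=["recon", "attack"], default="recon",
                    help="scan mode")

# Simulated invocation: recon-agents example.com --mode attack
args = parser.parse_args(["example.com", "--mode", "attack"])
```

Keeping the mode as a constrained choice (`recon` / `attack`) lets the orchestrator branch deterministically before any agent runs.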
Execution Model
Architecture - 14-Agent Topology
14-agent topology with parallel discovery and sequential confidence hardening.
Agent Output
Agent 01: Subdomain Enumeration
Discovers subdomains and expands the external attack surface map.
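The execution model described above - discovery agents fanning out in parallel, then validation hardening confidence sequentially - can be sketched with asyncio. All agent and field names here are illustrative assumptions, not the platform's real API:

```python
import asyncio

# Hypothetical discovery agents -- names are illustrative only.
async def enumerate_subdomains(target: str) -> list[str]:
    await asyncio.sleep(0)  # stand-in for network discovery
    return [f"api.{target}", f"dev.{target}"]

async def probe_ports(target: str) -> list[int]:
    await asyncio.sleep(0)  # stand-in for port scanning
    return [80, 443]

async def run_pipeline(target: str) -> dict:
    # Phase 1: discovery agents run in parallel.
    subdomains, ports = await asyncio.gather(
        enumerate_subdomains(target), probe_ports(target)
    )
    findings = [{"host": h, "ports": ports} for h in subdomains]
    # Phase 2: validation runs sequentially to harden confidence.
    validated = []
    for f in findings:
        f["confidence"] = 0.9  # stand-in for an evidence check
        validated.append(f)
    return {"target": target, "findings": validated}

result = asyncio.run(run_pipeline("example.com"))
```

Running discovery concurrently but validation serially mirrors the phased design: breadth first, then ordered confidence hardening.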
Core Value
Feature Highlights
Product-first capabilities designed for offensive security teams.
Distributed Execution
Scale scanning with remote workers while preserving centralized orchestration.
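One way to picture centralized orchestration with remote workers is a task queue the orchestrator alone fills; this sketch simulates the workers in-process, and every name in it is an illustrative assumption:

```python
from queue import Queue

# Hypothetical dispatch model: a central orchestrator pushes scan tasks
# onto a queue that remote workers consume; here the workers are
# simulated in-process. All names are illustrative.
task_queue: Queue = Queue()
results: list[dict] = []

def worker(worker_id: str) -> None:
    # A remote worker would poll the orchestrator's queue over the wire.
    while not task_queue.empty():
        task = task_queue.get()
        results.append({"worker": worker_id, "task": task, "status": "done"})

# Only the orchestrator enqueues work, so control stays centralized.
for sub in ["api.example.com", "dev.example.com", "mail.example.com"]:
    task_queue.put({"scan": sub})

worker("worker-1")
worker("worker-2")
```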
Evidence-Based Validation
Confirm findings through validation gates before reporting to reduce false positives.
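A validation gate of the kind described above can be reduced to an all-checks-must-pass predicate; the specific checks and field names below are hypothetical examples, not the engine's actual rules:

```python
# Hypothetical validation gate: a finding is reported only when every
# evidence check passes. Field names and thresholds are illustrative.
def validation_gate(finding: dict) -> bool:
    checks = [
        bool(finding.get("evidence")),           # raw proof was captured
        finding.get("confidence", 0.0) >= 0.7,   # confidence threshold met
        not finding.get("wildcard_dns", False),  # common false-positive source
    ]
    return all(checks)

findings = [
    {"id": 1, "evidence": "HTTP 200 + banner", "confidence": 0.92},
    {"id": 2, "evidence": None, "confidence": 0.95},
    {"id": 3, "evidence": "CNAME takeover proof", "confidence": 0.4},
]
reported = [f for f in findings if validation_gate(f)]  # only id 1 survives
```

Gating on evidence before reporting is what keeps low-quality findings out of the final output.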
Correlation Engine
Connect weak signals into realistic attack paths with confidence-weighted context.
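Chaining weak signals with confidence weighting might look like the following sketch, where per-signal confidences compose multiplicatively along a path; the combination rule and signal names are assumptions for illustration:

```python
# Hypothetical correlation step: chain weak signals into an attack path
# and combine their confidences multiplicatively. Illustrative only.
def correlate(signals: list[dict]) -> dict:
    path_confidence = 1.0
    for s in signals:
        path_confidence *= s["confidence"]
    return {
        "path": " -> ".join(s["name"] for s in signals),
        "confidence": round(path_confidence, 3),
    }

signals = [
    {"name": "exposed admin panel", "confidence": 0.8},
    {"name": "default credentials hint", "confidence": 0.6},
    {"name": "internal API reachable", "confidence": 0.9},
]
path = correlate(signals)
# Individually weak signals compose into one prioritized attack path.
```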
LLM Intelligence
Leverage OpenAI or Ollama models for prioritization and executive-ready synthesis.
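The prioritization step could be fed a prompt built from validated findings; this sketch elides the actual model call so it stays runnable offline, and the function and field names are hypothetical:

```python
import json

# Hypothetical prioritization prompt builder; the model call itself is
# elided so the sketch runs offline. Names are illustrative.
def build_prioritization_prompt(findings: list[dict]) -> str:
    return (
        "Rank these validated findings by business risk and produce an "
        "executive summary:\n" + json.dumps(findings, indent=2)
    )

findings = [
    {"title": "Subdomain takeover", "confidence": 0.95},
    {"title": "Open redirect", "confidence": 0.7},
]
prompt = build_prioritization_prompt(findings)
# The prompt would then be sent to an OpenAI model or a local Ollama
# model via their respective chat-completion endpoints.
```

Sending only validated, confidence-scored findings to the model keeps the LLM in a decision-support role rather than a raw-scanning one.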
Runtime Snapshot
Live Terminal Preview
Simulated operation feed showing recon to validated intelligence flow.
Platform
Tech Stack
Lightweight, scalable, and integration-ready security stack.
Profile
About Me
Yaswanth
AI Security Researcher | Agentic & MCP Security | Penetration Tester | Ethical Hacking & Red Teaming
Focused on building production-grade offensive security platforms that prioritize evidence, confidence scoring, and practical remediation intelligence.
Call To Action
Build, Break, Secure
Explore the project, run the demo flow, or connect for offensive security and AI security roles.
Email: yaswanthvisa@gmail.com