Hey there. It’s been a very busy week as I wrap up the previous LHE we did in Bali and get ready to kick off a new LHE in Portugal next week. Time for H1 to crown another MVH already!

I’m also hearing rumors of a Bugcrowd Bug Bash that just wrapped up, but I haven’t heard who won yet. If you know, feel free to ping me so I can spotlight them.

Outside of work, it’s been an insane week in the AI world: Opus 4.7 dropping, classifier refusals, system prompt leaks... it’s hard to keep up. My automation system’s output has dried up quite a bit as I work around all the changes. One tip: if you’re getting hit with classifier refusals in Opus, fill out the cyber use form. You should hear back within hours if you’re approved. If you don’t hear back, you were likely silently rejected, so it may be worth submitting again:
https://claude.com/form/cyber-use-case

I’m still testing Opus 4.7 for bug bounty, and the results aren’t conclusive yet. XBOW’s article featured further down summarizes things well, and it matches what I’m seeing. In general, the cost per turn is higher, but the number of turns required per run is much smaller, so it ends up being a wash financially, with a net positive in efficiency and time spent.

That said, I haven’t noticed a meaningful improvement in findings output. In fact, my system has nearly ground to a halt compared to the volume of findings I was uncovering four weeks ago.

I’m thinking of rebuilding my system so it’s not so reliant on Anthropic. Does anybody have experience building a custom harness with OpenCode? If so, get in touch. I’d love to swap ideas.

Anyway, let’s dive in.

I’m available for 1:1 calls if you want to chat about bug bounty, career growth, community building, or anything else you think I can help with. You can book time with me here.

Nahuel shares that he took home both MVH and the Erradicator award at HackerOne’s H1-361 Live Hacking Event. The post is a win announcement and thank-you, with no technical details.

valent1nee announces winning Most Valuable Hacker (MVH) at Google LHE Seoul 2026 and posts a photo of the award. No vulnerability details are included.

Zero Days, Zero Truth [Spotlight]

by Brute Logic (Rodolfo Assis)

Brute Logic audits a set of claims around autonomous AI vulnerability discovery, cross-checking specific examples against primary sources. While some cited bugs are real, the piece argues the narrative overstates autonomy and impact, and underscores the need for careful verification and disciplined disclosure/triage when AI is in the loop.

Have something you want to Spotlight? Tell me.

Vercel says an attacker pivoted from a compromised AI-platform customer into employee access, then enumerated environment variables after a Google Workspace compromise. Vercel reports mitigations are in place, dashboard visibility has been improved, and external incident responders are engaged.

YesWeHack announces an integration with Caido that lets researchers browse programs and review scope directly inside the Caido workspace. The goal is tighter workflow between target selection and testing without switching tools.

YesWeHack recaps Q1 Bucket List completions, calling out researchers who cleared specific challenges and noting swag rewards. The post also highlights that several items remain open for hunters to tackle.

Kara Sprague outlines HackerOne’s current platform direction and product focus in an RSA Conference talk. The discussion centers on program operations and workflow improvements rather than vulnerability research.

Claude Announces Opus 4.7 [𝕏 Tweet]

by Claude (@claudeai)

Anthropic announces Claude Opus 4.7, positioning it as stronger on long tasks, instruction adherence, and self-checking. It’s a product update with no security-specific technical detail.

Leaked Claude Opus 4.7 System Prompt Shared [𝕏 Tweet]

by Pliny the Liberator (@elder_plinius)

Pliny shares a large alleged Claude Opus 4.7 system prompt dump, including behavioral constraints and tool-use guidance. It’s primarily relevant for LLM security research, prompt-injection testing, and jailbreak analysis.

Did I miss an important update? Tell me.

HuntrBoard is a Burp Suite extension (Montoya API) that adds a dedicated tab for target tracking and persistent notes to support day-to-day hunting. It’s a workflow/productivity add-on rather than a security testing engine.

claude-bug-bounty is a terminal-first framework that wires Claude Code into a structured bug bounty workflow: recon pipelines, playbooks across common vuln classes, and report generation templates for major platforms. It also includes rules/wordlists and optional modules (e.g., MCP integration) aimed at repeatable, semi-autonomous runs.

CloudRip is a Python CLI that brute-forces subdomains, resolves A/AAAA records, and filters out Cloudflare ranges to identify likely origin IPs. It supports threading, IPv6, custom wordlists, and exporting results in multiple formats.
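The origin-filtering idea can be sketched in a few lines of Python. This is a minimal illustration of the technique, not CloudRip’s actual code; the Cloudflare ranges below are a hardcoded subset of the published list, and real usage should pull the full, current list:

```python
import ipaddress
import socket

# Illustrative subset of Cloudflare's published IPv4 ranges; the
# authoritative list lives at https://www.cloudflare.com/ips/
CLOUDFLARE_V4 = [
    ipaddress.ip_network(n)
    for n in ("104.16.0.0/13", "172.64.0.0/13", "131.0.72.0/22")
]

def is_cloudflare(ip: str) -> bool:
    """True if the address falls inside a known Cloudflare range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_V4)

def likely_origins(hostnames):
    """Resolve each hostname and keep only non-Cloudflare A records,
    which are candidate origin IPs."""
    results = {}
    for host in hostnames:
        try:
            ips = {ai[4][0] for ai in socket.getaddrinfo(host, None, socket.AF_INET)}
        except socket.gaierror:
            continue  # unresolvable candidate from the wordlist
        origins = sorted(ip for ip in ips if not is_cloudflare(ip))
        if origins:
            results[host] = origins
    return results
```

Any subdomain resolving outside those ranges is worth probing directly, since it may bypass Cloudflare’s proxy entirely.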

This overview covers BugTraceAI Apex, a 26B MoE model pitched as an offensive-security-tuned LLM with quantized weights intended for local deployment on consumer hardware. The post focuses on packaging/runtime details and self-reported capability claims, and links out to upstream artifacts for validation.

Have a favorite tool? Tell me.

XBOW shares early observations from running Opus 4.7 in offensive workflows, focusing on how model behavior changes affect agent prompting and operational patterns. It frames LLMs as strong for breadth and iteration, but still brittle on depth and verification without tight execution scaffolding and benchmarking.

Hazem shares a write-up and tooling for a multi-step chain to RCE on Tomcat behind Cloudflare, pivoting through an exposed JMX proxy and AccessLogValve/docBase manipulation. The thread links to supporting material including a tool and a Nuclei template.

This write-up details CVE-2026-33555, a cross-protocol request smuggling issue in HAProxy’s HTTP/3-to-HTTP/1 translation. A zero-length QUIC STREAM frame with FIN can desync Content-Length handling, causing the backend to consume bytes from a subsequent request as the missing body; patched versions are listed with mitigation guidance.
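The desync mechanic is easy to model. The toy parser below is not HAProxy’s code, just an illustration of why a backend that trusts a stale Content-Length ends up consuming the front of the next request on the same connection:

```python
def read_request(buf: bytes):
    """Toy HTTP/1 parser: split headers from the rest of the stream,
    then take Content-Length bytes as the body."""
    head, _, rest = buf.partition(b"\r\n\r\n")
    clen = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            clen = int(line.split(b":")[1])
    body, remainder = rest[:clen], rest[clen:]
    return head, body, remainder

stream = (
    b"POST /a HTTP/1.1\r\nContent-Length: 5\r\n\r\n"  # body never sent
    b"GET /admin HTTP/1.1\r\n\r\n"                    # next request on the connection
)
head, body, remainder = read_request(stream)
# body == b"GET /": the start of the second request was swallowed as
# the "missing" body, so the backend now parses b"admin HTTP/1.1..."
# as a fresh request line -- the classic smuggling desync
```

In the real bug the empty-body condition comes from the QUIC side (a zero-length STREAM frame with FIN), but the downstream consequence is the same byte-accounting mismatch.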

From XSS to Police-Only Portal Exposure [📓 Blog]

by ArgosDNS.io

A bug bounty investigation that started as an XSS led to a forgotten, publicly reachable third-party police interface exposing sensitive employee/company data. The write-up walks through clue-driven discovery (comments, directory fuzzing, parameter patterning) and shows how small anomalies can reveal high-impact legacy surfaces.

This write-up shows how a client-side path traversal in a frontend URL builder enabled arbitrary API method/path construction, then escalated into account takeover by changing a victim’s email and resetting their password. It also describes a follow-on bypass of SMS-based 2FA via JavaScript prototype-chain abuse, alongside mitigation guidance around server-side validation and hardening auth flows.
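The traversal mechanic translates to a few lines. The paths and helpers here are hypothetical, a Python model of the pattern rather than the target’s actual frontend code:

```python
import posixpath
from urllib.parse import quote

def build_path(user_id: str) -> str:
    # Vulnerable pattern: raw user input interpolated into the API path
    return f"/api/v1/users/{user_id}/profile"

def build_path_safe(user_id: str) -> str:
    # Percent-encoding slashes pins the payload inside a single segment
    return f"/api/v1/users/{quote(user_id, safe='')}/profile"

def as_seen_by_router(path: str) -> str:
    # How a typical HTTP stack collapses ../ dot segments
    return posixpath.normpath(path)

payload = "../../account/email"
print(as_seen_by_router(build_path(payload)))
# -> /api/account/email/profile : the ../ segments walked out of the
#    intended route and hit an attacker-chosen endpoint
```

The safe variant leaves the payload as an inert segment (`..%2F..%2F...`), which is why the write-up’s mitigation guidance centers on encoding and server-side validation rather than client-side checks.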

S1r1u5_ reports using Claude Opus over about a week to iteratively develop a V8 exploit against Discord and reach a shell, citing roughly 2.3B tokens in usage (~$2,283). The post is a data point on cost and iteration loops for AI-assisted exploit development and links to a longer write-up.

Kartikey describes finding an access-control bug on a file download endpoint and scaling it into 21 more by mining client-side JavaScript for shared identifiers and endpoint patterns. The workflow pairs JS crawling (Katana) with simple keyword-based discovery to quickly enumerate similarly broken routes and batch reporting.
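The keyword-based discovery step might look something like this sketch. The regex, keyword list, and sample bundle are illustrative assumptions, not Kartikey’s actual tooling:

```python
import re

# Path literals quoted inside JS bundles, e.g. fetch("/api/...")
ENDPOINT_RE = re.compile(r'["\'](/api/[A-Za-z0-9_\-/{}.]+)["\']')

# Keywords hinting at the same class as the known-broken download route
KEYWORDS = ("download", "export", "file", "report")

def candidate_endpoints(js_source: str) -> list[str]:
    """Extract API paths from crawled JS and keep keyword matches."""
    paths = set(ENDPOINT_RE.findall(js_source))
    return sorted(p for p in paths if any(k in p.lower() for k in KEYWORDS))

js = '''
  fetch("/api/files/download/{fileId}");
  fetch("/api/reports/export");
  fetch("/api/user/settings");
'''
print(candidate_endpoints(js))
# ['/api/files/download/{fileId}', '/api/reports/export']
```

Each candidate then gets the same access-control probe that worked on the first endpoint, which is how one finding scales into twenty-one.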

Did I miss something? Tell me.

YesWeHack’s guide surveys OS command injection (CWE-78) with coverage of classic, blind, time-based, and out-of-band detection patterns. It also distinguishes command injection from argument injection (CWE-88) and includes code-level examples and mitigation guidance focused on safe APIs and escaping.
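The vulnerable/safe split the guide describes can be shown in a short Python example, using `echo` as a harmless stand-in for a real command:

```python
import subprocess

def lookup_vulnerable(host: str) -> str:
    # CWE-78: user input concatenated into a shell command line,
    # so "x; echo INJECTED" executes a second command
    return subprocess.run(
        f"echo pinging {host}", shell=True, capture_output=True, text=True
    ).stdout

def lookup_safe(host: str) -> str:
    # Safe API: argument vector, no shell, metacharacters stay literal.
    # Note this stops command injection (CWE-78) but not argument
    # injection (CWE-88): input starting with "-" could still be read
    # as a flag by the target binary, so validate or add "--" as well.
    return subprocess.run(
        ["echo", "pinging", host], capture_output=True, text=True
    ).stdout

payload = "x; echo INJECTED"
print(lookup_vulnerable(payload))  # second command runs
print(lookup_safe(payload))        # payload printed as inert text
```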

Intigriti highlights a guide on GraphQL enumeration (queries/mutations) and common bug bounty failure modes, including authz gaps and batching/abuse scenarios. It’s aimed at building a repeatable mapping workflow before attempting exploitation.

This thread summarizes a HackerNotes roundup touching on AI exfiltration primitives (binary oracles + HTML injection), using Claude Code for SDK review/path traversal hunting, and common OAuth scope expansion pitfalls. It also includes practical notes on communicating findings effectively at live events.

Did I miss something? Tell me.

Is AI Killing Bug Bounty? [🎥 Video]

by Ben Sadeghipour (@NahamSec)

NahamSec discusses how LLMs and agents are changing both hunter workflows and program triage, including the rise of low-signal AI-generated submissions. The video focuses on where AI helps (recon and iteration) and where it hurts (validation gaps and report noise), with practical guidance on keeping output high-signal.

I Broke a Chatbot Using This Trick [🎥 Video]

by Medusa (@medusa_0xf)

Medusa demonstrates exploiting SSTI in the OWASP Juice Shop chatbot, showing how templating context and unsafe rendering turn chat input into server-side execution. The walkthrough focuses on payload construction and why chatbot-style features can introduce classic injection classes.
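Juice Shop’s chatbot is Node-side, so as a hedged illustration here is a Python analogue of the same vulnerability class: user input rendered as the template rather than into it. The `Config` class and payload are invented for the demo:

```python
class Config:
    SECRET_KEY = "s3cr3t"  # stand-in for server-side state

def greet_vulnerable(user_input: str) -> str:
    # User text becomes part of the format template itself, so
    # attribute lookups like {cfg.SECRET_KEY} execute server-side
    return ("Hello, " + user_input).format(cfg=Config)

def greet_safe(user_input: str) -> str:
    # User text is data; the template is fixed
    return "Hello, {name}".format(name=user_input)

payload = "{cfg.SECRET_KEY}"
print(greet_vulnerable(payload))  # Hello, s3cr3t  (leak)
print(greet_safe(payload))        # Hello, {cfg.SECRET_KEY}  (inert)
```

The mechanic is the same whether the engine is Python `str.format`, a JS templating library, or a chatbot response renderer: any feature that echoes user text through a template evaluator inherits the injection class.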

This episode recaps a Korea trip and Live Hacking Event lessons, with emphasis on workflow ergonomics like tmux usage, WebSocket debugging, and where Claude Code fits into day-to-day productivity. It’s experience- and process-oriented rather than a deep technical exploit breakdown.

Did I miss something? Tell me.

Confused Deputy Risk in Multi-Agent Token Delegation [𝕏 Tweet]

by Critical Thinking Podcast

This tweet flags a common multi-agent failure mode: passing the same user token across delegated tools can create confused-deputy paths when downstream services retain overly broad scopes. It points to RFC 8693 token exchange as a mitigation and suggests validating effective scopes at each hop.
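A sketch of what that mitigation looks like in practice. The parameter names come from RFC 8693 §2.1; the audience, scope values, and helper function are hypothetical:

```python
# RFC 8693 token exchange: rather than forwarding the user's token
# verbatim to every delegated tool, each agent swaps it for a token
# narrowed to that hop's audience and scopes.
TOKEN_EXCHANGE_REQUEST = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<user access token>",
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://downstream.example.com",  # hypothetical service
    "scope": "files:read",                         # narrowed scope
}

def effective_scopes(requested: set[str], granted: set[str]) -> set[str]:
    """Validate at each hop: a delegated token may keep only scopes the
    upstream token actually held; delegation must never widen access."""
    return requested & granted
```

The confused-deputy path closes because a downstream service holding the exchanged token can no longer act with the user’s full authority, only with the intersection checked at each hop.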

Authority Framing as a System Prompt Exfil Technique [𝕏 Tweet]

by Pliny the Liberator (@elder_plinius)

A thread demonstrating “authority framing” (posing as an auditor or admin) to pressure a model into revealing its system instructions. It’s a practical example of prompt-injection tactics that show up in real LLM app assessments.

A quick reminder that exposed phpinfo() output sometimes leaks secrets in plain sight, including database usernames and passwords. The tip suggests searching common credential keys to escalate what might look like a low-impact information disclosure.
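A quick way to mine a saved phpinfo() page for those keys, assuming the usual two-cell table layout phpinfo emits; the key list is illustrative and worth extending:

```python
import re

# Variable names that commonly surface in phpinfo() env/config dumps
CRED_KEYS = ("DB_PASSWORD", "DB_USERNAME", "MYSQL_PWD", "AWS_SECRET", "API_KEY")

def find_leaked_creds(phpinfo_html: str) -> dict[str, str]:
    """Scan phpinfo HTML for credential-ish keys rendered as adjacent
    table cells (<td>KEY</td><td>VALUE</td>) and return any hits."""
    hits = {}
    for key in CRED_KEYS:
        m = re.search(rf"{key}\s*</t[dh]>\s*<td[^>]*>([^<]+)", phpinfo_html)
        if m:
            hits[key] = m.group(1).strip()
    return hits
```

Even one hit turns a “low-impact information disclosure” report into demonstrated credential exposure.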

Het Mehta argues that postmortems and incident reports provide higher-signal learning than generic tutorials because they repeatedly surface the same root causes and operational failures. The tweet points to well-known sources (cloud advisories, P0-style analyses, transparency reports) as a study backlog.

Did I miss something? Tell me.

Did you like this week's drop?

Please share feedback.


Because Disclosure Matters: This newsletter was produced with the assistance of AI. While I strive for accuracy and quality, not all content has been independently vetted or fact-checked. Please allow for a reasonable margin of error. The views expressed are my own and do not reflect those of my employer.
