Technology · 7 min read

How to Run Accurate A/B Tests Without Cookies Using First-Party Events

By Sam

Why cookie-free A/B testing is now the default in many stacks

If you’re trying to measure A/B impact without third-party cookies (and often without any cookies at all), the real challenge isn’t “tracking less.” It’s designing experiments that still answer the business questions you care about: lift, returning behavior, and conversion impact—without relying on persistent IDs.

Cookie-based approaches typically depend on identifying a user across visits and assigning them to a stable variant. In cookie-free setups, you can still run accurate tests, but you’ll lean more on:

  • Randomization that doesn’t require long-lived identifiers
  • First-party events that describe what happened (views, signups, purchases)
  • Experiment designs that are robust to partial observability (some users will be “new every time”)

Privacy-friendly analytics tools such as plausible.io fit naturally into this model because they focus on aggregated measurement and first-party event capture rather than persistent cross-visit profiling.

Start with the measurement question, not the variant mechanics

Before you pick a randomization method, define the decision you want to make. Cookie-free testing works best when your primary outcome is a conversion event you can define clearly as a first-party action, such as:

  • Account created
  • Checkout completed
  • Demo request submitted
  • Activation milestone reached (for example, “created first project”)

Then decide what “lift” means:

  • Absolute lift: conversion rate difference (B − A)
  • Relative lift: (B − A) / A
  • Incremental conversions: extra conversions attributable to B given traffic
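All three quantities fall out directly from aggregate exposure and conversion counts, so no user-level data is needed. A minimal sketch (the function and field names are illustrative, not from any library):

```javascript
// Compute lift metrics from aggregate exposure/conversion counts.
function liftMetrics({ exposuresA, conversionsA, exposuresB, conversionsB }) {
  const rateA = conversionsA / exposuresA; // control conversion rate
  const rateB = conversionsB / exposuresB; // variant conversion rate
  return {
    absoluteLift: rateB - rateA,           // B − A
    relativeLift: (rateB - rateA) / rateA, // (B − A) / A
    // Extra conversions attributable to B, given the traffic B received:
    incrementalConversions: (rateB - rateA) * exposuresB,
  };
}

// Example: A converts 200/10,000 (2%), B converts 250/10,000 (2.5%)
const m = liftMetrics({ exposuresA: 10000, conversionsA: 200,
                        exposuresB: 10000, conversionsB: 250 });
// → absoluteLift ≈ 0.005, relativeLift ≈ 0.25, incrementalConversions ≈ 50
```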

When you remove cookies, the core KPI math stays the same. What changes is how confidently you can connect multiple sessions to one person—and therefore which designs you should prefer.

Randomization strategies that don’t depend on cookies

1) Session-level randomization (simple and often enough)

The cleanest cookie-free option is assigning a variant per session (or per page load) using a random number at runtime. You then send a first-party event that includes the assigned variant, for example:

  • pageview with property variant=A or variant=B
  • signup with property variant=A or variant=B

This approach is ideal when the change you’re testing affects immediate behavior in the same session (CTA copy, pricing layout, landing page structure). It can be less ideal when the expected effect requires multiple visits, because the same person may see different variants on different days.
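A minimal client-side sketch of this pattern, assuming the standard Plausible snippet is loaded (it exposes `window.plausible(name, { props })`); the tracker is passed in as a parameter here so it can be swapped out, and the event names are illustrative:

```javascript
// Session-level assignment: pick a variant once per page load and attach it
// to every first-party event. Nothing is stored across visits.
function assignVariant(random = Math.random()) {
  return random < 0.5 ? "A" : "B"; // 50/50 split, no identifier needed
}

function trackWithVariant(plausible, variant) {
  // Exposure: fire when the tested element is actually rendered.
  plausible("Exposure", { props: { variant } });
  return {
    // Call this when the conversion happens in the same session.
    conversion: () => plausible("Signup", { props: { variant } }),
  };
}

// Usage in the page: trackWithVariant(window.plausible, assignVariant());
```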

2) Server-side assignment using first-party context

If you control the server-rendered response (or edge layer), you can assign variants deterministically without a cookie by using first-party context available at request time. Common options include:

  • Logged-in user ID: stable and accurate, but only applies post-login
  • Order/account IDs: stable for conversion events
  • Short-lived, first-party session identifiers: stored in memory or passed through URLs when appropriate

The key is to avoid turning the experiment framework into a tracking system. If you use an identifier, keep it first-party, purpose-limited, and ephemeral unless the product experience truly requires persistence.
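One way to make assignment deterministic without storing anything client-side is to hash a first-party identifier together with the experiment name: the same account always lands in the same variant, and different experiments re-randomize independently. A sketch using FNV-1a (chosen for brevity; any stable hash works):

```javascript
// FNV-1a: a small, stable 32-bit string hash (no crypto dependency).
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by FNV prime, keep unsigned
  }
  return h;
}

// Deterministic assignment from a first-party identifier (e.g. account ID).
// Including the experiment name means each experiment gets its own split.
function assignByAccount(accountId, experiment, variants = ["A", "B"]) {
  const bucket = fnv1a(`${experiment}:${accountId}`) % variants.length;
  return variants[bucket];
}
```

Because the assignment is a pure function of the inputs, any server or edge node reproduces it without coordination, and nothing about the user needs to persist in the browser.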

3) Geo or time-based split (use cautiously)

Splitting by geography (region A sees control, region B sees variant) or by time (week 1 control, week 2 variant) can be cookie-free, but it’s more vulnerable to confounders: seasonality, campaign changes, and regional audience differences. It can still work when:

  • Traffic is very large
  • You can control external changes tightly
  • You need an operationally simple rollout model

Measuring lift with first-party events

In cookie-based testing, lift is often computed at the user level. In cookie-free setups, lift is commonly computed at the session or exposure level. That’s not automatically “worse”—it just answers a slightly different question.

A practical event model looks like this:

  • Exposure event: when a user actually sees the treatment (not merely lands on the site)
  • Conversion event: the business action you care about
  • Variant label: attached to both events

Two common pitfalls are (1) counting visitors who never saw the tested element, and (2) labeling variant only on pageviews but not on downstream conversion events. If your conversion happens on another page, ensure the variant context is carried forward server-side or re-derived deterministically.
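One way to carry the variant forward is to record the conversion server-side with the variant attached. The sketch below builds a payload shaped for Plausible's Events API (`POST /api/event`); the field names follow the public docs, but verify them against the current API before relying on this:

```javascript
// Build a server-side conversion event that carries the variant label,
// so conversions on a different page still join back to the exposure.
function buildConversionEvent({ domain, url, eventName, variant }) {
  return {
    name: eventName,    // e.g. "Checkout Completed"
    url,                // page where the conversion happened
    domain,             // your site, as configured in Plausible
    props: { variant }, // same label used on the exposure event
  };
}

// Usage (illustrative): send from your backend with fetch, e.g.
// await fetch("https://plausible.io/api/event", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildConversionEvent({ ... })),
// });
```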

Returning users without cookies: what you can and can’t claim

“Returning users” is the hardest metric to preserve without persistent identifiers. Without cookies, you generally can’t guarantee that “this is the same person as last week,” especially across browsers or devices.

That doesn’t mean you can’t test impacts that unfold over time. It means you should be explicit about the method you use and the claim you’re making:

  • Logged-in cohorts: If your product has authentication, you can measure return and retention accurately post-login using the account identifier. This is often the most defensible path for lifecycle experiments.

  • Aggregate returning approximations: Some analytics products can estimate repeat-traffic patterns in aggregate without storing personal identifiers. Treat these as directional signals rather than strict user-level truth.
  • Proxy metrics: Use time-to-convert, multi-step funnel completion, or repeat conversions (for example, “second purchase”) tied to an order/account record.

When you report results, separate “session conversion rate lift” from “retention lift among logged-in users.” Mixing these blurs what the data can actually support.

Conversion impact when users can switch variants

Session-level randomization introduces a new issue: a person might be exposed to both A and B across different visits. This can dilute measured lift, especially when the treatment’s effect depends on repeated exposure.

Ways to handle this without adding cookies:

  • Focus on immediate conversions: Choose tests where the effect should appear within the same session.
  • Use authenticated assignment: Once a user logs in, assign them consistently (A or B) based on account ID and keep it stable for the experiment duration.
  • Analyze by first exposure within a window: If you can store first-exposure server-side for logged-in users, compare outcomes from that point forward.

For anonymous users, be careful with strong claims about long-run behavior changes. It’s better to say “B increased conversion per exposure” than “B increased conversion per user over 30 days” unless you have a stable identifier.
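For logged-in users, first-exposure analysis boils down to a small aggregation over an event log: attribute each account to the variant it saw first, then count conversions from that point onward. A sketch (the event shape `{ accountId, ts, type, variant }` is illustrative):

```javascript
// Attribute each account to its first-seen variant, then count
// at most one conversion per account occurring after that exposure.
function firstExposureOutcomes(events) {
  const sorted = [...events].sort((a, b) => a.ts - b.ts);
  const firstExposure = new Map(); // accountId -> { variant, ts }
  const outcomes = { A: { exposed: 0, converted: 0 },
                     B: { exposed: 0, converted: 0 } };
  for (const e of sorted) {
    if (e.type === "exposure" && !firstExposure.has(e.accountId)) {
      firstExposure.set(e.accountId, { variant: e.variant, ts: e.ts });
      outcomes[e.variant].exposed++;
    }
  }
  const converted = new Set(); // count each account at most once
  for (const e of sorted) {
    const first = firstExposure.get(e.accountId);
    if (e.type === "conversion" && first && e.ts >= first.ts &&
        !converted.has(e.accountId)) {
      converted.add(e.accountId);
      outcomes[first.variant].converted++;
    }
  }
  return outcomes;
}
```

Note how an account that later sees the other variant stays attributed to its first exposure, which is exactly the claim this design supports.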

Practical experiment hygiene with first-party events

Define a clean event taxonomy

Use consistent naming and properties so you can segment by variant, device type, landing page, and channel without re-instrumenting every test. This is similar in spirit to keeping operational data clean: apply the same checklist mindset you would bring to something like a field-level CRM sync, but to your analytics events.

Guard against instrumentation drift

Cookie-free analytics is less forgiving of sloppy event capture because you can’t “patch” missing context by stitching sessions later. Verify:

  • Variant label is present on exposure and conversion events
  • Events fire only once (no double-counting)
  • Bot filtering is on (otherwise lift can be noise-driven)
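A simple guard against double-counting is to wrap the tracker so each distinct event fires at most once per page lifecycle. A sketch (names are illustrative):

```javascript
// Wrap a tracker so each (event name, props) pair is sent at most once.
// Protects against double-firing from re-renders or repeated handlers.
function onceTracker(send) {
  const sent = new Set();
  return (name, props) => {
    const key = `${name}:${JSON.stringify(props ?? {})}`;
    if (sent.has(key)) return false; // already fired this page, skip
    sent.add(key);
    send(name, props);
    return true;
  };
}

// Usage (illustrative): const track = onceTracker(window.plausible);
```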

Account for channel and campaign mix

If traffic sources shift during the test, your lift estimate can shift too. Use UTM campaign segmentation and channel grouping to ensure the variant split is balanced within major channels, not just overall.
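A quick balance check can run on aggregate exposure counts per channel; any channel whose split drifts from 50/50 deserves a closer look before you trust the lift number. A sketch (the 5% tolerance is an arbitrary illustration):

```javascript
// Flag channels where the A/B exposure split drifted beyond a tolerance.
// counts: { channelName: { A: exposures, B: exposures } }
function channelImbalance(counts, tolerance = 0.05) {
  const flagged = [];
  for (const [channel, c] of Object.entries(counts)) {
    const total = c.A + c.B;
    const shareB = total > 0 ? c.B / total : 0;
    if (Math.abs(shareB - 0.5) > tolerance) flagged.push(channel);
  }
  return flagged; // channels to investigate before reading lift
}
```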

How Plausible fits into cookie-free A/B test measurement

Plausible Analytics is designed around privacy-friendly, cookie-free measurement and first-party event tracking. For A/B tests, the most practical pattern is to treat the variant as a first-party event property and analyze conversions and funnels at an aggregate level. This aligns well with organizations that want credible experiment readouts without building a persistent identity graph.

If you need user-level retention measurement, pair cookie-free web analytics with your first-party product data (accounts, subscriptions, orders). That split—aggregate web measurement plus first-party system-of-record outcomes—often produces clearer, more auditable experiment conclusions than trying to force user stitching where it doesn’t belong.

Frequently Asked Questions

How can Plausible be used for A/B testing without cookies?

With Plausible, you can send first-party events that include a variant label (A/B) and then compare conversion events and funnel completion by variant at an aggregate level—without using cookies.

Can Plausible measure returning users accurately without identifiers?

Plausible is privacy-friendly and avoids persistent identifiers, so “returning users” should be treated as an aggregate signal. For accurate retention, rely on logged-in account data alongside Plausible’s aggregate web metrics.

What’s the best randomization approach if I want to stay cookie-free with Plausible?

Session-level randomization is usually the simplest: assign a variant on page load and include it on exposure and conversion events you send to Plausible. If you have authentication, deterministic assignment by account ID is stronger for longer-running effects.

How do I avoid biased lift results when using Plausible events?

Make sure only exposed sessions are counted (track an explicit exposure event), carry the variant label through to the conversion event, and segment by channel/UTMs so a mid-test traffic mix shift doesn’t masquerade as lift.

What conversion metrics work best in cookie-free experiments measured with Plausible?

Metrics tied to clear first-party actions—signup submitted, checkout completed, activation milestone—work best. They can be recorded as goals or custom events in Plausible and analyzed consistently across variants.
