Understanding Adverse Event Rates: Percentages and Relative Risk in Clinical Trials

February 7, 2026


When a new drug hits the market, everyone wants to know: is it safe? But safety isn’t just about whether someone had a side effect. It’s about how often it happens, and under what conditions. Two people might both get a headache after taking the same medicine - but if one took it for 3 days and the other for 3 years, the real risk is totally different. That’s why simple percentages can be misleading. The FDA and top drug companies now rely on more precise methods to measure risk, and understanding these tools helps you make sense of what’s really going on in clinical trials.

Why Percentages Alone Don’t Tell the Whole Story

You’ve probably seen headlines like: "15% of patients in Trial X had nausea." That sounds clear - until you realize half those patients were on the drug for 2 weeks, and the other half stayed on it for 2 years. The percentage doesn’t change, but the actual risk does. This is called the incidence rate (IR): total number of people who had the event divided by total number exposed. It’s easy to calculate, and for decades, it was the standard. But it ignores time. If you only look at IR, you’re essentially saying a 3-day exposure equals a 3-year exposure. That’s like saying a 10-minute drive and a 10-hour drive have the same chance of a flat tire.

A 2010 analysis by Dr. Cate Andrade found that using IR alone could underestimate true event rates by 18% to 37% in trials where treatment lengths varied. Imagine a drug for chronic pain. People on placebo stop after 6 months. People on the new drug stay on it for years. If 5% of the placebo group had liver issues over 6 months, but 5% of the drug group had them over 5 years, IR would say both groups have the same risk. But in reality, the drug group had ten times more exposure - meaning the event was far less frequent per unit of time. IR hides that.

The Shift to Patient-Years: EIR Explained

To fix this, statisticians started using event incidence rate adjusted by patient-years (EIR). Instead of counting people, you count time. One patient-year means one person was exposed for one full year. If 10 people took the drug for 6 months each, that’s 5 patient-years (10 × 0.5). If 3 of them had a rash, the EIR is 3 events per 5 patient-years - or 60 events per 100 patient-years (60/100PY).
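
The patient-year arithmetic above can be sketched in a few lines of Python (a minimal illustration using the numbers from the example; variable names are hypothetical):

```python
# Minimal sketch of the patient-year arithmetic described above.
# exposure_years: how long each of the 10 patients took the drug (6 months each).
exposure_years = [0.5] * 10           # 10 patients x 0.5 years

events = 3                            # rashes observed across the group

patient_years = sum(exposure_years)   # 5.0 patient-years
eir_per_100py = events / patient_years * 100

print(patient_years)    # 5.0
print(eir_per_100py)    # 60.0 events per 100 patient-years
```

The same two lines of arithmetic also show the contrast with IR: IR would report 3/10 = 30% regardless of whether those 10 people were exposed for six months or six years.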

This is now standard in tools like JMP Clinical. It answers a better question: "How often does this event happen per year of exposure?" For recurrent events - like diarrhea or dizziness - EIR gives a clearer picture than IR. A drug might cause diarrhea in 20% of people, but if it happens 4 times a year in each person, EIR shows you’re looking at 80 events per 100 patient-years. That’s a very different safety profile than if it happened once and went away.

But EIR has its own blind spot. It counts events, not people. If one person has 10 episodes of vomiting, they count as 10 events. That inflates the rate even if only one person is affected. It’s useful for tracking frequency, but not for knowing how many people are impacted.

The FDA’s New Standard: Exposure-Adjusted Incidence Rate (EAIR)

In 2023, the FDA requested EAIR in a supplemental biologics license application - a clear signal that the industry must move beyond IR and EIR. EAIR doesn’t just count events or people. It accounts for both - and how long each person was actually exposed.

EAIR calculates the number of events divided by total exposure time, but with a twist: it handles interruptions. If a patient stops the drug for 2 weeks due to an unrelated surgery, that time isn’t counted. If they restart, only the time they were actually taking the drug counts. It also adjusts for recurrence - a patient who has 5 episodes over 18 months is treated differently than someone who has one episode and stops.
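
The interruption handling described above amounts to summing only the intervals when the patient was actually on drug. A minimal sketch, with hypothetical dosing records:

```python
from datetime import date

# Hypothetical dosing records: (start, stop) intervals when the patient was
# actually taking the drug. The gap between intervals (e.g. a 2-week
# surgical pause) contributes no exposure time.
dosing_intervals = [
    (date(2025, 1, 1), date(2025, 3, 1)),    # on drug
    # 2-week interruption: not counted
    (date(2025, 3, 15), date(2025, 9, 15)),  # restarted
]
events = 2  # adverse events observed while on drug

exposure_days = sum((stop - start).days for start, stop in dosing_intervals)
exposure_years = exposure_days / 365.25
eair_per_100py = events / exposure_years * 100

print(exposure_days)   # 243 days on drug, not the full calendar span
```

Only the 243 on-drug days enter the denominator; a naive start-to-finish calculation would silently inflate exposure and understate the rate.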

MSD’s safety team found that switching to EAIR uncovered previously hidden safety signals in 12% of their drug programs - especially in long-term therapies where exposure times varied wildly. A drug for rheumatoid arthritis might show low risk with IR, but EAIR revealed a spike in infections in patients who stayed on treatment beyond 18 months.

That’s why EAIR is becoming the gold standard. The FDA’s 2024 draft guidance proposes standardized methods for calculating it, and CDISC’s latest guidelines now require both IR and EAIR for serious adverse events in oncology trials. The goal? To make safety data reflect real-world use - not just trial math.


Relative Risk: Comparing Safety Between Groups

Once you have your rates - whether IR, EIR, or EAIR - the next step is comparing them. That’s where relative risk comes in. If Drug A has an EAIR of 45 events per 100 patient-years and Drug B has 28, the incidence rate ratio (IRR) is 45 ÷ 28 ≈ 1.6. That means Drug A has a roughly 60% higher rate of the event per year of exposure.

But numbers alone aren’t enough. Confidence intervals tell you whether that difference is real or just random noise. The FDA now requires them in submissions. If the 95% confidence interval for the IRR is 1.1 to 2.4, the whole range sits above 1.0, so you can be confident the difference is statistically meaningful. If it’s 0.8 to 2.1, the interval includes 1.0 - you can’t say for sure the rates differ.
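
A Wald-type interval for an incidence rate ratio can be sketched as follows (a simplified large-sample approximation using hypothetical arm counts; real submissions would use validated statistical packages):

```python
import math

def irr_wald_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Incidence rate ratio with a Wald-type 95% confidence interval,
    computed on the log scale (a standard large-sample approximation)."""
    irr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR)
    lower = math.exp(math.log(irr) - z * se_log)
    upper = math.exp(math.log(irr) + z * se_log)
    return irr, (lower, upper)

# Hypothetical arms: 45 vs 28 events, each observed over 100 patient-years.
irr, (lower, upper) = irr_wald_ci(45, 100, 28, 100)
print(round(irr, 2))   # 1.61
# If the whole interval sits above 1.0, the rate difference is
# statistically meaningful at the 5% level.
```

The interval is built on the log scale because log(IRR) is approximately normal for moderate event counts; exponentiating the endpoints maps it back to the ratio scale.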

Statisticians use the Wald method for IRR confidence intervals. In R, functions like riskratio() handle this automatically. But many teams still make mistakes - 31% of initial analyses have date-handling errors, according to PharmaSUG. A wrong start date, a missing discontinuation record, or an unadjusted holiday period can throw off the whole calculation.

Competing Risks and Why Kaplan-Meier Fails

There’s another layer most people miss: competing risks. What if someone dies before they ever have the adverse event you’re tracking? In cancer trials, death is common. If you use the classic Kaplan-Meier method to estimate time until nausea, you’re assuming everyone stays at risk - even those who died. That overestimates risk.

A 2025 study in Frontiers in Applied Mathematics and Statistics showed that using Kaplan-Meier in these cases leads to inaccurate safety profiles. Instead, researchers now use cumulative hazard ratio estimation. This breaks down risk into separate hazards: one for death, one for the adverse event. It gives a truer picture of how likely an event is, given you’re still alive and on treatment.
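
The separation of hazards can be illustrated with a toy Aalen-Johansen-style cumulative incidence calculation (the standard nonparametric competing-risks estimator; the data here are hypothetical, with cause 1 = adverse event, cause 2 = death, cause 0 = censored):

```python
# A minimal sketch of the competing-risks idea: estimate the cumulative
# incidence of an adverse event (cause 1) when death (cause 2) can occur
# first. Hypothetical (time, cause) pairs; cause 0 = censored.
data = [(1, 1), (2, 2), (3, 1), (4, 2), (5, 0), (6, 1)]

def cumulative_incidence(data, cause):
    """Aalen-Johansen-style estimate: at each event time, the increment is
    overall event-free survival just before t times the cause-specific
    hazard at t."""
    surv = 1.0          # probability of being event-free (any cause) so far
    cif = 0.0
    at_risk = len(data)
    for t, c in sorted(data):
        if c == cause:
            cif += surv * (1 / at_risk)   # increment weighted by survival
        if c != 0:                        # any event reduces overall survival
            surv *= 1 - 1 / at_risk
        at_risk -= 1                      # events and censorings leave risk set
    return cif

print(round(cumulative_incidence(data, cause=1), 3))   # 0.667
```

Treating the two deaths as ordinary censorings and using 1 − Kaplan-Meier on the same data would push the estimate all the way to 1.0, because the estimator assumes the dead patients remained at risk of the adverse event.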


Real-World Challenges in Implementation

Even with clear science, putting this into practice is hard. A 2024 PhUSE survey found that SAS programmers spent 3.2 times longer building EAIR analyses than IR ones - 14.7 hours versus 4.5. Common errors? Incorrect event dates (28%), ignoring treatment interruptions (19%), and inconsistent patient-year math (23%).

Roche found that 35% of medical reviewers didn’t understand EAIR at first. They thought a higher number meant worse safety - not realizing it was per year of exposure. Training became mandatory. On the flip side, the PhUSE GitHub repository for standardized EAIR macros has been downloaded over 1,800 times. Teams using it reported an 83% drop in programming errors.

CDISC’s Therapeutic Area User Guide now mandates specific variables for exposure time and event counts. The FDA’s Biostatistics Review Template includes checklists for exposure calculation methods. If you can’t prove how you calculated patient-years, your submission gets flagged.

What This Means for You

If you’re reading a clinical trial summary, don’t just look at "X% had side effects." Ask: Over how long? Was it IR, EIR, or EAIR? If it’s IR, the data might be outdated. If it’s EAIR, you’re seeing a more accurate picture of real-world risk.

For patients, this matters because safety isn’t about whether a side effect happened - it’s about how likely it is to happen while you’re on the drug. A drug with 10% IR might seem safer than one with 15%. But if the first was given for 2 weeks and the second for 2 years, the 15% drug might actually be far safer per day of use.
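
That comparison is simple arithmetic (illustrative numbers only):

```python
# Back-of-the-envelope version of the comparison above: naive IR
# versus risk per day of use.
drug_a_ir, drug_a_days = 0.10, 14    # 10% of patients, over 2 weeks
drug_b_ir, drug_b_days = 0.15, 730   # 15% of patients, over 2 years

per_day_a = drug_a_ir / drug_a_days  # ~0.71% per day of use
per_day_b = drug_b_ir / drug_b_days  # ~0.02% per day of use

print(per_day_a > per_day_b)   # True: the "safer-looking" drug is riskier per day
```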

The industry is moving toward transparency. The global market for clinical safety software hit $1.84 billion in 2023, growing at over 22% a year. More companies are using EAIR. More regulators are demanding it. And more tools are being built to make it easier.

What’s next? By 2027, experts predict 92% of Phase 3 drug submissions will include EAIR. The FDA’s Sentinel Initiative is even testing AI tools that auto-detect safety signals using exposure-adjusted data - and early results show a 38% improvement in early warning detection.

Understanding these methods isn’t just for statisticians. It’s for anyone who wants to know what the numbers really mean - and whether a drug’s safety profile is truly understood.

What’s the difference between incidence rate (IR) and exposure-adjusted incidence rate (EAIR)?

IR is the percentage of people who had an adverse event, regardless of how long they were on the drug. EAIR calculates events per unit of exposure time - usually per 100 patient-years - and accounts for how long each person actually took the drug, including interruptions. EAIR gives a more accurate picture of risk over time, especially in long-term trials.

Why did the FDA start requiring EAIR in 2023?

The FDA moved to EAIR because traditional IR methods were misleading in trials with varying treatment durations. Studies showed IR could underestimate true event rates by up to 37% when patients stayed on drugs longer than controls. EAIR accounts for actual exposure time, reducing misinterpretation of safety data and improving risk-benefit assessments.

Is EIR better than IR for all types of adverse events?

EIR is better than IR for recurrent events because it measures frequency per year of exposure. But it overstates risk if one person has many events - it counts events, not people. EAIR improves on EIR by adjusting for both recurrence and variable exposure, making it the most accurate for complex, long-term therapies.

What are competing risks in adverse event analysis?

Competing risks occur when another event - like death - prevents the observation of the adverse event you’re studying. For example, in cancer trials, patients may die before experiencing nausea. Using traditional methods like Kaplan-Meier overestimates risk because it assumes everyone remains at risk. Cumulative hazard ratio estimation separates death risk from adverse event risk to give a more accurate picture.

How do pharmaceutical companies implement EAIR in practice?

Companies use standardized SAS or R code based on CDISC ADaM datasets, with variables for exposure time, treatment start/end dates, and event counts. The PhUSE GitHub repository offers reusable macros that reduce programming errors by 83%. Key steps include validating exposure duration outliers, handling treatment interruptions, and ensuring event dates match protocol records. Training is critical - many reviewers initially misinterpret EAIR as "higher number = worse safety," not realizing it’s normalized per year.

14 Comments

  • Tom Forwood

    February 7, 2026 AT 21:26

    So basically, if you're on a drug for 5 years and get one headache, that's way less scary than someone who gets 5 headaches in 5 days? Makes sense. I always thought '15% had side effects' was the whole story. Turns out it's like saying '30% of cars have dents' without telling you if it was a parking lot or a highway crash. 😅

  • Simon Critchley

    February 8, 2026 AT 18:09

    IR is the pharmaceutical industry’s version of ‘trust me bro’ statistics. EIR? Still a band-aid. EAIR? Now we’re talking. You’re not just measuring events-you’re measuring *exposure-time-adjusted biological assault*. If your drug causes 12 vomiting episodes in one patient over 18 months, EAIR says: ‘Hey, that’s 800 events per 100 PY.’ But IR? ‘Oh cool, 1 person puked.’ That’s not safety data-that’s a joke with a FDA stamp. 🤡

  • Camille Hall

    February 9, 2026 AT 08:43

    I work in clinical ops and I can tell you-teams still mix up exposure windows. One guy thought ‘treatment interruption’ meant weekends. I had to explain that if someone goes on vacation for 3 weeks, that’s not ‘on drug time.’ EAIR sounds fancy, but the real win is when your data analyst stops guessing and starts counting properly. 🙌

  • Joshua Smith

    February 11, 2026 AT 01:02

    Just read this whole thing twice. I’m not a statistician, but this changed how I look at drug safety. If a drug has 20% IR for dizziness, but EAIR is 8 per 100 PY, that’s actually pretty mild. It’s not about how many people got dizzy-it’s how often, and for how long. Mind blown.

  • Elan Ricarte

    February 12, 2026 AT 23:33

    Let me get this straight-some companies are still using IR in 2025? Like, are they trying to get sued? I’ve seen trial reports where they hid the fact that placebo group dropped out at 6 months while the drug group was on for 4 years. IR makes that look equal. That’s not science. That’s fraud with a white coat. EAIR isn’t optional-it’s the bare minimum. And if your CRO can’t spit out EAIR? Fire them. 💥

  • Ritteka Goyal

    February 13, 2026 AT 04:12

    Guys I work in pharma in India and we just switched to EAIR last month. It was chaos. My boss thought higher EAIR = more dangerous. I had to make a slide with a car analogy: ‘If you drive 100km and get 1 flat tire, that’s better than driving 10km and getting 1 flat tire.’ He finally got it. Now we’re using PhUSE macros. 83% fewer errors? YES. Also, we added a ‘holiday adjustment’ variable. Because Indians take long vacations. 🇮🇳✈️

  • Alex Ogle

    February 14, 2026 AT 10:31

    Competing risks? Oh man. I was in a cancer trial where we tracked nausea using Kaplan-Meier. Turns out, half the patients died before they even got dizzy. So our ‘nausea risk’ was 42%. Reality? More like 8%. We re-ran it with cumulative hazard ratios and the whole safety profile flipped. It’s not just math-it’s survival math. This stuff saves lives. Or at least, prevents false panic.

  • Lyle Whyatt

    February 16, 2026 AT 01:01

    As an Aussie, I’ve seen this play out in melanoma trials. We had a drug with 18% IR for rash. Looked bad. EAIR? 14 per 100 PY. Why? Because patients stayed on it for 3 years. Meanwhile, the comparator drug? 12% IR but only 4 months exposure. So EAIR was 32 per 100 PY. The ‘worse’ drug was actually safer per day. That’s the power of time-adjusted data. We presented it at the Sydney Biostats Summit. Got a standing ovation. 🇦🇺👏

  • Ashlyn Ellison

    February 17, 2026 AT 14:03

    EAIR sounds complicated, but it’s just common sense. If you’re on a drug for 10 years and get one side effect, that’s not a red flag. If you get it every week? That’s a problem. Stop hiding behind percentages. Just tell me: how often does it happen, per year? That’s all I need.

  • Ken Cooper

    February 18, 2026 AT 12:38

    Just want to say-thank you for writing this. I’m a med student, and I was so confused by trial data. Now I know to ask: ‘What’s the EAIR?’ and ‘What’s the CI?’ I printed this out and stuck it on my fridge. My roommate thinks I’m weird. I don’t care. This is the stuff that matters.

  • John McDonald

    February 19, 2026 AT 04:12

    Biggest mistake I’ve seen? People think EAIR = ‘more events = worse.’ Nope. It’s ‘events per year.’ A higher number means more frequent, not necessarily more dangerous. We trained 40 reviewers last quarter. Half of them thought a 75 EAIR was ‘terrifying.’ It was 75 events over 1000 patient-years. That’s 7.5% per year. Not scary. Just frequent. We made a meme: ‘EAIR isn’t scary. Your math is.’

  • MANI V

    February 20, 2026 AT 00:41

    Why are we even discussing this? The real issue is that pharma companies manipulate exposure time. They let people drop out early if they get side effects. Then they say, ‘Look, only 5% had nausea!’ But if you remove the people who quit because they were sick, you’re lying. EAIR can’t fix that. You need to audit the raw data. Not the math. The honesty.

  • Monica Warnick

    February 21, 2026 AT 21:35

    I cried reading this. Not because I’m emotional-because I’ve spent 8 years fighting to get EAIR adopted. I was told ‘too complicated.’ ‘Too expensive.’ ‘No one cares.’ I made a 37-slide deck with cats. One cat = 1 patient-year. It went viral in our internal Slack. Now we have a mascot. His name is Mr. PY. He has a tiny calculator. We love him. 💔

  • Chelsea Deflyss

    February 23, 2026 AT 12:53

    EAIR? Pfft. I’ve seen trials where they didn’t even log when patients stopped. One guy was ‘on drug’ for 12 years but only took it for 4. They didn’t know. So EAIR was 120 events per 100 PY. Reality? 40. They published it. I reported it. They got fined. I got a thank you email from the FDA. I’m not a hero. I just didn’t shut up.
