
Standard Elements in Studies of Adverse Events and Medical Error: the SESAME statement
This article introduces the SESAME (Standard Elements in Studies of Adverse Events and Medical Error) Statement, a new reporting guideline designed to improve the transparency, consistency, and reproducibility of research focused on adverse events, near-misses, and medical errors. Developed by an international, multidisciplinary group, the statement responds to long-standing problems in patient safety research, including inconsistent definitions of harm, variable methods for identifying and characterizing events, and poor reporting of causation, preventability, and reliability. The authors outline the rationale for SESAME, describe its development process in alignment with EQUATOR Network standards, and present a structured checklist of core reporting elements applicable across diverse study designs. By standardizing how adverse events and medical errors are defined, detected, and reported, the SESAME Statement aims to enable more meaningful comparisons across studies, strengthen systematic reviews and meta-analyses, and ultimately support more effective patient safety improvement efforts.

Problems with existing reporting standards for adverse event and medical error research
This BMJ Quality & Safety article argues that current research reporting guidelines inadequately support studies focused on adverse events, near-misses, and medical errors, leading to inconsistent methods, poor transparency, and limited comparability across patient safety research. After reviewing 12 relevant EQUATOR Network reporting guidelines, the authors show that none comprehensively addresses eight core elements they identify as essential for high-quality adverse event reporting, such as clear definitions of harm, standards for causation and preventability, reviewer training, fidelity to detection methods, and reliability of data abstraction. Many guidelines include none or only a few of these elements, confirming a major gap in existing standards. To address this gap, the authors describe the rationale and conceptual framework for the Standard Elements in Studies of Adverse Events and Medical Error (SESAME) guideline under development, which aims to improve transparency, reproducibility, and interpretability across diverse study designs and ultimately strengthen the evidence base for patient safety improvement.
Conceptual Flow Diagram for Retrospective Studies

This flow diagram depicts the basic process and elements that should be described for typical retrospective reviews in studies of adverse events and medical error. We highlight a number of features we felt were important for investigators to detail in manuscripts in order to enhance transparency and facilitate comparisons across studies.
- Starting with data inputs, investigators should detail the sources of data they are using. This may include event reporting systems, use of trigger tools, review of cases meeting specific screening criteria, or electronic data feeds.
- Investigators should specify the scope of their reviews. This includes what kinds of encounters are within scope, the time frame for capture of an adverse event following some index encounter, and whether acts of omission are included.
- Once a putative safety event review is under way, investigators should specify their review process. This includes whether they use a single reviewer or a two-tiered review, and whether they use dual first-level reviewers, as recommended by the Global Trigger Tool, or a single first-level reviewer. They should also specify whether the review process uses implicit review, explicit review, or some algorithm in making determinations about adverse events. Finally, the process for selecting reviewers, and the reviewers' qualifications, experience, and training, should be made explicit.
- The first question in review is whether the event was caused by or contributed to by health care or whether it was due to the patient's underlying condition. In making this determination, investigators should specify the standard of causation used. This may follow the Harvard Medical Practice Study, which requires that an event be "caused by or due to health care" and uses a 6-point scale to reflect confidence in the determination of causation. Alternatively, investigators may use an approach along the lines of the Global Trigger Tool, whose standard is that an event was "resulting from or contributed to by healthcare."
- If an event was felt to be caused by or contributed to by health care, the next question is whether it reached the patient. If not, this would be considered a near miss. Though investigators may use different severity scales, the NCC MERP index includes unsafe conditions and intercepted events in the near-miss category.
- If an event did reach the patient, the next question is whether it caused physical harm. If not, this would be considered a non-harm event. This category also includes events where monitoring or intervention was required in order to prevent harm.
- If physical harm occurred, this is considered an adverse event. Investigators should include a description of how adverse events are summarized; this may include rates or ratios. Investigators should also specify whether they captured each AE separately or considered cascading events as a single occurrence.
- For AEs, and for non-harm events if these are included, investigators should describe harms by type, severity or disability, and specify whether they attempt to assess preventability or mitigatability. If so, they should specify which scales they use for severity, the taxonomy used for AE or event type, and, as relates to preventability, the definition used, any scales used, and the specific terminology used for different tiers of those scales.
- Determinations of adverse events and preventability should include assessments of interrater reliability.
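The classification steps above amount to a simple decision flow. As a rough sketch only (the function, field, and category names below are illustrative and not part of the SESAME Statement), the flow from causation through near miss, non-harm event, and adverse event could be expressed as:

```python
# Hypothetical sketch of the retrospective-review classification flow
# described above; names and labels are illustrative, not SESAME terms.
from dataclasses import dataclass


@dataclass
class EventReview:
    caused_or_contributed_by_healthcare: bool  # causation determination
    reached_patient: bool                      # did the event reach the patient?
    physical_harm: bool                        # did it cause physical harm?


def classify_event(review: EventReview) -> str:
    """Walk the decision flow described in the bullets above."""
    if not review.caused_or_contributed_by_healthcare:
        return "not a safety event"  # attributed to the underlying condition
    if not review.reached_patient:
        # Includes unsafe conditions and intercepted events (per NCC MERP)
        return "near miss"
    if not review.physical_harm:
        # May still have required monitoring or intervention to prevent harm
        return "non-harm event"
    return "adverse event"
```

For example, an event caused by health care that reached the patient without physical harm would be classified as a non-harm event under this sketch.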
Use of artificial intelligence might flow from electronic data inputs and incorporate the process described above to arrive at a description of events, a prediction of events, or inclusion of recommended actions. Studies using artificial intelligence should include a description of the data set used for training, the algorithms and approaches used, and whether the application is descriptive or predictive.
