How Intended Use Shapes Your SaMD Classification and FDA Pathway

As software takes over more clinical functions that were once performed by hardware, the definition of Software as a Medical Device, or SaMD, has expanded across nearly every area of healthcare. Modern SaMD tools can screen for chronic diseases, triage patients, analyze imaging data, track physiologic signals, and increasingly incorporate artificial intelligence to guide clinical decision making. This shift has opened the door to faster innovation, but it has also created challenges for manufacturers who must demonstrate that these digital tools are safe, reliable, and appropriate for their intended role in patient care.

For first-time founders in MedTech, few decisions have more influence on your regulatory pathway than how you define the intended use of your software and how you classify its risk. These two concepts determine your FDA pathway, the evidence you need, the depth of your documentation, and the expectations regulators will apply during their review. Getting this right early can save months of time, avoid expensive redesigns, and prevent an accidental leap into a higher risk category.

What Counts as SaMD? A Quick Refresher

Most regulatory agencies follow the IMDRF definition of SaMD: software intended to be used for one or more medical purposes that performs those purposes on its own, without being part of a hardware medical device.

Common examples include software that:

  • Analyzes medical images to detect abnormalities

  • Classifies physiologic signals to support diagnosis

  • Calculates risk scores for clinical conditions

  • Recommends treatment options or guides therapy

  • Processes biometric data from wearables in a clinically meaningful way

On the other hand, software that supports administrative tasks, billing, scheduling, or workflow tracking is not considered SaMD. General wellness apps that provide lifestyle guidance without a medical purpose also fall outside the definition.

AI-enabled tools can fall into either category depending on what they claim to do. A predictive algorithm that identifies sleep stages is often not SaMD. A predictive algorithm that flags potential respiratory distress very likely is.

Why Intended Use Is the First Step in SaMD Classification

Your intended use statement is a brief description of what your SaMD does, who it is for, and the clinical purpose it serves. Although it is usually only one or two sentences, it determines:

  • Whether your product is a medical device at all

  • FDA risk classification

  • Your regulatory pathway (510(k), De Novo, or PMA)

  • Clinical evidence requirements

  • Verification and validation expectations

  • Applicability of standards like IEC 62304, ISO 14971, and ISO/IEC 42001

  • Whether your AI model poses higher regulatory concern

For AI-driven systems, intended use statements also shape how FDA evaluates model behavior, training data, bias, drift management, and whether the algorithm is considered locked or adaptive.

A poorly defined intended use can inadvertently elevate your device into a higher risk category or require unnecessary clinical data. A narrow, well-controlled intended use keeps you on the fastest and most appropriate path.

Using a Structured Framework to Categorize SaMD Risk

Regulators evaluate SaMD not only by what it does, but also by the clinical situations in which it is used. To help manufacturers classify risk consistently, the IMDRF created a four-tier framework that considers:

  1. The significance of the information the software provides

    • Does the software inform, drive, or directly diagnose and treat?

  2. The state of the healthcare situation or condition in which the information is used

    • Is it non-serious, serious, or critical?

Using these criteria, SaMD falls into one of the following categories:

  • Category IV (very high impact): Software that treats or diagnoses conditions in critical situations, where incorrect output could result in immediate and severe harm.

  • Category III (high impact): Software that treats or diagnoses in serious situations, or drives clinical management in critical ones, where harm from incorrect output is still significant.

  • Category II (medium impact): Software that treats or diagnoses in non-serious situations, drives clinical management in serious ones, or informs clinical management in critical ones.

  • Category I (low impact): Software that informs clinical management in serious or non-serious situations, or drives it in non-serious ones, where risks are minimal.
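Because the framework is just a two-axis lookup, one common reading of the IMDRF matrix (from IMDRF/SaMD WG/N12) can be sketched as a small table. This is an illustrative helper, not an official tool; the axis labels and function name are our own:

```python
# Hypothetical sketch of the IMDRF SaMD risk matrix (IMDRF/SaMD WG/N12).
# Axis 1: significance of the information ("treat_diagnose", "drive", "inform").
# Axis 2: state of the healthcare situation ("critical", "serious", "non_serious").

IMDRF_MATRIX = {
    ("treat_diagnose", "critical"): "IV",
    ("treat_diagnose", "serious"): "III",
    ("treat_diagnose", "non_serious"): "II",
    ("drive", "critical"): "III",
    ("drive", "serious"): "II",
    ("drive", "non_serious"): "I",
    ("inform", "critical"): "II",
    ("inform", "serious"): "I",
    ("inform", "non_serious"): "I",
}

def samd_category(significance: str, state: str) -> str:
    """Return the IMDRF SaMD category (I-IV) for a significance/state pair."""
    try:
        return IMDRF_MATRIX[(significance, state)]
    except KeyError:
        raise ValueError(f"Unknown combination: {significance!r}, {state!r}")

# Example: software that drives clinical management in a critical situation
print(samd_category("drive", "critical"))  # → III
```

Note how the same software moves up a category as the clinical situation becomes more serious, even if its function is unchanged: this is why the intended use statement, not the algorithm, is what sets the category.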

AI systems often move into higher categories more quickly because of their increased influence on clinical decision making and the difficulty of fully characterizing model behavior.

Correctly applying this framework early helps you understand your regulatory obligations and assemble the right evidence before you ever speak to a regulator.

Special Considerations for AI-Enabled SaMD

AI brings new capabilities to software, but it also introduces new risk factors. When AI influences your intended use statement or your risk categorization, regulators evaluate additional aspects such as:

  • Training and test data quality

  • Representativeness and potential bias

  • Explainability and transparency

  • Performance in edge cases

  • Model drift and retraining strategy

  • Human oversight and fallback mechanisms

For example, a model that “assists clinicians in identifying abnormal cardiac rhythms” is very different from a model that “automatically detects and labels arrhythmias.” The former may be eligible for Category II or III, while the latter might fall into Category III or IV, requiring more rigorous testing and regulatory scrutiny.

In short, AI amplifies both capability and regulatory responsibility.

Why Intended Use and Risk Category Determine Everything That Comes Next

Once intended use and risk category are established, your regulatory pathway becomes much clearer:

  • Low-risk SaMD (Category I or II) is more likely to fit into a 510(k) pathway if a predicate exists.

  • Novel moderate-risk SaMD without a predicate may require De Novo classification, which then creates a predicate that future 510(k) submissions can cite.

  • High-risk SaMD with direct diagnostic or treatment functions often requires PMA.

From there, your classification drives requirements for:

  • Software lifecycle documentation (IEC 62304)

  • Risk management (ISO 14971)

  • Cybersecurity controls and threat modeling

  • Human factors and usability engineering (IEC 62366-1)

  • AI governance and transparency (ISO/IEC 42001)

  • Verification and validation activities

  • Clinical evidence, if applicable

This is why founders should define intended use early and treat it as part of product strategy, not an afterthought.

In conclusion

For any SaMD or AI-enabled product, defining intended use and understanding risk categorization are foundational steps that shape your entire regulatory plan. These decisions influence your FDA pathway, required evidence, documentation expectations, and the depth of your quality processes. By applying a structured risk framework and being deliberate about how you describe your software’s role in clinical decision making, you can avoid unnecessary regulatory burdens and move toward market faster.

If you need help shaping your intended use statement, categorizing risk, or planning your SaMD documentation, Unigen can guide you through each step so you avoid delays and build a product that is safe, compliant, and ready for submission. Contact us for a consultation.
