When AI Meets the Rulebook: Interpretation and Implementation of Safety Standards in the Age of Artificial Intelligence

Opportunities, Risks and Limitations of AI When It Comes to Reading Standards

The rapid development of Large Language Models (LLMs) and AI tools is increasingly changing the way technical documents and regulatory standards are interpreted and applied. What is already possible today – and where clear limitations remain – affects key areas of Lorit Consultancy's day-to-day work as well: from functional safety according to ISO 26262, to medical standards such as IEC 60601 and IEC 62304, to quality management systems like ISO 9001 and IATF 16949, and risk management according to ISO 14971.

This blog draws on practical experience and concrete examples to explore how reliably artificial intelligence interprets normative texts, where typical sources of error may lie, what risks can arise from incorrect interpretations – and where AI can indeed help.

How reliably does AI interpret complex normative requirements?

Modern AI models can analyse large amounts of text and present the results in a structured way. When working with standards, this means:

  • AI can organise chapters logically
  • It can summarise complex correlations (though not always in a truly logical way)
  • It can explain terminology and cross-references better than many traditional search tools
  • It provides an initial orientation regarding scope and key requirements

However, AI does not “understand” standards in a legal-technical sense. It recognises statistical patterns in wording, not the underlying formal logic. Reliability therefore depends heavily on input quality, context, and the specific version of the standard.

Thus, while AI often delivers good explanatory outputs – for example on the structure of the ISO 26262 safety lifecycle – it can make mistakes in precise requirements such as ASIL decomposition, hardware architectural metrics (ISO 26262), or detailed rules in standards like IEC 61010-1.

Typical sources of error

  1. Incorrect output

One recurring issue with common AI tools is that they can generate plausible-sounding but incorrect normative requirements, such as:

  • fictional chapters
  • non-existent tables
  • incorrectly interpreted safety limit/threshold values

This is particularly risky in safety and medical domains. For example, I recently asked a widely used AI tool to explain a table with electrical clearances and voltages from a standard, and the result was disappointing: the table and values were interpreted completely incorrectly. So, caution is advised here!

  2. Outdated or incomplete data sources

Because many standards are not publicly accessible (except through paid licences), AI models are frequently trained on secondary sources such as online articles, presentations, or forum discussions. This can lead to:

  • mixing of old and new versions of standards
  • interpretations based on online commentary rather than the original text
  • missing details, particularly those found in annexes

  3. Human-influenced errors

Inaccurate or leading prompts can unintentionally bias the response. Models also generate answers based on probabilities – not on absolute truth.

Wrong or imprecise prompts may result in:

  • misinterpretations
  • overly broad or overly complicated answers
  • mixing of unrelated topics or standards
  • “embellished” statements if the user implicitly nudges the model in a certain direction (see the example below)
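
As a simple, entirely hypothetical illustration of the last point, compare the same question asked two ways (neither prompt is quoted from ISO 26262; both are invented for this example):

    # Hypothetical prompts - neither is quoted from ISO 26262.
    leading_prompt = (
        "Confirm that ISO 26262 allows an ASIL D requirement to be "
        "decomposed into two QM elements."
    )  # embeds a false premise the model may simply echo back

    neutral_prompt = (
        "Which ASIL decomposition schemes does ISO 26262-9 define for an "
        "ASIL D requirement, and under which conditions do they apply?"
    )  # open question that leaves room for the standard's actual rules
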
  4. Technical limitations and bugs

Complex relationships, mathematical challenges such as those in FMEDA or SPFM/LFM calculations, and iterative models like HARA → FSC → TSC can exceed the sequential reasoning capabilities of many AI tools. Even modern tools may misinterpret tables, metrics, or calculations.
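
To see why these calculations trip up AI tools (and humans), here is a minimal sketch of the two hardware architectural metrics from ISO 26262-5; the element names and failure rates below are invented purely for illustration, not taken from any real FMEDA:

    # Minimal FMEDA-style sketch of SPFM and LFM (ISO 26262-5).
    # All names and failure rates (in FIT) are hypothetical.
    elements = [
        # total failure rate, single-point/residual share, latent multi-point share
        {"name": "MCU core", "lam_total": 100.0, "lam_spf_rf": 1.0, "lam_mpf_lat": 4.0},
        {"name": "Watchdog", "lam_total": 20.0,  "lam_spf_rf": 0.2, "lam_mpf_lat": 0.5},
        {"name": "ADC",      "lam_total": 50.0,  "lam_spf_rf": 0.8, "lam_mpf_lat": 2.0},
    ]

    lam_total  = sum(e["lam_total"] for e in elements)
    lam_spf_rf = sum(e["lam_spf_rf"] for e in elements)
    lam_mpf_l  = sum(e["lam_mpf_lat"] for e in elements)

    # Single-Point Fault Metric: coverage against single-point/residual faults
    spfm = 1.0 - lam_spf_rf / lam_total
    # Latent Fault Metric: coverage of the remaining faults against latent faults
    lfm = 1.0 - lam_mpf_l / (lam_total - lam_spf_rf)

    print(f"SPFM: {spfm:.2%} (ASIL D target: >= 99 %)")   # 98.82%
    print(f"LFM:  {lfm:.2%} (ASIL D target: >= 90 %)")    # 96.13%

Even this toy version shows how easily a single mixed-up column (say, latent versus residual failure rates) silently changes the result.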

  5. Soft errors

Soft errors are well known in computing: temporary, unintentional changes in logic circuits or memory states (see our blog Are we going soft on errors?).

Such temporary changes can affect system reliability, and AI systems are, in principle, not immune to these events either.
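
As a toy illustration (the threshold value is hypothetical), a single flipped bit in a stored floating-point number is enough to silently corrupt a safety-relevant value:

    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit in the 64-bit IEEE-754 representation of a float."""
        (raw,) = struct.unpack("<Q", struct.pack("<d", value))
        (flipped,) = struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))
        return flipped

    threshold = 60.0                 # hypothetical safety threshold
    print(flip_bit(threshold, 62))   # one exponent bit flipped: ~3.3e-307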


Risks arising from incorrect interpretation of technical and safety-critical standards

Incorrect AI-generated outputs are not merely a quality issue – they can create real and tangible risks:

  1. Safety risks

If an incorrect AI assumption influences a safety concept (e.g., a wrong ASIL allocation), it may lead to insufficient safety measures.

  2. Contractual and audit risks

Standards are often part of contractual requirements. Misinterpretations can lead to non-compliance, audit deviations, or even liability issues.

  3. Incorrect risk assessments

In the medical device industry, AI-generated risk assessments according to ISO 14971 may be misleading if essential parameters are missing or incorrectly linked.
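
To make “incorrectly linked parameters” concrete, here is a sketch of a risk-matrix lookup; the levels and acceptability decisions are invented, since ISO 14971 leaves them to the manufacturer's risk policy:

    # Illustrative ISO 14971-style risk evaluation: risk combines the
    # probability of occurrence of harm with the severity of that harm.
    # All levels and acceptability decisions below are hypothetical.
    ACCEPTABLE = {
        # (probability, severity): acceptable without further risk reduction?
        ("improbable", "serious"): True,
        ("occasional", "serious"): False,
        ("probable",   "minor"):   False,
        # ... the full matrix comes from the manufacturer's risk policy
    }

    def evaluate(probability: str, severity: str) -> str:
        if (probability, severity) not in ACCEPTABLE:
            # A tool that silently drops or guesses this linkage produces
            # exactly the kind of misleading assessment described above.
            return "not evaluated - risk policy incomplete"
        if ACCEPTABLE[(probability, severity)]:
            return "acceptable"
        return "risk reduction required"

    print(evaluate("occasional", "serious"))  # -> risk reduction required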

  4. Quality degradation

Wrong AI outputs in quality management (e.g., incorrect process requirements) can undermine the consistency of a quality management system.

Where AI makes sense – and where it is better left out

Typical use cases and their assessment:

  • Summarising sections and providing initial orientation: Yes
  • Explaining technically complex concepts: Yes, with caution
  • Reformatting long texts: Yes, but the risk of misinterpretation remains
  • Deciding on conformity/compliance: No – human judgement required
  • Detailed safety interpretations and risk analyses (e.g., FMEA): No – clear limitations
  • Legally binding interpretations (standards often have legal impact): No – requires human expertise
  • Handling unstructured or sensitive data: Rather not – high risk of misinterpretation

Conclusion

Public AI tools can support certain aspects of standards interpretation, especially when a quick overview is required. However, when it comes to detailed, safety-critical or compliance-relevant topics, AI does not automatically increase efficiency.

In the end, human expertise determines the correctness and reliability with which normative requirements are implemented.

By Dijaz Maric, Quality Management & Reliability Engineering Consultant

Whether you’re navigating functional safety, medical compliance, cybersecurity, or QMS topics, our team works across these areas daily and can support you in applying standards confidently and effectively.

Contact us at info@lorit-consultancy.com or via our contact form for bespoke consultancy, or join one of our upcoming online courses.
