When AI Meets the Rulebook: Interpretation and Implementation of Safety Standards in the Age of Artificial Intelligence

The rapid development of Large Language Models (LLMs) and AI tools is increasingly changing the way technical documents and regulatory standards are interpreted and applied. What is already possible today – and where clear limitations remain – affects key areas of Lorit Consultancy's day-to-day work as well: from functional safety according to ISO 26262, to medical standards such as IEC 60601 and IEC 62304, to quality management systems like ISO 9001 and IATF 16949, and risk management according to ISO 14971.
This blog draws on practical experience and concrete examples to explore how reliably artificial intelligence interprets normative texts, where typical sources of error may lie, what risks can arise from incorrect interpretations – and where AI can indeed help.
Modern AI models can analyse large amounts of text and present the results in a structured way, which makes them a tempting aid when working with standards: quick summaries, explanations of complex concepts, and initial orientation in a long document.

However, AI does not “understand” standards in a legal-technical sense. It recognises statistical patterns in wording, not the underlying formal logic. Reliability therefore depends heavily on input quality, context, and the specific version of the standard.
Thus, while AI often delivers good explanatory outputs – for example on the structure of the ISO 26262 safety lifecycle – it can make mistakes in precise requirements such as ASIL decomposition, hardware architectural metrics (ISO 26262), or detailed rules in standards like IEC 61010-1.
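ASIL decomposition is a good example of why this matters: ISO 26262-9 only allows a handful of specific decomposition schemes, and a decomposition that merely sounds plausible is exactly the kind of detail a language model can get wrong. Below is a minimal, purely illustrative sketch that encodes those schemes; the function and data structure are our own invention, not part of any tool or of the standard itself.

```python
# Illustrative only: the ASIL decomposition schemes permitted by ISO 26262-9.
# The data structure and function are hypothetical, not a compliance tool.

VALID_DECOMPOSITIONS = {
    "ASIL D": [("ASIL C(D)", "ASIL A(D)"), ("ASIL B(D)", "ASIL B(D)"), ("ASIL D(D)", "QM(D)")],
    "ASIL C": [("ASIL B(C)", "ASIL A(C)"), ("ASIL C(C)", "QM(C)")],
    "ASIL B": [("ASIL A(B)", "ASIL A(B)"), ("ASIL B(B)", "QM(B)")],
    "ASIL A": [("ASIL A(A)", "QM(A)")],
}

def is_valid_decomposition(original: str, part_a: str, part_b: str) -> bool:
    """Check whether a proposed decomposition matches one of the permitted schemes."""
    allowed = VALID_DECOMPOSITIONS.get(original, [])
    return (part_a, part_b) in allowed or (part_b, part_a) in allowed

# Splitting an ASIL D requirement into two independent ASIL B(D) requirements is permitted:
print(is_valid_decomposition("ASIL D", "ASIL B(D)", "ASIL B(D)"))  # True
# "ASIL C(D) + ASIL B(D)" sounds plausible but is not a scheme defined in the standard:
print(is_valid_decomposition("ASIL D", "ASIL C(D)", "ASIL B(D)"))  # False
```

Even a trivial check like this depends on knowing the exact schemes from the standard, which is precisely the level of detail that tends to get blurred in AI-generated summaries.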
One recurring issue with common AI tools is that they can generate plausible-sounding but incorrect normative requirements.
This is particularly risky in safety and medical domains. For example, I recently asked a widely used AI tool to explain a table with electrical clearances and voltages from a standard, and the result was disappointing: the table and values were interpreted completely incorrectly. So, caution is advised here!

Because many standards are not publicly accessible (except through paid licences), AI models are frequently trained on secondary sources such as online articles, presentations, or forum discussions. As a result, their picture of the actual normative text can be incomplete, outdated, or simply wrong.
Inaccurate or leading prompts can unintentionally bias the response, and models generate answers based on probabilities, not on absolute truth. Vague or imprecise prompts therefore make misleading or incomplete answers more likely.
Complex relationships, mathematical challenges such as those in FMEDA or SPFM/LFM calculations, and iterative models like HARA → FSC → TSC can exceed the sequential reasoning capabilities of many AI tools. Even modern tools may misinterpret tables, metrics, or calculations.
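To give a feel for the arithmetic an AI tool has to keep straight in an FMEDA, here is a minimal sketch that computes the single-point fault metric (SPFM) and the latent fault metric (LFM) from a handful of invented failure rates; the component names and FIT values are made up purely for illustration.

```python
# Illustrative FMEDA-style calculation of SPFM and LFM.
# All failure rates (in FIT) are invented example values, not real component data.

elements = [
    # total failure rate, single-point/residual share, latent multiple-point share
    {"name": "MCU core",      "total": 100.0, "spf_rf": 2.0, "mpf_latent": 5.0},
    {"name": "Watchdog",      "total": 20.0,  "spf_rf": 0.5, "mpf_latent": 1.0},
    {"name": "Power monitor", "total": 30.0,  "spf_rf": 1.0, "mpf_latent": 2.0},
]

total      = sum(e["total"] for e in elements)        # 150.0 FIT
spf_rf     = sum(e["spf_rf"] for e in elements)       # 3.5 FIT
mpf_latent = sum(e["mpf_latent"] for e in elements)   # 8.0 FIT

spfm = 1 - spf_rf / total                 # single-point fault metric
lfm  = 1 - mpf_latent / (total - spf_rf)  # latent fault metric

print(f"SPFM = {spfm:.2%}, LFM = {lfm:.2%}")  # SPFM = 97.67%, LFM = 94.54%
```

A single misread value or swapped column changes the result, which is exactly the kind of slip described above.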
Soft errors are well known in computing as faults that cause temporary, unintentional changes in logic circuits or memory states (see our blog Are we going soft on errors?).
Such temporary changes can affect system reliability, and AI systems are theoretically not immune to these events either.
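Purely as an illustration of what a soft error looks like at the bit level, the sketch below flips a single bit in a stored word and uses a parity bit to detect the change; it is a toy example, not a description of how any real system protects its memory.

```python
# Toy example of a soft error: one bit of a stored word flips and a parity bit catches it.

def parity(word: int) -> int:
    """Return 1 if the 32-bit word has an odd number of set bits, else 0."""
    return bin(word & 0xFFFFFFFF).count("1") % 2

stored = 0x000000FF              # original value
stored_parity = parity(stored)   # remembered alongside the data

corrupted = stored ^ (1 << 12)   # a single-event upset flips bit 12

if parity(corrupted) != stored_parity:
    print("Soft error detected: the word no longer matches its parity bit")
```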

Incorrect AI-generated outputs are not merely a quality issue – they can create real and tangible risks:
- If an incorrect AI assumption influences a safety concept (e.g. a wrong ASIL allocation), it may lead to insufficient safety measures.
- Standards are often part of contractual requirements. Misinterpretations can lead to non-compliance, audit deviations, or even liability issues.
- In the medical device industry, AI-generated risk assessments according to ISO 14971 may be misleading if essential parameters are missing or incorrectly linked (see the sketch after this list).
- Wrong AI outputs in quality management (e.g. incorrect process requirements) can undermine the consistency of a quality management system.
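To make the point about missing parameters concrete, the hypothetical sketch below evaluates a single hazard with a simple severity-times-probability risk index; the scales, acceptability threshold and values are invented for illustration and are not taken from ISO 14971 or any real risk file.

```python
# Hypothetical ISO 14971-style risk evaluation.
# The 1-5 scales, the acceptability threshold and all values are invented for illustration.

ACCEPTABLE_LIMIT = 8  # invented acceptability criterion: risk index <= 8 is acceptable

def risk_index(severity: int, probability: int) -> int:
    """Simple risk index: severity times probability, each on a 1-5 scale."""
    return severity * probability

hazard = {
    "description": "unintended energy delivery",
    "severity": 4,
    "probability_initial": 4,
    "probability_after_mitigation": 2,  # the parameter an AI summary might silently drop
}

before = risk_index(hazard["severity"], hazard["probability_initial"])
after  = risk_index(hazard["severity"], hazard["probability_after_mitigation"])

print(f"Before mitigation: index {before}, acceptable: {before <= ACCEPTABLE_LIMIT}")  # 16, False
print(f"After mitigation:  index {after}, acceptable: {after <= ACCEPTABLE_LIMIT}")    # 8, True
```

If the post-mitigation probability is dropped or linked to the wrong hazard, the evaluation silently falls back on the wrong number and the conclusion flips; that is exactly the kind of error that is hard to spot in a fluent, well-formatted AI summary.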
Where can public AI tools genuinely help, and where do they reach their limits? A rough assessment:

| Use Case | Assessment |
| --- | --- |
| Summarising sections and providing initial orientation | Yes |
| Explaining technically complex concepts | Yes, with caution |
| Reformatting long texts | Yes, but risk of misinterpretation remains |
| Deciding on conformity/compliance | No – human judgement required |
| Detailed safety interpretations and risk analyses (e.g., FMEA) | No – clear limitations |
| Legally binding interpretations (standards often have legal impact) | No – requires human expertise |
| Handling unstructured or sensitive data | Rather no – high risk of misinterpretation |
Public AI tools can support certain aspects of standards interpretation, especially when a quick overview is required. However, when it comes to detailed, safety-critical or compliance-relevant topics, AI does not automatically increase efficiency.
In the end, expertise determines the correctness and reliability of normative implementation.
By Dijaz Maric, Quality Management & Reliability Engineering Consultant
Whether you’re navigating functional safety, medical compliance, cybersecurity, or QMS topics, our team works across these areas daily and can support you in applying standards confidently and effectively.
Contact us at info@lorit-consultancy.com or via our contact form for bespoke consultancy, or join one of our upcoming online courses.