The walls of the DEKRA office are lined with a myriad of books and technical references. Each one contains the distilled knowledge of countless hours of work, yet many will never be opened again. As we move further into the age where all information is expected to be available whenever we so much as request it, I worry that the information trapped within these books and the minds of more senior engineers may fade into obscurity as a new generation of engineers develops.
As someone born in the generational hysteresis between Millennials and Gen-Z, I can just about remember life pre-computer. Phones were bricks, and computers lived in their own room in the house, but even then, the internet was quickly overtaking “traditional media”.
Now, using AI is quickly becoming the default position for many tasks, whether that be drafting a process procedure, producing an advertisement, or rewriting something that was perfectly adequate to begin with. A recent survey by the Higher Education Policy Institute found that 94% of university students report using generative AI platforms to produce assessed work. A separate study conducted by MIT found that students who regularly used AI showed less brain activity during writing tasks, even when they were not using it.
This trajectory suggests that the integration of AI into our lives is no longer optional, but inevitable. As these tools become embedded in everyday workflows, choosing not to use them increasingly equates to accepting reduced efficiency and competitiveness. The expectation is already shifting from whether AI should be used to how well it can be applied.
This reliance introduces a clear risk: as outputs are generated rather than derived, the depth of thinking and understanding behind them diminishes. Over time, this could erode the critical thinking and practical understanding that underpin good engineering practice. This is especially true for process safety, where many standards are locked behind paywalls and proprietary guidance; AI therefore operates without access to much of the underlying authoritative material, which limits the reliability of its outputs. DEKRA has tested the use of AI for developing hazardous area classifications and found that, while the outputs may at first appear convincing, they range from technically unsubstantiated to dangerously inadequate. In one instance, AI recommended a large Zone 0 surrounding an outdoor pump, contradicting fundamental ventilation principles, simply because it had misinterpreted the wording of the prompt. The danger is that, precisely because such outputs look convincing, someone without sufficient knowledge of the topic may never think to question them.
In contrast, traditional literature offers a level of rigour and permanence that remains difficult to replicate. A well-written process safety manual is not simply a repository of information, but a structured body of knowledge which has been developed, reviewed, and refined over time. Unlike AI-generated outputs, which by definition lack transparency in their reasoning, books provide traceability: assumptions are stated, methodologies are justified, and conclusions are grounded in established principles. In process safety, where errors carry significant consequences, this reliability is not a luxury but a legal necessity. This does come with the caveat that, unlike AI, the information in a textbook cannot change. This can lead to situations where critical guidance becomes outdated long before the next revision cycle, creating a widening gap between documented best practice and emerging industry knowledge.
Moreover, engaging with technical literature demands a slower, more deliberate form of thinking. The process of interpreting, questioning, and applying information fosters a deeper understanding than passive consumption ever could. While AI may provide rapid answers, traditional media cultivates the judgment required to assess whether those answers are correct. I would certainly have more faith in the competence of someone who has studied a single book on risk assessment than in someone who has spent the same amount of time consuming generated content on the subject.
Despite these limitations, it is equally important to recognise where AI offers genuine value. It is not limited to generating reports; it also acts as a gateway to knowledge that, only a few decades ago, would have been largely inaccessible. DEKRA has observed a growing client expectation for technical excellence, reflected in the increasing depth and quality of the questions we receive. This shift suggests that the traditional gatekeeping of hazardous area classification as a “black art” confined to specialist texts is no longer acceptable. Under DSEAR, the need for an informed and competent customer is explicit. While AI cannot provide all the answers needed to make a person competent, it can play a key role in prompting the right questions. And that is before considering the wide range of other applications for AI within process safety, from predicting process failures long before any instrument could detect them to supporting complex scenario modelling that would once have required specialist expertise and days of computing time. These capabilities also demand robust human oversight: without informed review, AI-generated predictions may be misapplied, misunderstood, or over-trusted in safety-critical environments.
Ultimately, the question is not whether AI or traditional media should prevail, but how the two can coexist without undermining one another. The accessibility and adaptability of AI have the potential to democratise knowledge, breaking down barriers that once confined critical understanding to textbooks and experienced professionals. At the same time, the depth, scrutiny, and reliability of traditional sources remain essential in ensuring that this knowledge is applied correctly.
If the books in our offices are to be opened less often, their value must not be diminished, and neither must the expertise required to interpret what they contain. The goal should not simply be to work faster, but to demonstrate and uphold competence. This ensures that, like the invention of the computer itself, AI is used as a companion to, rather than a substitute for, understanding. In doing so, we avoid a future where information is abundant but understanding is shallow.
This article was written by a human and reviewed with the assistance of AI.
References:
Higher Education Policy Institute, Student Generative AI Survey 2026: https://www.hepi.ac.uk/reports/student-generative-ai-survey-2026/

