Automated Artificial Intelligence
Automated AI: execution under certainty
Automated AI represents a fundamental paradigm shift in how we conceive and deploy artificial intelligence systems in enterprise environments. While most organisations continue to focus on developing isolated models, treating each algorithm as a standalone piece of the technology puzzle, Automated AI proposes a radically different vision: the complete orchestration of the artificial intelligence cycle, from the moment data enters the system through to the execution of concrete actions in the real world.
This approach acknowledges an uncomfortable but essential truth about the current AI landscape. The real value does not lie in the sophistication of the models themselves, nor in their accuracy metrics in controlled environments. True value emerges when these systems can drive well-founded decisions and concrete actions that transform business operations. It is the difference between having a Formula 1 engine sitting in a garage and building a complete vehicle capable of winning races.
The architecture underpinning this vision is built on four fundamental pillars that work in harmony:
- End-to-end data pipelines, automatically managing the entire process from initial ingestion through to deployment in production systems.
- Human governance mechanisms that ensure efficient oversight, guaranteeing the continuous involvement of human judgement in critical decisions.
- Direct integration with existing operational infrastructure, allowing connection to enterprise workflows without costly redesigns.
- Orchestration of different types of models, coordinating symbolic and sub-symbolic capabilities according to the specific requirements of each task.
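To make the pillars concrete, here is a minimal, purely illustrative sketch of how they might surface in a single declarative pipeline specification. Every name and field below (the `Stage` and `Pipeline` types, the worker labels, the connectors) is hypothetical, not part of any published API.

```python
# Illustrative only: the four pillars read as one declarative pipeline
# spec. Every name and field here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    worker: str                             # symbolic or sub-symbolic model
    requires_human_approval: bool = False   # pillar 2: human governance

@dataclass
class Pipeline:
    stages: list                            # pillar 1: end-to-end flow
    connectors: list = field(default_factory=list)  # pillar 3: existing systems

claims_triage = Pipeline(
    stages=[
        Stage("ingest_documents", worker="vision_ocr"),
        Stage("extract_entities", worker="llm"),
        Stage("apply_eligibility_rules", worker="symbolic"),  # pillar 4: mixed models
        Stage("approve_payout", worker="symbolic", requires_human_approval=True),
    ],
    connectors=["erp", "case_management"],
)
```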
Foundations: deterministic neurosymbolic reasoning
The central concept in the neurosymbolic approach is deterministic inference based on knowledge graphs: AI systems derive conclusions through logical rules encoded in an ontologically interpreted knowledge graph structure. Because the rules are explicitly programmed into the ontology, the inference is deterministic: given the same inputs, the system will always produce the same output by following logically valid connections and reasoning rules, regardless of the complexity of the domain or whether its source records are structured or unstructured.
This approach allows domain experts to encode decision logic directly into the ontology, or network of ontologies, that models the relevant knowledge graph or graphs. Regulatory criteria, compliance checklists, product eligibility rules, inclusion and exclusion criteria, and other critical elements of domain knowledge are represented as nodes and relationships governed by the ontology's axioms and rules. The result is a deterministic structure over the set of decisions that can be made on the domain's data once it is consolidated in the knowledge graph.
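A toy sketch of the idea, assuming a triple-store encoding of the graph; the facts and the eligibility rule are invented for illustration. Because inference is forward chaining to a fixed point, the same facts always yield the same conclusions.

```python
# Toy deterministic inference over a knowledge graph stored as triples.
# Rules are data alongside the facts: if all premises hold, the
# conclusion is derived. Same facts in, same conclusions out.

FACTS = {
    ("policy_42", "covers", "water_damage"),
    ("claim_7", "filed_under", "policy_42"),
    ("claim_7", "cause", "water_damage"),
}

def rule_claim_eligible(facts):
    """A claim is eligible if its cause is covered by its policy."""
    derived = set()
    for claim, _, policy in (f for f in facts if f[1] == "filed_under"):
        for _, _, cause in (f for f in facts if f[1] == "cause" and f[0] == claim):
            if (policy, "covers", cause) in facts:
                derived.add((claim, "status", "eligible"))
    return derived

def infer(facts, rules):
    """Forward chaining to a fixed point: deterministic by construction."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

print(infer(FACTS, [rule_claim_eligible]))
# ('claim_7', 'status', 'eligible') is derived on every run.
```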
Neurosymbolic architecture: designed for critical decisions
Generative AI systems operate without a model of the world. They predict, generate, approximate — but they do not know what they know. This limitation is not technical; it is structural. For any process where errors have real consequences, statistical prediction is not enough. Reasoning is required.
Our architecture integrates three fundamental components into a unified system: a semantic layer based on formal ontologies, a knowledge layer materialised in graphs, and a cognitive layer that orchestrates connectionist AI services — LLMs, vision models, classifiers — under logical supervision.
1. Semantic layer: the ontology as contract
The ontology is not documentation. It is the formal contract that defines which entities exist in the domain, which relationships between them are valid, and which rules govern their behaviour. It encodes business logic in a language that machines can execute and auditors can verify.
When a regulator asks "why did the system make this decision?", the answer cannot be "because the model predicted it". The ontology makes it possible to respond with a traceable inference chain: these premises, these rules, this conclusion.
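As a sketch of what such a traceable chain might look like in practice (the trace structure and rule name are hypothetical), the system can record, for every derived conclusion, the rule applied and the premises it consumed, then replay that chain on demand:

```python
# Hypothetical shape of an auditable decision: not just a verdict, but
# the rule applied and the premises it consumed, replayable on demand.

from dataclasses import dataclass

@dataclass(frozen=True)
class Inference:
    rule: str
    premises: tuple

def explain(conclusion, trace, depth=0):
    """Walk a recorded trace from a conclusion back to base facts."""
    pad = "  " * depth
    step = trace.get(conclusion)
    if step is None:
        return [f"{pad}fact: {conclusion}"]
    lines = [f"{pad}{conclusion} by rule '{step.rule}'"]
    for premise in step.premises:
        lines += explain(premise, trace, depth + 1)
    return lines

trace = {
    ("claim_7", "status", "eligible"): Inference(
        rule="eligible_if_cause_covered",
        premises=(
            ("claim_7", "filed_under", "policy_42"),
            ("claim_7", "cause", "water_damage"),
            ("policy_42", "covers", "water_damage"),
        ),
    )
}
print("\n".join(explain(("claim_7", "status", "eligible"), trace)))
```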
2. Knowledge layer: the graph as operational memory
The knowledge graph instantiates the ontology with real data. Each node is a typed entity; each edge, a relationship validated against the ontological schema. The graph does not store information — it stores structured knowledge that an inference engine can traverse. This layer enables queries that no relational database can resolve: "which clients have indirect exposure to sanctioned suppliers through their subsidiaries?" The graph responds in milliseconds because the structure of the question is encoded in the structure of the data.
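A minimal sketch of that sanctioned-supplier question, assuming a triple-encoded graph with invented edge labels (`owns`, `buys_from`); a breadth-first traversal over the permitted relation types answers the multi-hop query directly:

```python
# Sketch of the multi-hop question from the text: which clients reach a
# sanctioned supplier through any chain of ownership or supply edges?
# Edge labels and data are invented for illustration.

from collections import deque

EDGES = [
    ("client_a", "owns", "sub_1"),
    ("sub_1", "owns", "sub_2"),
    ("sub_2", "buys_from", "supplier_x"),
    ("client_b", "buys_from", "supplier_y"),
]
SANCTIONED = {"supplier_x"}
FOLLOW = {"owns", "buys_from"}          # relation types the traversal may use

def has_indirect_exposure(start):
    adjacency = {}
    for subj, pred, obj in EDGES:
        if pred in FOLLOW:
            adjacency.setdefault(subj, []).append(obj)
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node in SANCTIONED:
            return True
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print([c for c in ("client_a", "client_b") if has_indirect_exposure(c)])
# ['client_a']: exposure via sub_1 -> sub_2 -> supplier_x
```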
3. Cognitive layer: LLMs under symbolic supervision
Large language models bring capabilities that symbolic systems cannot replicate: natural language understanding, fluent text generation, analysis of unstructured documents. But they operate without guardrails — they hallucinate, contradict, invent.
In our architecture, LLMs never make critical decisions. They act as an input interface (interpreting natural language queries and extracting entities from documents) and as an output interface (drafting readable explanations of the system's conclusions). Between these two ends, the reasoning is symbolic: verifiable, reproducible, auditable.
Integration patterns
- Restrictive mode: the knowledge graph is the sole decision-maker. The LLM translates the user's query into a formal query, the inference engine executes it against the graph, and the LLM verbalises the result. Hallucinations are structurally impossible because the LLM generates no substantive content — it only translates.
- Supervised mode: the LLM generates a complete response that the symbolic engine validates before delivering it. Each factual claim is checked against the graph; each inference is verified against the ontological rules. Anything that cannot be validated is removed or flagged as a hypothesis. Both patterns are sketched below.
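A compact sketch of the two patterns, with all names (`KnowledgeGraph`, the `llm_*` stubs) invented for illustration; in restrictive mode the LLM only translates in and out, and in supervised mode every claim is checked against the graph before delivery:

```python
# All names here (KnowledgeGraph, the llm_* stubs) are invented for
# illustration; real deployments would plug in actual model calls.

from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    obj: str

class KnowledgeGraph:
    """Toy graph: a set of (subject, predicate, object) facts."""
    def __init__(self, facts):
        self.facts = set(facts)

    def query(self, subject, predicate):
        # Deterministic lookup: same query, same answers, every time.
        return sorted(o for s, p, o in self.facts
                      if s == subject and p == predicate)

    def supports(self, claim):
        return (claim.subject, claim.predicate, claim.obj) in self.facts

def restrictive_mode(question, llm_translate, graph, llm_verbalise):
    """The LLM only translates in and out; the graph alone decides."""
    subject, predicate = llm_translate(question)       # NL -> formal query
    answers = graph.query(subject, predicate)          # symbolic execution
    return llm_verbalise(subject, predicate, answers)  # NL rendering only

def supervised_mode(llm_claims, graph):
    """Every factual claim from the LLM is checked before delivery."""
    validated = [c for c in llm_claims if graph.supports(c)]
    flagged = [c for c in llm_claims if not graph.supports(c)]
    return validated, flagged

graph = KnowledgeGraph({("acme", "supplier_of", "globex")})
ok, hypotheses = supervised_mode(
    [Claim("acme", "supplier_of", "globex"),    # validated against the graph
     Claim("acme", "supplier_of", "initech")],  # unverifiable: flagged
    graph,
)
```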
Why does this matter?
Purely connectionist systems scale well but fail unpredictably. Purely symbolic systems are precise but brittle in the face of real-world ambiguity. Neurosymbolic architecture is not a compromise between the two: it is a synthesis that produces capabilities neither can achieve alone — perceptual flexibility with logical guarantees, scalability with traceability, automation with control.
Artificial cognition and automated AI
Automated AI with its symbolic core does not exist in conceptual isolation. It aligns naturally with the broader concept of Artificial Cognition, providing the operational infrastructure necessary for cognitive systems to function at enterprise scale. It is not simply about building intelligent systems; it is about building systems that can operate reliably in the real world, making critical decisions whilst maintaining human oversight and control.
This architecture materialises symbolic-subsymbolic integration by implementing concrete mechanisms that allow different types of models and approaches to work in concert. It is not a competition between paradigms; it is a symphony in which each instrument contributes its unique strengths to the final outcome.
Crucially, Automated AI adds action and decision-making capacity to cognitive systems, closing the loop between perception, understanding, and action. A system may be extraordinarily intelligent, but if it cannot translate that intelligence into concrete, verifiable actions, its practical value remains limited.
Finally, and perhaps most importantly, it maintains human control over automated systems. This is not a concession to current technological limitations; it is a fundamental design principle which recognises that Artificial Cognition must operate within a framework of common sense shared with people. Automation is not about replacing human judgement, but about augmenting it with processing and reasoning capabilities that exceed human capacity whilst remaining under human oversight and control.
The future of automated AI: towards greater formal reasoning
The field of Automated AI is at a fascinating inflection point. Current research trends are not pointing towards the abandonment of symbolic reasoning in favour of ever-larger models, but towards an increasingly sophisticated integration of symbolic and sub-symbolic capabilities.
Differentiable compositionality marks a significant advance by fusing the optimisation of deep learning with the structural and formal advantages of logic programming. This gives rise to systems capable of learning and reasoning simultaneously, adapting to new data whilst preserving verifiable properties.
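One way to picture this fusion, assuming PyTorch and a toy rule of our own invention: relax the logical conjunction A AND B into a product of probabilities (a t-norm), so that rule satisfaction becomes a differentiable objective the network can be trained against.

```python
# Minimal sketch of "differentiable compositionality": a logical rule
# relaxed into a differentiable score (product t-norm), so gradient
# descent can push the perception model towards satisfying the rule.
# Assumes PyTorch; the rule and the data are toy inventions.

import torch

# "Neural" beliefs: raw scores for two atoms, A and B.
logits = torch.tensor([0.2, -1.0], requires_grad=True)

for step in range(200):
    p = torch.sigmoid(logits)       # probabilities of A and B
    rule_score = p[0] * p[1]        # soft truth value of A AND B
    loss = 1.0 - rule_score         # push the rule towards true
    loss.backward()
    with torch.no_grad():
        logits -= 0.5 * logits.grad
        logits.grad.zero_()

print(torch.sigmoid(logits))        # both probabilities driven towards 1
```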
Meanwhile, formal verification applied to learning systems is rapidly moving from a theoretical goal to a practical possibility. Methods now exist that offer formal guarantees on the performance of machine learning systems, even when these are too complex for direct human analysis.
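As a taste of the idea, assuming NumPy and toy weights: interval bound propagation pushes an entire input region through a network in one pass and returns guaranteed output bounds, a property no amount of point-wise testing can certify.

```python
# Sketch of one formal-verification technique (interval bound
# propagation): propagate an input interval through a linear layer and
# a ReLU to obtain guaranteed output bounds. Weights are toy values.

import numpy as np

def linear_bounds(lo, hi, W, b):
    """Exact interval bounds of W @ x + b for all x in [lo, hi]."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    center = W @ mid + b
    radius = np.abs(W) @ rad
    return center - radius, center + radius

W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.0, -1.0])

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lo, hi = linear_bounds(lo, hi, W, b)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone

# Any property within these bounds holds for every input in the region.
print(lo, hi)
```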
Automated causal reasoning is fundamentally changing the way artificial intelligence understands and intervenes in causal relationships. As a result, AI systems do not merely detect correlations — they can anticipate the effects of different interventions, strengthening decision-making in complex contexts.
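A toy structural causal model makes the distinction concrete (variables and mechanisms invented for illustration): intervening on a variable, Pearl's do-operator, cuts the mechanism that normally sets it, which is precisely what correlation-based prediction cannot capture.

```python
# Toy structural causal model: the do-operator overrides the mechanism
# that normally sets price, letting us simulate an intervention's effect.

import random

def simulate(do_price=None, n=100_000):
    total = 0.0
    for _ in range(n):
        demand_shock = random.gauss(0, 1)
        # Price normally responds to demand; do(price) overrides that.
        price = 10 - demand_shock if do_price is None else do_price
        sales = 50 + 2 * demand_shock - 3 * price + random.gauss(0, 1)
        total += sales
    return total / n

baseline = simulate()
intervened = simulate(do_price=8.0)
print(f"E[sales] = {baseline:.1f}, E[sales | do(price=8)] = {intervened:.1f}")
```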
Symbolic planning supported by learning techniques is an elegant integration in which learning optimises the efficiency of classical symbolic planning methods. Learning identifies useful heuristics and helps reduce search spaces, whilst the symbolic component ensures the validity of the plans generated.
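A minimal sketch of that division of labour, with a toy graph and a stand-in for the trained model: the learned estimate orders the search frontier, while a purely symbolic check guarantees that any plan returned is valid.

```python
# Best-first search where a learned heuristic orders the frontier and a
# symbolic validity check vouches for the result. Graph, goal, and the
# "learned" heuristic are toy stand-ins.

import heapq

GRAPH = {"s": ["a", "b"], "a": ["g"], "b": ["c"], "c": ["g"], "g": []}

def learned_heuristic(node):
    # Stand-in for a trained model's estimate of cost-to-goal.
    return {"s": 2, "a": 1, "b": 3, "c": 2, "g": 0}[node]

def plan(start, goal):
    frontier = [(learned_heuristic(start), start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in GRAPH[node]:
            heapq.heappush(frontier, (learned_heuristic(nxt), nxt, path + [nxt]))
    return None

def valid(path):
    """Symbolic check: every step must be a real edge in the graph."""
    return all(b in GRAPH[a] for a, b in zip(path, path[1:]))

p = plan("s", "g")
print(p, valid(p))  # ['s', 'a', 'g'] True
```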
In summary, Automated AI, with inferential symbolic reasoning at its core, goes far beyond a passing trend or technological fad. It is an essential pillar for building powerful and reliable, controllable and auditable, autonomous and transparent artificial intelligence systems. It is, ultimately, the indispensable link between theory and the practical application of AI in critical business operations, enabling safe and effective transformation.