Lokut

The conversational assistant that combines the structural robustness and precision of knowledge graphs with the interpretive and generative capabilities of large language models.

How does it work?

Why organisations choose Lokut

Generative agents work exclusively with the propositions derived from the graph. They add no external information. They do not speculate. If the graph does not contain a piece of data, the response does not include it.
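This closed-world constraint can be sketched in a few lines of plain Python (a hypothetical illustration, not GNOSS's implementation; the triples and names are invented for the example): candidate statements survive only if their supporting triple exists in the graph.

```python
# Hypothetical sketch: a generator constrained to graph-backed facts.
# Triples are (subject, predicate, object); all names are invented here.
GRAPH = {
    ("Allegory_of_Sight", "painted_by", "Jan_Brueghel_the_Elder"),
    ("Allegory_of_Sight", "created_in", "1617"),
}

def grounded_statements(candidates):
    """Keep only candidates whose triple exists in the graph.
    A fact the graph does not contain is dropped, never guessed."""
    return [t for t in candidates if t in GRAPH]

candidates = [
    ("Allegory_of_Sight", "painted_by", "Jan_Brueghel_the_Elder"),
    ("Allegory_of_Sight", "painted_by", "Caravaggio"),  # not in the graph
]
print(grounded_statements(candidates))
# → [('Allegory_of_Sight', 'painted_by', 'Jan_Brueghel_the_Elder')]
```

The filter never produces a statement the graph cannot back, which is precisely the behaviour described above: no external information, no speculation.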

Every statement has an exact origin in the graph

If the user requests full traceability, the system can display the genealogy: which triple supports which fragment of the response, which original source contributed each triple, and when each piece of data was last updated.
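The genealogy described above can be sketched as a data structure (field names and data are invented for illustration): each triple carries the source that contributed it and its last-update date, and each response fragment points at the triple that supports it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedTriple:
    subject: str
    predicate: str
    obj: str
    source: str   # original source that contributed the triple
    updated: str  # ISO date of the last update

@dataclass(frozen=True)
class Fragment:
    text: str
    support: ProvenancedTriple  # the triple underpinning this fragment

t = ProvenancedTriple("Allegory_of_Sight", "painted_by",
                      "Jan_Brueghel_the_Elder",
                      source="museum_catalogue.rdf", updated="2024-03-01")
f = Fragment("The painting is by Jan Brueghel the Elder.", support=t)

# Full traceability: fragment -> triple -> source -> timestamp.
print(f.support.source, f.support.updated)
# → museum_catalogue.rdf 2024-03-01
```

Because provenance travels with the triple rather than being reconstructed after the fact, displaying the genealogy is a lookup, not an inference.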

Lokut's differentiating advantage

Asked a question of this kind (say, about a specific Baroque painting), a traditional LLM might answer correctly if that information happened to be in its training corpus. Or it might hallucinate a plausible but non-existent painting. There is no way to verify which.

A traditional RAG system would retrieve documents about Baroque painting, allegories of the senses, and still life. It would use the LLM to synthesise those documents into a response — with a high probability of imprecision or of mixing information from multiple paintings.

Lokut, by contrast, retrieves specific structured knowledge, reasons over verifiable relationships, and generates a response anchored in data that can be audited triple by triple.

Lokut doesn't approximate. It doesn't speculate. It doesn't make things up. It reasons over real knowledge.

1. Neurosymbolic architecture

Lokut implements an architecture that hybridises knowledge graphs semantically interpreted by ontologies with state-of-the-art language models. This is not about retrieving text passages through vector similarity. The system reasons over explicit relational structures expressed in W3C standards: OWL ontologies for conceptual modelling, RDF graphs for data representation, SPARQL queries for precise retrieval, and description logic for automated reasoning.
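The difference between vector similarity and graph-pattern retrieval can be shown with a toy triple store (plain Python standing in for a real RDF store and SPARQL engine; all identifiers are invented): a query is a triple pattern with variables, matched exactly against explicit relations rather than by fuzzy text proximity.

```python
# Toy triple store; "?x" marks a variable, as in a SPARQL basic graph pattern.
GRAPH = [
    ("Allegory_of_Sight", "rdf:type", "Painting"),
    ("Allegory_of_Sight", "painted_by", "Jan_Brueghel_the_Elder"),
    ("Jan_Brueghel_the_Elder", "rdf:type", "Painter"),
]

def match(pattern):
    """Return one variable-binding dict per triple matching the pattern.
    A pattern term either starts with '?' (variable) or must match exactly."""
    results = []
    for triple in GRAPH:
        if all(p.startswith("?") or p == t for p, t in zip(pattern, triple)):
            results.append({p: t for p, t in zip(pattern, triple)
                            if p.startswith("?")})
    return results

# "Which paintings are in the graph?" -- analogous to
# SELECT ?work WHERE { ?work rdf:type Painting }
print(match(("?work", "rdf:type", "Painting")))
# → [{'?work': 'Allegory_of_Sight'}]
```

A real deployment would use an OWL-aware store and SPARQL, as the paragraph above describes; the sketch only illustrates why retrieval over explicit structure either finds a binding or finds nothing, with no approximate middle ground.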

This neurosymbolic architecture combines the interpretive capability of language models with the logical precision of structured knowledge-based systems. The result is a system that maintains conversational fluency without sacrificing verifiability.

2. Epistemic traceability

Every statement generated by Lokut has a verifiable genealogy. We are not talking about "explainability" in the vague sense of the term as applied to neural networks. We are talking about structural traceability: every proposition in the response can be traced back to the specific triples in the knowledge graph that underpin it.

The system exposes the complete reasoning path: from the natural language query, through the translation into a formal query, the execution against the graph, to the final linguistic synthesis. This transparency is not a bolt-on feature. It is a direct consequence of the architecture.
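The exposed reasoning path might be recorded like this (a minimal sketch with invented stage names and hard-coded results; GNOSS's actual pipeline is its own): every stage, from natural-language query to final synthesis, is written into an auditable trace.

```python
def answer_with_trace(nl_query):
    """Record every stage from natural-language query to synthesis.
    All values below are hard-coded placeholders for the sketch."""
    trace = {"nl_query": nl_query}
    # 1. Translation into a formal query.
    trace["formal_query"] = ("SELECT ?painter WHERE "
                             "{ :Allegory_of_Sight :painted_by ?painter }")
    # 2. Execution against the graph.
    trace["bindings"] = [{"?painter": "Jan_Brueghel_the_Elder"}]
    # 3. Linguistic synthesis, built only from the bindings.
    trace["answer"] = "It was painted by Jan_Brueghel_the_Elder."
    return trace

trace = answer_with_trace("Who painted the Allegory of Sight?")
for stage, value in trace.items():
    print(stage, "->", value)
```

Because the trace is produced as a by-product of answering, not reconstructed afterwards, transparency is structural rather than bolted on, which is the point the paragraph above makes.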

3. Compliance with European legal standards by design

Lokut's architecture does not adapt to regulatory requirements through patches — compliance is inevitable, given the nature of the system. When every decision is anchored in structured knowledge and every step of the reasoning is observable, external auditing becomes a straightforward technical exercise, not forensic archaeology of black boxes.

The system meets the requirements of the EU AI Act for high-risk systems not because it implements compliance checklists, but because the architecture makes transparency, data governance, and human oversight emergent properties of the design.

Lokut, the conversational engine of GNOSS
Semantic AI Platform
