Auditable AI
Commitments to the future
The confusion surrounding AI and ethics: what do these concepts actually mean?
To begin with, the ambiguity surrounding what constitutes AI gives us pause. So does the question of what ethics actually is. Let us start with the former. In broad terms, the phrase "artificial intelligence" has been used to refer to a set of technologies ranging from statistical computing to artificial neural networks and knowledge graphs. Much of what we described ten years ago as "big data" or "predictive analytics" has since been rebranded as AI. Automation and AI are also frequently used interchangeably, despite the fact that there are significant categorical distinctions between the two fields. Automation seeks to mechanise human activities, sometimes using AI tools; AI, for its part, seeks to synthesise or imitate functions, problem-solving activities and even decision-making traditionally associated with forms of human intelligence and cognition.
These definitional uncertainties, at best, reflect loose family resemblances among the technologies that have been grouped together under the label "artificial intelligence". This categorical confusion is compounded by the marketing efforts of promoters across various professions who seek to exploit the general euphoria to advance their own direct interests, regardless of whether these align with real, defensible outcomes (that is, whether the technologies they promote actually work).
As a fragmented set of technologies, AI too often fails to live up to the promotional expectations of industry, academia or policymakers, while attracting ever-sharper criticism over fairness, accountability, and the explainability and auditability of algorithms.
Regrettably, much of what has been written, enacted and hoped for in the realm of AI ethical principles has become an exercise in routine compliance rather than a practical guide for addressing the complex ethical considerations faced by real users of AI systems. "Ethics as theory" provides tools for reflecting on questions that, while theoretically interesting, may be practically useless.
Indeed, the volume of AI ethical principle statements has reached such proportions that it has generated a veritable cottage industry of meta-studies on AI ethics. Some commentators have pointed to the insufficiency of lofty ethical principles alone to address the real ethical challenges of AI, and have even identified a growing legitimacy crisis surrounding their proliferation.
Perhaps the most damning critique of the state of AI ethical principles lies in how general the various frameworks are. Morality, and increasingly ethics too, comes to encompass everything and nothing at once. For what do these terms actually mean? Are they even the same thing? And, finally, is the word "ethics" the right one to use when we are speaking within the bounds of AI?
A touch of philosophy: ethics lives at the edges of morality
Behind every world view, and behind every system of morality and ethics in particular, lies a certain vision of what human nature is. In Western culture there are two great traditions of philosophical understanding: one metaphysical and one non-metaphysical, also known as anthropological or narrative. The metaphysical tradition is convinced of the existence of universal, firm and certain truths, as Descartes would have it. The anthropological tradition holds the opposite view, and it is precisely this position that Joan-Carles Mèlich develops in his work Lógica de la crueldad (2014), where he presents his vision of the construction of morality, a perspective we share.
In human life, nothing is absolute, and if anything were, we could only know it in a circumstantial way. We are beings in situation, incapable of escaping context. As finite beings, we can only know within a specific space and time. This is the starting point of our thinking (and doing) at GNOSS, from which we can reflect on morality, ethics and their relationship with AI.
If one starts from the premise that no absolute and universal principles exist, or that we can only reach them situationally, then ethics as traditionally conceived disappears. The question then arises: how are we to think about ethics? The main issue that concerns us here, the truly important matter we must bear in mind, is that ethics is not the same as morality. Morality is a discourse of categories and, in that sense, of generalisations; ethics, by contrast, must be understood as the imperative that binds individual conscience.
What does it mean to be moral? Any answer to this question is already framed within a particular moral framework, which it presupposes and requires. Morality, then, is that which pre-exists the question and classifies. It is not that morality is intrinsically bad; it is that it is cruel. Its cruelty lies not in what it commands or obliges, but in the simple fact of commanding. Morality prescribes rights and duties on the basis of classification, and whatever falls outside is deemed immoral. Every morality follows a logic of cruelty because it follows a logic of classification and ordering.
At the same time, however, and to the extent that human beings are cultural beings, morality is inescapable — because, in the end, all education transmits a moral universe. The danger arises when we find ourselves in a world where morality completely absorbs ethics. In this scenario, where morality presents itself as a closed code that dictates how to proceed in the different situations of our everyday reality, the question must be asked: where is there room for ethics? More to the point — why do we speak of ethics when what we are actually referring to is the realm of morality?
Ethics emerges at the margins of morality's cruel apparatus. "Ethics is not morality. Rather, it is what calls morality into question. Ethics arises at the limits of morality, in its shadowy cracks. Ethics is that dark zone of morality." Ethics is nothing other than care for the other. The other is the central and fundamental question for any ethical relationship, grounded in an infinite and incalculable responsibility.
In morality, the response is predetermined: morality dictates what one must do before the demand has even arisen. Ethics, by contrast, responds without knowing in advance what one ought to do. Ethics derives its meaning not from its normativity, but from compassion.
The lost direction of AI ethics: are we asking the right questions?
The similarity between the "trustworthy AI" guidance produced by China's Ministry of Industry and Information Technology and the frameworks published by leading consultancies and think tanks in liberal democracies — despite the profound cultural and moral differences between these societies — exposes a fundamental problem in the ethical approach to AI. When a society that uses AI for mass surveillance and social control can adopt essentially the same ethical stance as the institutions of liberal democracies, it becomes clear that what is actually being debated in the AI space is not ethics, but morality. Ethics, first and foremost, should be concerned with examining and questioning the very moral frameworks that underpin these positions — frameworks that are, at their core, diametrically opposed.
This does not mean, however, that all the questions raised about AI lack foundation. On the contrary, we recognise that work on the risks of algorithmic bias or on explainability genuinely aims to address concerns that must not be minimised or ignored; indeed, they are vitally important and have their place in GNOSS's technological architectures.
We do believe, however, that it is crucial to draw attention to how industry and academia may have fallen prey to a somewhat misguided view, treating these specific ethical preoccupations as if they were the most critical concerns at the heart of AI technology adoption and use. This is particularly problematic when there is not even clarity about what is being discussed when we speak of ethics in the context of AI. As we have noted, ethics is concerned with care for the other — not with a categorical system that defines what is good or right. Rather than focusing on abstract debates about AI ethics, therefore, we should be directing our attention towards concrete, contextual solutions that address the real challenges faced by people and societies.
At GNOSS, we have chosen to redirect these debates towards what we consider the fundamental questions: What distinguishes AI from other technologies in a way that requires specific treatment? Are there more fundamental concerns that must be addressed before AI can be established as a practice or discipline — let alone as a domain of formal ethical treatment? What use is an articulation of abstract ethical principles that offers no meaningful or direct practical translation?
Our approach to AI is grounded, instead, in a broader recognition that our technology does not exist in a vacuum, but is inextricably bound to the context of its application and operational uses.
Is AI delivering on its promised results?
The cases in which AI has fallen short of its goals are numerous and growing. Indeed, many of the successes that are frequently announced turn out, on closer inspection, to be drastically overstated or simply fabricated. We are still in the early stages of enthusiasm for large language model (LLM) tools such as GPT-3, which appear to offer remarkable text generation capabilities. However, a number of concerns and criticisms have emerged swiftly, suggesting that LLMs function more like stochastic parrots — capable of repeating and combining phrases in an apparently coherent way, but without any genuine understanding of meaning and context — than as a true intelligence that comprehends the world it seems to reflect upon.
AI-driven text generation initiatives that are fed vast quantities of written information can give rise to stimulating interactions, but in themselves they fundamentally lack any understanding of the semantics — the meaning and real-world relevance — of the words they string together. This disconnect between syntax and semantics becomes even more problematic in applications where life-or-death decisions depend on specific claims about what is true in the world.
We do recognise, however, that there are sensible paths forward for legitimate and defensible applications of LLMs across a range of settings. This should nonetheless serve as an important reminder that, as with other classes of AI technology, there are limits to their applicability — limits that will be largely determined by the specific contexts and environments in which they are intended to operate.
The need for a semantic approach: the only way to navigate this crossroads is for AI systems to understand the world and be contextual
Against this backdrop, the need for Semantic AI becomes clear: an AI that is not only capable of generating coherent text, but that also understands the deep meaning of the words and concepts it handles, as well as their relationship to the real world. Only by integrating semantics into AI systems will we be able to overcome the limitations of current models and develop an artificial intelligence genuinely capable of understanding and reasoning about the world, thereby avoiding the problems that stem from the disconnect between syntax and semantics.
For an AI system to be genuinely intelligent, it must have a "symbolic core" that allows it to understand the world in human terms. A system can be considered a "cognitive artefact" — an artificial mind capable of relating to and working with people within a common-sense framework — if it satisfies four fundamental conditions. One of these is the ability to unambiguously distinguish entities and their relationships: that is, to recognise the facts of the world, at least within its knowledge domain. Entity recognition and contextual understanding are capabilities that fundamentally distinguish human cognition from purely statistical approaches.
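A minimal sketch may help make this condition concrete. The tiny graph, entity identifiers and disambiguation rule below are hypothetical, invented only for illustration; they show how explicit entities and relations let a system resolve an ambiguous mention by context, rather than by statistical association alone:

```python
# Hypothetical illustration of a symbolic core: entities carry unambiguous
# identifiers even when their surface names collide, and relations between
# them allow a mention to be resolved against its context.
TRIPLES = [
    ("ent:Paris_France", "rdf:type", "City"),
    ("ent:Paris_France", "locatedIn", "ent:France"),
    ("ent:Paris_Texas", "rdf:type", "City"),
    ("ent:Paris_Texas", "locatedIn", "ent:USA"),
]

# Ambiguous surface form mapped to its candidate entities.
NAMES = {"Paris": ["ent:Paris_France", "ent:Paris_Texas"]}

def disambiguate(mention, context_entity):
    """Return the candidate entity whose relations link it to the context."""
    for candidate in NAMES.get(mention, []):
        for s, p, o in TRIPLES:
            if s == candidate and o == context_entity:
                return candidate
    return None

print(disambiguate("Paris", "ent:France"))  # -> ent:Paris_France
print(disambiguate("Paris", "ent:USA"))     # -> ent:Paris_Texas
```

The point of the sketch is that the two readings of "Paris" are distinct facts of the world within the knowledge domain, not two statistically interchangeable tokens.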
Context provides the interpretive framework that makes it possible to adequately understand the meaning of actions, words or data. Without contextual understanding, it is impossible to resolve ambiguities, grasp intentions, establish relevance, respect temporal dependencies or handle incomplete information. Social, cultural or situational context determines communicative intent. The same words can carry entirely different intentions depending on who says them, to whom, and under what circumstances. What matters and what does not depends on the specific context. The solution to the current problems of AI therefore lies in developing systems that understand the context in which they operate — and this can only be achieved through semantic approaches.
This principle has guided the development of our software architecture based on knowledge graphs and ontologies — essential components for the effective, responsible and auditable use of AI. Knowledge graphs make it possible to model domain semantics, giving AI the ability to understand the context in which it operates. This integration ensures that generated responses always contain only structured, reliable and verifiable information, maintain logical and contextual coherence, and deliver results that are traceable and reproducible by third parties.
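By way of illustration only (the facts, sources and query function below are invented for this sketch), answering from an explicit store of statements is what makes a response traceable: every value returned carries the record that justifies it, so a third party can audit the result.

```python
# Hypothetical sketch: each fact in the graph carries its provenance,
# so any generated answer can be traced back to a source record.
FACTS = [
    {"s": "drug:X", "p": "approvedIn", "o": "EU",   "source": "doc-041"},
    {"s": "drug:X", "p": "maxDose",    "o": "20mg", "source": "doc-112"},
]

def answer(subject, predicate):
    """Return (value, source) pairs instead of an unverifiable string."""
    return [(f["o"], f["source"]) for f in FACTS
            if f["s"] == subject and f["p"] == predicate]

# Every answer ships with the record that backs it.
print(answer("drug:X", "maxDose"))  # -> [('20mg', 'doc-112')]
```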
Our technology never operates in the abstract: it is always deployed in a concrete context of use. Each of these contexts entails its own set of demands, functional expectations and domain-specific obligations. This approach compels us to place AI in its rightful position: as one tool among many of varying sophistication, inexorably embedded in a world of tangible actions and consequences. AI must understand context; it must be contextual and, therefore, semantic. Only then will it be able to share common sense with humans.
Real solutions to real problems: our approach to AI within the European regulatory framework
The results generated by our AI approach are real, far-reaching and affect our lives. They are not mere academic reflections, but the product of years of work in the field with our clients — striving to understand the complexities of their application domains, addressing the legal, political and genuinely ethical questions that surround their environments, and working to implement AI system solutions that tackle those complexities in a specific and contextual way.
As a society, we stand at a crossroads: AI can immensely amplify our collective capabilities or undermine the foundations of our democratic institutions. The difference between these two scenarios will not depend on the technology itself, but on our ability to govern it effectively. In this sense, it is not enough to deal in theoretical abstractions — what is needed is to find that point of balance where technological innovation, ethical reflection and regulation advance simultaneously. This is precisely where genuine progress happens.
"Smart regulation" sets clear boundaries while leaving room for responsible experimentation, facilitating technological advancement while protecting fundamental human rights and values. In this context, the recent EU AI Act establishes the regulatory framework for the use of artificial intelligence in the European Union. This legislation classifies AI systems into categories according to their potential impact, focusing its efforts primarily on regulating high-risk systems. These systems must maintain clear, documented records of their operation and ensure that their decisions are trustworthy. The strength of the European approach lies in its emphasis on the "explainability" of AI models. For an AI system to comply with the requirements set out in the Act, it must be reliable, traceable and, ultimately, auditable — thereby safeguarding the fundamental rights of all those who use them.
Our approach at GNOSS is perfectly aligned with this vision. By grounding our AI systems in knowledge graphs and ontologies, we ensure that they are explainable, traceable and auditable. This allows us to address the real challenges faced by our clients — not only from a technological perspective, but also taking into account the ethical, legal and social implications of AI in each specific context. Only through this comprehensive, contextualised approach will we be able to harness the full potential of AI while mitigating its risks and ensuring that its development and application are aligned with our fundamental values and principles as a society.