In his forthcoming book, Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future, author Hamilton Mann argues that the central challenge of AI is ensuring that systems exhibit integrity-led capabilities rather than merely pursuing general intelligence or superintelligence.
Mann, a digital transformation executive at Thales, a major French aerospace and defense company, and a mentor at MIT, shares big ideas from his book, which is being published by Wiley.
In the rapidly evolving world of artificial intelligence, computational power isn’t enough.
Warren Buffett once said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.”
This wisdom is equally applicable to AI. As we begin to ‘hire’ powerful intelligent machines to perform tasks traditionally done by humans, we must ensure they possess something akin to what we call integrity.
AI systems need to be designed to uphold integrity behind closed doors, so that their functioning, beyond exhibiting trustworthiness, adheres to societal needs, norms, values, and standards without infringing on, harming, devaluing, or degrading the integrity of any of them.
Artificial integrity over intelligence represents the new AI frontier and a critical path to shaping the course of human history toward a better future for all.
It is a built-in capability within AI systems that ensures they function not just efficiently but also with integrity, respecting human values from the very start.
It is defined by three interconnected models that collectively ensure the preservation and enhancement of the integrity of the ecosystem in which AI takes part.
- Society Value Model, which establishes and enforces strong guardrails and value principles external to AI systems that support the human condition, serving as the external force that creates a safe environment for humans to thrive and within which AI should operate with integrity.
- AI Model, which ensures AI operational consistency with internal and intrinsic guardrails, guidelines, and values-driven standards from an AI development standpoint, thus ensuring that algorithms uphold not only a form of intelligence but also a form of integrity over time.
- Human and Artificial Co-intelligence Model, which, combined with the external and internal models above, sets the foundation for building and sustaining capabilities based on the synergistic relationship between humans and AI, enhancing rather than undermining the human condition.
Together, these three models should constitute one integrated approach, functioning as a metamodel. It is essential to ensure that artificially intelligent systems, which thus become stakeholders in the larger living ecosystem in which their intelligence is exercised, participate in ways that preserve, support, and defend the integrity of the societal ecosystem.
Without the capability to exhibit a form of integrity, AI would become a force whose evolution increasingly outstrips the control it requires, not just through human agency but also with regard to human values.
Artificial integrity is a forward-looking ‘Digital for Good’ approach through which AI development can be sustained in society, with integrity.
Artificial integrity sustains society value models
External to the AI models themselves, the concept of artificial integrity embodies a human commitment to establish guardrails and value principles that serve as a code of life for AI development, with stances that guide how a sense of integrity can be sustained in AI's creation and deployment.
It refers to the society value model that AI models need to adhere to: a set of principles that structure how the system delivers its functioning so that it is intrinsically capable of prioritizing and safeguarding human life and well-being in every aspect of its operation.
It represents the value system in which these forms of intelligence operate and serve, upholding value principles tailored to specific contexts so that AI's outputs and outcomes resonate with and sustain those values, not just to the benefit of one given user, group, or community but in the greater interest of the integrity of a given socio-economic, political, and environmental ecosystem.
This approach highlights a paradigm shift: AI systems should not just exhibit intelligence for its own sake or for the hyper-narrow interests of an individual within the limited framework of a commercial purpose.
They should also be algorithmically socially responsible and societally accountable for the impact of their artificial intelligence on society, considering the value system in which they are an artificial stakeholder.
The characterization of a society's value model involves multiple facets that collectively define the ethical, cultural, and operational principles guiding behavior and decision-making within that society. These include the prevailing moral beliefs and principles that dictate what is considered right or wrong, and the cultural values that forge shared assumptions, traditions, and practices.
Legal frameworks are another essential dimension, as are the ways a society organizes its economic activities, distributes resources, and generates wealth, among other facets.
The multidimensional aspects that compose any value model demand a diversity of approaches and perspectives to analyze it, and thus multidisciplinary input. That means engaging with various stakeholders, such as ethicists, community leaders, and potential users, to understand their perspectives and concerns about AI deployment and to inform strategic decision-making with respect to a given society value model.
Developing a mathematical model of the value model is crucial for organizations, regions, and countries, and even at the global level, to quantitatively understand and predict the impacts of policies, technologies, and changes within societal frameworks. Such a model could enable stakeholders to assess and optimize the interaction between technological advancements, particularly in artificial intelligence, and societal values, ensuring that these technologies are deployed in a way that aligns with ethical norms and enhances public welfare.
A mathematical model of the society value model could act as a strategic asset across various levels of decision-making, providing a structured, evidence-based approach to navigating the complex interplay between societal values and AI.
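To make this idea concrete, here is a minimal sketch of how a society value model might be quantified, assuming a simple weighted-score scheme. The dimensions, weights, and scores below are hypothetical illustrations, not a method from the book.

```python
from dataclasses import dataclass

@dataclass
class ValueDimension:
    name: str      # a facet of the value model, e.g., "legal frameworks"
    weight: float  # relative importance within this society; weights sum to 1.0
    score: float   # assessed alignment of an AI deployment, in [0, 1]

def alignment_score(dimensions: list[ValueDimension]) -> float:
    """Weighted aggregate alignment of an AI deployment with the value model."""
    return sum(d.weight * d.score for d in dimensions)

# Hypothetical assessment of a proposed AI deployment against four facets.
society_value_model = [
    ValueDimension("moral beliefs", 0.30, 0.80),
    ValueDimension("cultural values", 0.20, 0.70),
    ValueDimension("legal frameworks", 0.30, 0.95),
    ValueDimension("economic fairness", 0.20, 0.60),
]

print(f"alignment = {alignment_score(society_value_model):.2f}")
```

A real model would need far richer structure (interacting dimensions, uncertainty, contested weights), but even a crude aggregate lets stakeholders compare deployment options against an explicit, shared value model.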
Artificial integrity is a deliberate act of design
Core to the AI models themselves, the concept of artificial integrity implies that AI is developed and operates in a manner that is not only aligned with the guardrails and value principles serving as its code of life but that does so consistently and continuously across varied situations, without deviating from its programmed values-driven guidelines.
This is a fundamental part of the way the system is conceived, trained, and maintained. It is a deliberate act of design. It implies a level of algorithmic self-regulation and intrinsic adherence to values-driven codes of conduct, similar to how a person with integrity acts morally regardless of external pressures or temptations. The system maintains a vigilant stance toward risk and harm, ready to override programmed objectives if they conflict with the primacy of human safety. It involves a proactive and preemptive approach, in which the AI model is not only reactive to ethical dilemmas as they arise but is also equipped with the foresight to prevent them.
As thought-provoking as it may sound, this means embedding artifacts into AI that govern its decisions and processes, mimicking a form of consciously made action while ensuring alignment with human values. It is akin to a 'value fail-safe' that operates under the overarching imperative that no action or decision by the AI system should compromise human health, security, or rights.
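A minimal sketch of the 'value fail-safe' idea follows, assuming a hypothetical upstream risk assessment that flags candidate actions. The Action fields and the selection logic are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    objective_score: float  # how well the action serves the programmed task
    risks_human_harm: bool  # flag set by a (hypothetical) upstream risk assessment

def value_fail_safe(candidates: list[Action]) -> Optional[Action]:
    """Select the best-scoring action that passes the integrity guard, if any."""
    safe = [a for a in candidates if not a.risks_human_harm]
    # The guard overrides the programmed objective: a flagged action is never
    # selected, even if it carries the highest objective score.
    return max(safe, key=lambda a: a.objective_score, default=None)
```

The point of the design is the ordering: the safety check is not one more term in the objective but a hard constraint applied before any optimization takes place.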
An essential element in building such an AI model lies in how its data is processed.
Beyond labeling, which generally refers to identifying and assigning a predefined category to a piece of data, it is necessary to annotate datasets systematically. While labeling gives data a form of identification so that the system can recognize it, annotation adds more detailed and extensive information than a simple label. Data annotation gives the data a form of abstract meaning so that the system can, to some extent, contextualize the information.
Including annotations that characterize an integrity code (reflecting values, judgments of integrity regarding those values, the principles underlying them, or outcomes deemed inappropriate relative to a given value model) is a promising approach to training AI that is not only intelligent but also capable of producing results guided by integrity with respect to that value model.
For example, in a dataset used to train an AI customer service chatbot, annotations could include integrity evaluations with respect to the referenced value model, ensuring that the chatbot's responses are grounded in politeness, respect, and fairness.
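Here is a sketch of what such a training example might look like, contrasting a plain label with a richer integrity annotation. The schema, the 'customer-respect-v1' value model reference, and the scores are hypothetical illustrations, not a standard format.

```python
# One training example for a customer service chatbot.
example = {
    "utterance": "I want a refund NOW or I'll post about this everywhere!",
    # Labeling: a predefined category the system can recognize.
    "label": "refund_request",
    # Annotation: richer, contextual information, including an integrity code
    # grading a candidate response against the referenced value model.
    "annotation": {
        "customer_sentiment": "angry",
        "candidate_response": "Refunds are your problem, not ours.",
        "integrity_code": {
            "value_model": "customer-respect-v1",  # hypothetical reference
            "politeness": 0.1,   # scores in [0, 1] against each value
            "respect": 0.2,
            "fairness": 0.4,
            "appropriate": False,  # outcome judged inappropriate overall
        },
    },
}
```

Trained on many such examples, the system learns not only what an utterance is but also which responses the referenced value model deems acceptable.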
Human-AI Co-intelligence revisits what we think we knew about Collaborative Intelligence
This conscientious perspective on artificial integrity is especially pertinent when considering the impact of AI on society, where the balance between 'human intelligence value added' and 'AI value added' is one of the most delicate and consequential to strike.
In navigating this complexity, we must first delineate the current landscape, where human wit not only intersects with the prowess of AI but also serves as a compass guiding us toward future terrains where the symbiosis of human and machine will redefine worth, work, and wisdom.
For artificial integrity, this balance could be achieved by considering AI's inclusion in society through four different modes; a toy sketch of this quadrant view follows the list below.
- The Marginal Mode: When it comes to value creation, there exists a quadrant where the contributions of both human and artificial intelligence are notably restrained, reflecting scenarios of limited impact. This segment captures tasks characterized by their minimal marginal benefits when subjected to either human or artificial intelligence inputs. Such tasks are often too inconsequential to necessitate significant intellectual investment, yet simultaneously too intricate for the present capabilities of AI, rendering them economically unjustifiable for human endeavor.
An example is document scanning for archival purposes – a task that, while manageable by humans, succumbs to monotony and error, and where AI, despite capabilities like optical character recognition (OCR), offers only marginal improvement due to challenges with non-standard inputs.
- The AI-First Mode: In this paradigm, AI is the linchpin, spearheading core operational functionalities. It spotlights scenarios where AI's unparalleled strengths – its ability to rapidly process extensive datasets and deliver scalable solutions – stand out. This AI-centric approach is particularly relevant in contexts where the speed and precision of AI significantly surpass human capabilities. AI emerges as the driving force in operational efficiency, revolutionizing processes that gain from its superior analytical and autonomous capabilities.
An example is observed in the financial industry, particularly in high-frequency trading. Here, AI-driven trading systems leverage complex algorithms and massive datasets to identify patterns and execute trades with a velocity and scale unachievable by human traders, showcasing the transformative potential of AI for redefining operational capabilities.
- The Human-First Mode: In this segment, the spotlight shines on the indispensable qualities of human cognition, including intuitive expertise and contextual, situational, emotional, and moral discernment. AI is deployed in a supportive or complementary capacity. This approach champions human capabilities and decision-making, particularly in realms necessitating emotional intelligence, nuanced problem-solving, and moral judgment. It emphasizes the irreplaceable depth of human insight, creativity, and interpersonal abilities in contexts where the intricacies of human thought and emotional depth are critical.
For instance, in psychiatry, the nuanced interpretation of non-verbal communication, the provision of emotional support, and the application of seasoned judgment underscore the limitations of AI in replicating the complex empathetic and moral considerations inherent to human interaction. This perspective is bolstered by empirical evidence, reinforcing the critical importance of the human element across various landscapes.
- The Fusion Mode: This segment illustrates a synergistic integration where human intelligence and AI coalesce to leverage their distinct strengths: human creativity and integrity traits paired with AI's analytical acumen and pattern-recognition capabilities. Research across various domains validates this synergy. In health care, for example, AI can augment physicians' capabilities with precise diagnostic suggestions and enhance surgical precision; in engineering and design, it can support creative problem-solving.
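As referenced above, here is a toy sketch of the quadrant view, assuming tasks can be scored on human and AI value-added in [0, 1] with a 0.5 threshold. Both the scoring and the threshold are illustrative assumptions, not part of the book's framework.

```python
def inclusion_mode(human_value: float, ai_value: float,
                   threshold: float = 0.5) -> str:
    """Map a task's human and AI value-added onto one of the four modes."""
    human_high = human_value >= threshold
    ai_high = ai_value >= threshold
    if human_high and ai_high:
        return "Fusion"       # both intelligences instrumental
    if ai_high:
        return "AI-First"     # AI leads, e.g., high-frequency trading
    if human_high:
        return "Human-First"  # humans lead, e.g., psychiatric care
    return "Marginal"         # limited impact from either, e.g., archival scanning

print(inclusion_mode(human_value=0.9, ai_value=0.8))  # prints "Fusion"
```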
These four modes, inherently linked and transitional from one to another, constitute the forms of participation that intelligences, whether human or artificial, have and can take in the system that is our society.
Navigating the transitions
Altogether, the four modes—Marginal, AI-First, Human-First, and Fusion—underscore a future of work in which artificial intelligence augments human expertise, fostering a collaborative paradigm where the complex, creative, and empathetic capacities of humans are complemented by the efficient, consistent, and high-volume processing capabilities of AI.
As we migrate from one quadrant to another, we should aim to bolster, not erode, the distinctive strengths brought forth by humans and AI alike. While traditional AI ethics frameworks might not fully address the need for dynamic and adaptable governance frameworks that can keep pace with the transitions in balancing human intelligence and AI evolution, artificial integrity suggests a more flexible approach to govern such journeys.
This approach is tailored to the wide diversity of developments and challenges brought by the symbiotic trade-offs between humans and AI. It offers a more agile and responsive governance structure that can quickly adapt to new technological advancements and societal needs, ensuring that AI's evolution is both ethically grounded and harmoniously integrated with human values and capabilities.
When a job evolves from a quadrant of minimal human and AI value to one where both are instrumental, the shift should be marked by thorough contemplation of its repercussions, a quest for equilibrium, and adherence to universal human values. For instance, a move away from a quadrant characterized by AI dominance with minimal human contribution should not spell a retreat from technology but a recalibration of the symbiosis between humans and AI.
Here, artificial integrity calls for an evaluation of AI's role that goes beyond operational efficiency and considers its capacity to complement — rather than replace — the complex expertise that embodies professional distinction. Conversely, when we consider a transition toward less engagement from both humans and AI, artificial integrity challenges us to weigh the strategic implications carefully. It urges us to contemplate the importance of human oversight in mitigating ethical blind spots that AI alone may overlook, and it advocates for ensuring that the shift signifies not a regression but a strategic realignment toward greater value and ethical integrity.
AI systems capable of coping with such transitions are not just Artificial Intelligence systems; they are systems equipped with Artificial Integrity.
The difference between artificially intelligent-led and integrity-led machines is simple: The former are designed because we could; the latter because we should. This distinction underscores the growing need for Artificial Integrity.
Artificial Integrity is the new AI.