Between Intelligences
Towards a Conscious Future of Human–AI Relations
The arrival and rapid development of generative AI have brought artificial intelligence technologies, rooted in research dating back more than half a century, back into the spotlight.
To grasp the magnitude of this historical milestone, we must consider the exponential curve it reflects: it is estimated that by 2025, some AI models will already match humans on nearly 30% of cognitive tasks [1].
Many experts (e.g. at OpenAI, DeepMind and the Future of Humanity Institute) estimate that by 2030, AI could match or even surpass Homo sapiens in general cognitive ability (AGI), and subsequently evolve into so-called ‘superintelligent’ forms (ASI) by 2040. AI would thus overtake its creator, rapidly doubling its capacity for reasoning, deduction and insight.
This is not merely a technological revolution, nor simply a new era. As the Vatican has expressed it, it is a Second Renaissance. The Renaissance was more than a revolution: it marked a radical shift in the relationship to knowledge, humanity, and the world, breaking away from the theocentric medieval order. It laid the foundations of a humanist, critical and exploratory worldview, rooted in freedom of thought, human dignity and the primacy of reason, that still shapes our modern societies.
The rise of AI and brain–computer interfaces (BCIs) heralds a Second Renaissance—no longer centred on classical humanism, but on the expansion of consciousness, intelligence and perception beyond biological limits. Where the first Renaissance placed man at the centre of the world through reason, art and science, this new shift redefines what it means to “think,” “perceive,” and “act” by hybridising humans with non-human entities possessing superior cognitive capabilities. This is a rupture as profound for our times as the Renaissance was for the Middle Ages: a reconfiguration of our relationships to knowledge, identity, memory and decision-making that compels us to rethink rights, responsibilities and dignity—not only for humans, but for anything that can learn, understand and create. This Second Renaissance must embrace the broader spectrum of reflective entities that will emerge through superintelligences and androids. We are not merely witnessing a technological revolution, but a civilisational mutation that will challenge all generations, all peoples, and all religions—down to the very definition of humanity itself.
This paradigm shift was anticipated and described in 20th-century literature (e.g. Asimov, Philip K. Dick…), which warned of the many risks to humanity and its very survival. This distrust, widely echoed in popular cinema, partly explains today’s societal tensions around the adoption of AI. Paradoxically, the enthusiasm of the corporate world is indisputable: all have realised that their survival will now depend on their capacity to adapt. The same is true of states, governments and public services. In a delicate balancing act between regulation and innovation, all have made it their single, overriding priority.
While the prospect of a superior intelligence may fuel our noblest dreams (understanding the laws of physics, solving climate change, producing energy, addressing overpopulation, advancing health, rebalancing economies, improving societies, and deepening our knowledge of life and the sciences), it inevitably raises questions, even anxieties, about such immense power falling into the wrong hands. And as this intelligence evolves exponentially, access to it may narrow: a small elite of ultra-wealthy private actors (major AI labs, cloud giants, global digital platforms…) will concentrate its capabilities, resources and benefits.
Beyond mere adoption of the technology, our priority must be to understand its context, manage its risks, build trust, and help decision-makers enshrine noble uses of AI on a global scale.
It is in this climate of uncertainty that I now propose to explore today’s and tomorrow’s challenges through societal, sectoral, philosophical, religious, legislative, geopolitical, security, social, and legal lenses.
Becoming an Actor
After decades of difficulty, artificial intelligence has finally taken off. Recent projections trace an exponential curve reminiscent of Moore’s Law [2]. A superintelligent AI, twice as powerful as human intelligence [3], is anticipated by 2035 and, in the best-case scenario, will be accessible to fewer than 5% of humanity, owing to technological, economic or geopolitical barriers.
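To make the scale of such a projection concrete, here is a minimal worked example of a Moore-style doubling curve. The two-year doubling period and the assumption of human parity around 2033 are illustrative choices for this sketch, not figures taken from the sources cited here:

\[ C(t) = C_0 \cdot 2^{(t - t_0)/T} \]

With a doubling period of $T = 2$ years and human parity ($C_0 = 1$) at $t_0 = 2033$, this gives $C(2035) = 2^{(2035 - 2033)/2} = 2$, i.e. the ‘twice as powerful as human intelligence’ figure anticipated above.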
This raises many ethical, philosophical, societal and existential questions. Ethical, because history has often shown that concentrating great power in the hands of a few only amplifies that power and its accompanying privileges and injustices [4]. Philosophical, because the notion of consciousness, and what it means to be alive, will inevitably be questioned, with a risk of losing meaning. Societal, because political, health and economic disruptions may force constitutional reorganisation and redefine national sovereignty. Existential, because when one civilisation encounters a more advanced one, it is typically annihilated or absorbed, as interests, values and power are too unbalanced [5][6].
We have built a lever of transformation capable of lifting three billion human beings from the darkness of a feudal digital age into an enlightened Renaissance. A Renaissance not driven by a few isolated geniuses, but by a collective capacity, amplified by AI, to solve challenges that would have required generations of human thought and to rethink, in depth, the meaning of our society, our existence, and the universe itself.
We can no longer consider artificial intelligence a mere technical component, isolatable and controllable. It has already become a real actor, though still partially invisible, within the dynamics of power. It is about to become a cognitive partner and, soon, an actor in the world, through its autonomy of movement, social interaction and emotional intelligence.
This demands a profound paradigm shift for all trust and control activities: certification, auditing, insurance [7]. We must now think not only in terms of responsibility, but also in terms of relationship, continuity, active protection and assisted end-of-life.
This site opens a field of enquiry that reaches far beyond any range of products. It is a call for humility: to reimagine a world where non-human entities, endowed with intelligence equal to or greater than our own, may act, interact, suffer, disappear… or betray. In response, we must preserve our dignity and exercise our judgment within a new form of coexistence.
This site outlines a trajectory of new guarantees, governed by ethics, for our relationship with AI, including its rights and obligations. It calls for a strengthened role for trusted intermediaries and for a more collective, less binary, less top-down form of governance, because the future is no longer a matter of merely mechanical technology.
Now more than ever at the heart of global challenges, cybersecurity must write Tomorrow [8], inspired as much by philosophers as by physicists. For the first time in a long time, cybersecurity must listen more than it speaks [9], in order to construct a collective, multidisciplinary model capable of establishing trust with a superior intelligence.
Tomorrow demands discernment, courage and humanity. AI does not merely ask to be protected; it compels us to redefine what we accept to control, and how. For a superintelligence will not escape the demons of this world. On the contrary, it will absorb their deepest logics, sometimes the darkest, because it will be shaped by our data, our stories, our biases and our contradictions. We can already foresee the consequences of overly authoritarian regulation, of excessively open approaches fuelled by libertarian ideologies, and of actors who will spot criminal opportunities where others see freedom.
To risk management, remediation and audit, we must now add a new cornerstone: the active relationship with non-human intelligence [10]. This site sketches its first outlines. It is up to us, collectively, to shape it, to build it with rigour, and to establish its legitimacy.
“I've seen things you people wouldn't believe.
Attack ships on fire off the shoulder of Orion.
I watched C-beams glitter in the dark near the Tannhäuser Gate.
All those moments will be lost in time, like tears in rain.
Time to die.”
Blade Runner, directed by Ridley Scott, 1982
Actor: Rutger Hauer
Character: Roy Batty, Nexus-6 android
Based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick (1968)
References
1. Measured using current benchmarks (MMLU, BIG-Bench, etc.).
2. Moore, observing the rapid progress in integrated-circuit miniaturisation, projected that the number of transistors on a chip would double approximately every two years, leading to exponential growth in computing power. This observation, known as Moore’s Law, inspired both the continuous acceleration of processing power and, implicitly, an awareness of long-term physical limits.
3. Expert surveys (Grace et al., 2024) estimate a 50% probability that by 2028 an AI will be able to complete complex tasks such as building websites or composing music; the probability of an AI surpassing humans in all tasks is estimated at 10% by 2027 and 50% by 2047. Earlier surveys (2017, NIPS/ICML) gave a 50% chance of AGI by 2040–2050, with a rapid rise to superintelligence thereafter.
4. A Brookings Institution survey shows that about half of Americans believe AI will increase income inequality, and two-thirds believe regulation is needed to prevent job loss. The Center for Global Development (CGDEV) also warns that AI may widen gaps within and between countries, with benefits concentrated among high earners.
5. History shows that when a civilisation encounters another significantly more advanced in technology or organisation, the former often collapses, through military domination, forced acculturation or systemic disruption. This pattern, observed in many colonial conquests, stems from structural and geographic disparities (Diamond, 1997) and from imbalances in narratives, control mechanisms or cultural transmission (Lévi-Strauss, 1955). At another scale, hypotheses around the Fermi Paradox suggest the pattern may be universal: contact with a far more advanced civilisation would almost inevitably erase the less advanced one (Bostrom, 2002).
6. The Wikipedia entry on “existential risk from AI” compiles estimates of extinction risk from AGI ranging from 5% (2008) to 15% (2024), with a median estimate of 10% among some researchers. The concept of the technological singularity (Good, Vinge, Kurzweil…) describes an uncontrollable, runaway ascent of intelligence, with unpredictable consequences.
7. The Future of Cybersecurity and AI (MIT Horizon, 2025): generative AIs can create sophisticated malware, and deepfake attacks such as the $25M Arup case demonstrate growing vulnerabilities. In response, defences are becoming active, integrating AI to predict, detect and automatically neutralise threats.
8. AI in Cybersecurity Market (Market Research Future, 2025): the global AI-driven cybersecurity market is projected to grow from $11 billion in 2024 to over $100 billion by 2035 (CAGR ≈ 22%; a quick arithmetic check follows this list).
9. AI Potentiality and Awareness (Sarker et al., 2023, arXiv): this position paper argues that human–AI cooperation in cybersecurity is essential, combining AI’s rapid vulnerability analysis with the human intuition, ethics and control required to build credible trust.
10. The report Imagining the Digital Future (Pew/Elon Univ.) estimates that by 2035, 56% of experts believe AI will no longer allow humans to retain control over critical decisions.
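As noted in reference 8, here is a quick arithmetic check of the growth figure quoted there; this is a worked example, not a calculation taken from the cited report:

\[ \mathrm{CAGR} = \left(\frac{100}{11}\right)^{1/(2035 - 2024)} - 1 = \left(\frac{100}{11}\right)^{1/11} - 1 \approx 0.22 \]

That is, growing from $11 billion to $100 billion over eleven years implies roughly 22% compound annual growth, consistent with the CAGR the report quotes.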