The first time I saw war up close, I was in Marawi, on an assignment to help rehabilitate a city scarred by violence. I remember the day with unnerving clarity: As our helicopter touched down, a bomb detonated with a thunderous roar, shaking the ground beneath us and sending a plume of thick smoke into the sky. My heart pounded as the debris settled, but we were forced to press on, to find ways to rebuild amid the chaos. That moment taught me how fragile life can be — and how easily war can turn everything to dust.
Before this experience, war was a distant concept to me, something I had read about in textbooks or glimpsed in evening news reports. Yet standing in the rubble, enveloped by the acrid stench of gunpowder, I could feel the enormity of conflict resonate in my bones. I realized that words on a page could never capture the raw terror of warfare. That day marked the first time I truly understood what it meant to have my own life threatened in service of a mission. Still, I kept telling myself it was a temporary reality — one I hoped never to relive.
Later, I shifted gears and left my government post to pursue studies in artificial intelligence — an abrupt change, perhaps, but one propelled by my conviction that the world was on the cusp of a different kind of arms race. The fight for AI supremacy, I have come to believe, will eclipse even our debates over nuclear arsenals. What I have learned about machine learning, data analytics, and algorithmic governance has convinced me that the potential of AI, both constructive and destructive, surpasses what most people realize. And that realization, in many ways, terrifies me more than any bomb blast ever could.
Unlike the tangible devastation of traditional warfare, AI warfare could unfold in insidious, invisible ways. Cyberattacks can cripple financial systems, disrupt critical infrastructure, and sow chaos across entire nations without a single shot being fired. Autonomous weaponry could, in a worst-case scenario, make decisions to kill without human oversight. Worse still, the race for AI dominance might lead to hasty development and deployment, as nations and corporations scramble to outpace each other. It is this shadowy, unpredictable battleground, where lines are blurred and ethical boundaries often ignored, that keeps me awake at night.
But what truly heightens my fear is the prospect of AI being monopolized by a select few entities or nations. A lopsided distribution of AI prowess could exacerbate existing inequalities, turning technology into a tool for oppression rather than liberation. Imagine an era where surveillance, data manipulation, and autonomous control are concentrated in the hands of those who prioritize profit or power over human dignity. It’s not just a question of who owns the tech, but how it is wielded — and whether the global community can establish guardrails before these capabilities spiral beyond our control.
All of this underscores the urgency of establishing AI sovereignty — a principle that encourages each nation to cultivate its own AI capabilities responsibly, with accountability to its citizens. Just as we have treaties and conventions for nuclear weapons, we must develop frameworks that govern AI's deployment, ensuring transparency, fairness, and ethical use. The new global order will be shaped by which countries invest not only in research but also in robust regulation. In the end, my greatest fear is not that AI will obliterate humanity in a cataclysmic event, but that we will unwittingly surrender our future to forces we neither understand nor control.
In a world on the brink, vigilance, cooperation, and ethical innovation must prevail.