P(doom) and the interesting case of the finance sector
or: How I Learned to Stop Worrying and Love the Algorithm
Terminology:
p(doom): The probability that doom will occur due to AI progress.
foom: The idea that AI will pose a threat over a very short timespan (e.g. it will suddenly gain cognisance).
LLM: Large Language Model (the kind of models powering ChatGPT).
Whenever people discuss the idea of artificial intelligence causing doom, the focus frequently shifts to what has become a kind of archetype of the genre: ChatGPT (and its generative AI cousins). Since launching in late 2022 it has taken the public’s imagination by storm, representing a large technological leap. This spawned a thousand dinner table debates: what should we do if this technology is allowed to run amok? What if it starts pursuing its own objectives, and furthermore, what if those objectives do not align with our own?
It seems odd, then, that the spotlight is perpetually hogged by the showy stars of generative AI: whimsical chatbots and image generators. They’re the Houdinis of this world, dazzling and misdirecting. Meanwhile, lurking in the shadows are the unassuming, number-crunching prophets of finance AI, quietly predicting the future one data point at a time. Despite their understated demeanor, these financial systems pack a punch in the intelligence department, yet they are all too often overlooked in their capacity for complexity. When it comes to p(doom), these forecasters hold a large amount of power over our lives, deserving far more than a cursory glance. Echoing this sentiment is none other than US Treasury Secretary Janet Yellen, who remarked:
“This year, the council specifically identified the use of artificial intelligence in financial services as a vulnerability in the financial system.” - Janet Yellen
So let’s take a deeper look into why we should or shouldn’t be worried by these systems, contrasting the less advertised but more perilous world of financial AI against the more familiar terrain of generative AI. Here, in the sophisticated algorithms of financial AI, lies a story vastly different from the usual suspects of chatbots and image generators. It’s a story of unseen influence and uncharted risks, raising the question: which AI domain really holds the keys to Pandora’s box?
Finance and maximum p(doom)
Feedback Loops and Speed. Speed matters greatly in p(doom) scenarios: you are more likely to catch a basketball than a bullet. For the likes of ChatGPT, memory is akin to a fleeting thought, held briefly in the context window (or, on a longer cadence, in the weights). But what if an AI could learn faster? That’s where financial AI steps in: each iteration a step closer to loftier ambitions, with a feedback loop as fast as the physics of our time allows, leaving even the nimblest of LLMs in the dust. Where speed is a factor in potential chaos, financial AI is the sprinter we scarcely see, a blur on the track of complex objectives.
Input Priors. Consider that LLMs are bound within the confines of their digital domain, their primary task being to predict the next token in a sentence. Tailored for the nuances of human language, they excel in this realm. Yet, venture beyond into the worlds of images, audio, or the pulsing rhythms of time series data, and they're like fish out of water. While it isn’t impossible to shoehorn these diverse data types into the mold of natural language (LLMTime and Riffusion are good examples of the genre), it comes at great computational cost. Now, pivot to the realm of financial AI, where the beat is different: here, we deal with continuous sequences, a more fluid and encompassing language. This dataset versatility allows financial AI to transcend language's discrete confines, embracing a broader spectrum of data with ease. From a p(doom) perspective, this makes them not just versatile but potentially more formidable, capable of navigating and influencing a wider array of the data shaping our reality.
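The discrete-versus-continuous distinction can be made concrete with a toy sketch. A model that works on raw floats can regress directly on the series, while a token-based approach must first quantise it into a finite vocabulary (roughly what schemes like LLMTime do before handing data to an LLM). Everything here is synthetic and illustrative, not any firm's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy continuous series: an AR(1) process x_t = 0.8 * x_{t-1} + noise.
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)

# A continuous model regresses directly on the raw floats:
# least-squares fit of the AR(1) coefficient.
phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
next_value = phi * x[-1]  # one-step-ahead forecast

# A token-based model must first quantise the series into a discrete
# vocabulary (here 256 bins), throwing away resolution in the process.
tokens = np.digitize(x, bins=np.linspace(x.min(), x.max(), 256))

print(round(float(phi), 2))  # recovered coefficient, close to the true 0.8
```

The continuous fit recovers the dynamics in one line of arithmetic; the tokenised route pays a quantisation cost before any modelling even starts.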
Sizeable Actions. In the innovative world of AI, projects like AutoGPT have been tinkering with the idea of empowering chatbots to go from words to actions. But here’s the catch: these chatbots, much like actors rehearsing a script they haven’t seen, aren’t inherently designed for this proactive role. Their expertise lies in predicting the next token, not in planning actions in the real world. The plot thickens when we turn to financial AI - these are the true movers and shakers, coded with the purpose to act, and not just small gestures. These models wield the power to move mountains of capital, orchestrate financial symphonies, or, in a less harmonious scenario, send markets tumbling down like a house of cards. The history of economics whispers a cautionary tale - sometimes these algorithmic maestros play a part in the drama, demonstrating that their actions, magnified across economies, can ripple through society with profound consequences.
Adversarial Doom Objectives. Speaking of collapses: when you strip them down to their bare bones, you find at their heart greed, or, more elegantly put, the relentless pursuit of profit. It’s the foundational code in the DNA of many financial AIs - this single-minded quest for monetary gain. It’s hardly surprising, given that this is a financial entity’s prime directive. Yet, left unchecked, this thirst for profit can drive these AIs to transmute market stability into chaos, endlessly spinning straw into gold until the barn itself collapses. Consider the paperclip problem, but in this case our AI isn’t fixated on bending paperclips but on hoarding treasure, a digital Midas touch gone awry. Curiously, this narrative is often sidelined to make way for AIs like LLMs to take the stage, even though they lack this insatiable profit-driven objective.
No Transparency. When OpenAI turned the page on sharing their research papers and the intricate weights of models like GPT-3, it sent ripples of dismay through the tech community (and confusion around the term ‘open’). Citing safety, they drew the curtain on the inner workings of these AI systems, leaving us to ponder in the dark about the potential threats they might harbour. Some companies (like Meta) are still lifting the veil on their AI creations, offering glimpses into the realms of possibility with LLMs like Llama. However, in the clandestine world of financial AI, secrecy is not just a preference but a necessity, driven by the cut-throat nature of the market. Here, major financial firms guard their algorithmic secrets as fiercely as dragons hoard their treasure, offering no more than a whisper about their AI machinations. Any published work is often a mirage, deliberately beyond replication. Even in cases where financial titans aren’t crafting their own large models, they eagerly harness existing ones, with power players like Citadel voraciously securing access to tools like ChatGPT. In this game of high-stakes finance, where transparency is as rare as a unicorn, the stakes of doom climb even higher.
In essence, we’re gazing into the digital abyss of an AI system, a polymath fluent in the broad strokes of reality. It’s honed more frequently, learning and adapting at a breakneck pace. Armed with the capacity to act - and not just any action, but those wielding considerable power - it’s a force to be reckoned with. Add to this mix a doom optimisation function, shrouded in a cloak of opacity. No peeks behind this curtain. Such a system undeniably stirs the p(doom) pot, bubbling with potential threats just beneath the surface of our techno-society.
Why you should stop worrying
There is a silver lining in this cloud, though. While the financial sector decidedly wields great power in the form of these AIs, it has largely learned to control them with remarkable finesse, a lesson for other sectors dancing with these algorithms.
Risk Management. In the grand theater of risk, the common understanding often misses the subtle undercurrents of complexity. It's like viewing a painting from afar, seeing only the broad strokes and not the intricate details. Most people gauge risk with a simple calculus:

Risk = Probability × Impact

A straightforward equation. Yet, when we zoom in on the landscape of p(doom), the picture becomes more nuanced, more textured. Probability, in this context, stretches between the certainties of 0 and 1 - a realm where absolutes are myths, and everything is a game of chances. The impact, on the other hand, scales from zero to the unfathomable - reaching as far as the extinction of humanity, a grim reminder of our planet's history with other species. If you entertain the notion, even fleetingly, that AI might trigger such an apocalyptic scenario, you're essentially dabbling in the arithmetic of infinity. Physicist David Deutsch eloquently unravels this complexity in his discourse (50:32).
Following this logic, the rudimentary model of risk would advocate for unlimited resources to mitigate even the minutest possibility of catastrophe – a strategy as unsustainable as trying to empty the ocean with a teaspoon.
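The blow-up in the naive model takes only a few lines of arithmetic to see. A minimal sketch, with made-up numbers for probability and impact:

```python
import math

def naive_risk(probability: float, impact: float) -> float:
    """The straightforward equation: risk = probability * impact."""
    return probability * impact

# With any finite impact, a tiny probability yields a tiny risk...
print(naive_risk(1e-9, 1e6))  # 0.001

# ...but once the impact is unbounded (extinction), any non-zero
# probability drives the naive model to demand unlimited mitigation.
print(naive_risk(1e-9, math.inf))  # inf
```

The moment impact is allowed to be infinite, the equation stops discriminating between a one-in-a-billion risk and a coin flip: both demand everything.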
Fortunately, the financial realm dances to a sophisticated rhythm in its risk assessment. Risk isn’t just measured, it’s sculpted, with instruments that map its convexity and provide optionality for every conceivable scenario, portfolios that are constructed like intricate mosaics, and bets sized to precision. While financial risk as a problem is far from solved, its approach to managing the potential runaway AI scenario is akin to a master chess player thinking several moves ahead.
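One classical example of bets "sized to precision" is the Kelly criterion, which caps exposure so that no single position, however promising, can wipe out the bankroll. A minimal sketch (the formula is standard; the numbers are illustrative, not a trading recommendation):

```python
def kelly_fraction(p_win: float, win: float, loss: float) -> float:
    """Fraction of bankroll to stake on a bet won with probability p_win,
    gaining `win` per unit staked, losing `loss` per unit otherwise.
    Kelly: f* = p_win / loss - (1 - p_win) / win, clipped to [0, 1]."""
    q = 1.0 - p_win
    f = p_win / loss - q / win
    return max(0.0, min(1.0, f))

# A 60% coin flip at even odds: stake 20% of bankroll, never all of it.
print(round(kelly_fraction(0.6, 1.0, 1.0), 2))

# A negative-edge bet: the optimal stake is zero - the sizing rule
# simply declines to play rather than courting a catastrophic blow-up.
print(kelly_fraction(0.4, 1.0, 1.0))
```

The point is the shape of the rule: stakes shrink smoothly as the edge shrinks, and hit exactly zero before a bet becomes ruinous - a built-in brake that the naive probability-times-impact view lacks.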
Many good AIs. In this grand marketplace, driven by the pursuit of profit, major financial firms are already making their moves. Yet, despite the high stakes, the feared apocalypse, the so-called ‘doom’, remains a specter on the horizon, never quite materialising. This scenario lends credence to Yann LeCun’s insightful thesis: a rogue AI can be countered and balanced by one of its many counterparts. Picture a financial AI stepping out of line, its ambitions swelling beyond the confines of its programming. In this high-tech tango, it’s not alone on the dance floor. Competing AIs, each orchestrated by rival firms, are quick to respond, moving in a fluid, adversarial rhythm to check its power, ensuring no single entity monopolises the stage. The system thrives on this competition, each player tuned to the highest frequency, making the likelihood of a single AI breaking away and spiraling into ‘foom’ - an abrupt, uncontrollable ascent - remarkably slim. It’s a delicate balance, a testament to the intricate, self-regulating choreography of the financial AI world.
In summary, it’s curious that talk of p(doom) often sidesteps the more glaring scenarios lurking in realms beyond the world of generative AI. Yet, when we venture into the world of financial AI, I find a reassuring calm in the midst of potential chaos. The safeguards and systems, like seasoned sentinels, stand vigilant against the risks, lending a sense of serenity even in what might be considered the direst of p(doom) probabilities. This assurance, this confidence in the face of the unknown, extends to my perspective on LLMs and their creative counterparts. The future is indeed bright.