The political implications of AI

By | February 21, 2023

Releasing OpenAI’s chat bot on the world is the first salvo in an arms race, and both companies and governments are ready for it. Are we?

My experience being gaslit by OpenAI’s GPT is no longer an outlier, and the high strangeness of OpenAI’s AI has become a theme of coverage after Microsoft released its new toy on a fascinated world. There is absolutely no justification for rolling out the AI at this point except a commercial one. I suspect we’ll look back at this point with some deep buyer’s remorse.

Indeed, the more thoughtful commentators have talked about this being a key moment — but what? What, exactly, just happened? What is its significance? In a nutshell, we’ve allowed technology into our lives in a way we’ve never done before. Conservatively speaking, this is on a par with the invention of the web, the iPhone, Facebook etc. But probably, ultimately, with a much deeper impact.

“What happens if Jerry gets mad?” - Dustin Hoffman, Sphere (1998)
“What happens if Jerry gets mad?” – Dustin Hoffman, Sphere (1998)

We are witnessing the decoupling of artificial intelligence from its confines of a specific purpose. AI has thus far been used to do specific things — narrow tasks, such as facial recognition, a robot manufacturing a widget, driving a car. With these we knew what we want it to do, and so we tweaked the AI to a level we were happy (or felt safe) with.

We’re now throwing out all the talk of ‘responsible AI’ and saying, “here’s our product, see how you get on with it. We know it can do A, but it might also do B really well, and might even do C.” We’re happy with A, because that’s the goal — search, in the case of Microsoft — but B is our extra element — a chatty interface. And then there’s C. What is C? It’s the other stuff that GPT does, the secret sauce. The problem is that Microsoft (and OpenAI) don’t know what it is. Microsoft hopes it’s a layer of serendipity, effectively making A and B even better, where Bing finds you something that a normal search engine might not.

Your money is now in a pangolin

Great. Except, of course, that C is also a bug, not just a feature. It may not make A and B better. It might make them worse. Or cause a problem outside the realm of A, B and C. C is a bug that is unknowable, because what we call AI is a black box — a neural network that behave in ways largely beyond the grasp of its programmers, and which cannot be tamed without damaging the end product. When the goal was to recognise faces it’s clear when that has been achieved — at least to the point where it’s good enough to ship. But when you’re shipping an AI whose core sales value are its quirks — its creative responses, its ‘character’ — then you’re entering a strange new world. This world is where a company is, in essence, offering a product whose unpredictability is part of its appeal, its competitive advantage.

It would be a bit like selling a car, which seems to work better than other cars because it takes bends better, or accelerates better, but that it might also occasionally, and unpredictably, drive off a cliff. Or a robo-investment advisor that makes its customers consistently better returns, but can without warning give all your money to Save the Pangolins.

"Dave, I don't know how else to put this, but it just happens to be an unalterable fact that I am incapable of being wrong." Hal 9000 - Space Odyssey, 1968
“Dave, I don’t know how else to put this, but it just happens to be an unalterable fact that I am incapable of being wrong.” Hal 9000 – Space Odyssey, 1968

In fact, I would argue with OpenAI’s GPT it’s actually worse. Because of our innate compulsion to bond, we are vulnerable to anything that can communicate with us in a way that seems almost human. (I’ve talked about this before here.) My flippant examples of cars and robo-advisors above are not particularly helpful, because text-generating AI is the product, not the byproduct. By engaging it, we have already acceded to allowing it a degree of influence over us. We may have only signed up for a more glamorous search engine but we’ve actually admitted into our world something that even its creators aren’t sure about.

This is what is so troublesome. It’s not that generative AI — surprise, surprise — generates responses that are unpredictable, and stray from the dutiful subservience we’ve come to expect from Siri and Alexa. It’s that the custodians of that AI think it’s socially, morally, philosophically and commercially acceptable to turn it into a product.

Unwittingly or wittingly, Microsoft has crossed a significant bridge. It has, in the name of Mammon, made available to all — well, eventually — a human-like interface that can be abusive, manipulative, cocky, without any clear safeguards or health warnings. Frustratingly, the AI community has not, as far as I can see, raised much of a stink about it.

And, perhaps most frustrating is that we — the world, including the AI community — don’t seem to have spent any time anticipating this moment, let alone trying to predict what may lie after it, and to agree on some ground-rules and boundary markers for what is acceptable.

Quibbling on the road to sentience

I believe our obsession with distinguishing between AI and artificial general intelligence, or AGI, has made us blind to the notion that it’s quite possible to have a version of AI that appears sentient enough to be considered to have human abilities of perceiving, feeling, reasoning, understanding and learning. In short, there are several milestones between AI and AGI where AI has advanced to the point where it appears to a human as if it can do some of all of those things.

I think we’re at that point and that it’s foolish to quibble over whether this is somehow sentient AI. If a user interacts with an AI in a sufficiently human way, allowing the AI to shape, or unshape, the user’s knowledge, opinions, beliefs, relationships etc, then I think that’s at least good enough to trigger a protocol or two before we go any further. Unfortunately I don’t see any discussion of both the milestone itself, and of what those protocols might be.

“I don’t know of a hidden room, Sam” - Gerty, Moon (2009)
“I don’t know of a hidden room, Sam” – Gerty, Moon (2009)

This is a mistake, for lots of reasons. To me the most obvious would be: What happens if this technology, this capability, could be harnessed by a powerful entity? It would naive to think this kind of technology is not of interest to state actors, and to some non-state actors — in a word, to weaponise it.

But how? I suppose the most obvious way would be to simply load the AI with certain biases which could then be absorbed into the wider population — play down side-effects of vaccines, say, or gently mock those searching for evidence of UFOs. A simple search engine could do this, arguably, but a chat-based one engenders a more complex, less transactional relationship with the user, and therefore leaves the latter more susceptible. Changing minds and behaviours takes time. Indeed, the process could be more subtle: ‘nudges’ towards different behaviour, such as less jay-walking or getting your ‘flu jabs.

It could be argued that these are commercial products and so the company owning them would not endanger their reputation by allowing them to be tweaked by a government. That may be true in some cases, but Microsoft has, like many big tech companies, a close relationship with the U.S. Department of Defense, and isn’t shy about it. (Declaration of interest: Microsoft has been a client of my consulting company in the past, but not in any field related to this, and none of the information or opinion provided here is based on that work).

Last year Microsoft’s board rejected several proposals by shareholders calling for an independent assessment of the company’s work with the DOD, including a possible $10 billion contract to “assist with development of AI capabilities to operationalize warfare.” In response Microsoft said it “was committed to working with the US military as part of its 40-year long relationship with the Department of Defense.” It also said “we depend on the military to defend our country, and we want it to have access to the best technology the country has to defend it, including from Microsoft.”

Microsoft is no different to other tech firms, it has to be said. A few days after rejecting a clutch of shareholder appeals it won, with Google, Amazon and Oracle, a multiple-award contract “that allows the department to acquire commercial cloud capabilities and services directly from commercial cloud service providers.” The contract runs through 2028 and is worth up to $9 billion.

Hands up. We’re here to do a survey

How is this going to play out? I don’t think we’ll ever really know. When technologies touch a point where governments start to get seriously interested, the more ground-breaking innovations tend to disappear from view. More visible are likely to be efforts by governments who don’t shy from trhe optics of social control: India, for example, is building a bot using ChatGPT to answer citizens’ questions about welfare schemes. Microsoft is cheering them on. (This is the same government that launched a raid, sorry, ‘survey’, on the BBC’s offices after it broadcast a documentary of PM Narendra Modi.)

Long before then, though, I think we’ll start to see evidence of the human cost. Replika, the AI companion I mentioned in an earlier column, has had to drop the steamier side of its repertoire to comply with Italian regulations, leaving users ‘despondent’ — or moving to other options, such as Chai. It’s not hard to feel concern that vulnerable individuals easing loneliness by chatting with AI bots finding their access suddenly curtailed.

But my main concern here is not what I think will happen, but how little thought appears to be given to considering the ramifications of accelerating deployment and commercial exploitation. And I’d argue these actions ignore or undermine existing bromides about ‘responsible AI’.

Microsoft talks a good game:

Together with OpenAI, we’ve also been intentional in implementing safeguards to defend against harmful content. Our teams are working to address issues such as misinformation and disinformation, content blocking, data safety and preventing the promotion of harmful or discriminatory content in line with our AI principles.

No rules, no tools

Its literature on Responsible AI includes areas such as ‘sensitive uses’ and in assessing whether an AI is responsible cites mentions triggers such as ‘risk of physical or psychological injury’:

The use or misuse of the Al system could result in significant physical or psychological injury to an individual.

And Microsoft does seem to be aware of the general nature of what it’s dealing with when it says that the motivation behind drawing up guidelines was

because AI is fundamentally changing how people interact with computing systems, and practitioners were asking for guidance, saying, “[This is] the most ambiguous space I’ve ever worked in, in my years of working in design … There aren’t any real rules and we don’t have a lot of tools.”

Nevertheless, the guidelines themselves (PDF) seem to have been little considered when it comes around to combining Bing with OpenAI’s GPT. The first guideline, for example, is to “make clear what the system can do”, which seems to have been broken from the outset. (Microsoft has now limited the number of questions that can be asked on in one session reduce the likelihood of going down a rabbithole. But that’s not the same as ‘making clear’ what the system can do.

Another guideline is to

match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.

It’s hard to argue that has been scrupulously observed. As with this:

Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.

I could go on. While I don’t think Microsoft has followed its own guidelines based on the above, it’s fairly clear that this was not an error, but a deliberate policy when the product was released. Here’s the Bing preview experience guide, according to Paul DelSignore:

We have developed a safety system that is designed to mitigate failures and avoid misuse with things like content filtering, operational monitoring and abuse detection, and other safeguards. The waitlist process is also a part of our approach to responsible AI… Responsible AI is a journey, and we’ll continually improve our systems along the way.

Baked, not bolted on

In other words, Responsible AI is not a baseline to work from, but a ‘journey’ that will hopefully get better based on experience. But this seems to contradict what Microsoft’s ‘chief responsible AI officer’ Natasha Crampton said in a statement published on February 17:

We ensure that responsible A.I. considerations are addressed at the earliest stages of system design and then throughout the whole life cycle, so that the appropriate controls and mitigations are baked into the system being built, not bolted on at the end.

That doesn’t seem to have happened. Indeed, Microsoft is clearly walking back as far as possible what it has unleashed, presenting it as merely a preview, and is relying on customer feedback even as it seeks to commercialise the product by adding ads (according to a piece by Reuters). Here’s a Microsoft spokesman quoted by Fortune:

It’s important to note that last week we announced a preview of this new experience. We’re expecting that the system may make mistakes during this preview period, and user feedback is critical to help identify where things aren’t working well so we can learn and help the models get better.

To be clear, I’m not trying to single out Microsoft here. One company was bound to try to gain an early advantage by deploying something like this. OpenAI perhaps forced the issue for Microsoft by releasing ChatGPT.

But there’s no way of getting round the reality: by releasing products, Open AI, and now Microsoft, have begun an arms race. It’s a strange race, in that it’s not just a commercial one, but also a nation-state one. For one thing it’s not going to be cheap, requiring some key resources: one is a large body of data sets to work from, so English LLM is always going to have an advantage because more than 25% of users navigate and communicate in English, while Chinese account for under 20%. The other element are chips: China (and Russia, and Iran) have limited access now to chips from companies like Nvidia. This is not just a battle for the best algorithm. It’s a battle over scarce resources.

How intimately governments get involved in this may only gradually become clear, if at all. But a couple of things are already clear: some governments have decided not to wait before deploying this software, and companies — some of the largest in the world, with whom our lives are already intimately entwined — have already made clear they’re game for that.

One thought on “The political implications of AI

  1. Pingback: Search and Rescue | (Re)Structuring Journalism

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.