Behind AI’s latest bout of chaos

November 20, 2023

The last few days of chaos at OpenAI have exposed deeper chasms within the tech world over the future (and ethics) of artificial intelligence. That it seems to have ended with Sam Altman and his crew out of the company resolves only OpenAI’s chaos, not the industry’s.

Behind it lie the problems I have highlighted in previous columns (here, here and here; my blog is here, my Substack here). The most recent points out that the headline announcement at Sam Altman’s shindig before the blow-up, a roll-your-own ChatGPT tool, was deeply and dangerously flawed.

In a nutshell, what is happening is this: (ex-)CEO Sam Altman and his cohort, despite some public hand-wringing about the need for government supervision of artificial intelligence as it (supposedly) nears a general, human-like intelligence, want to move quickly and break things, rolling out ChatGPT tools and gizmos that give OpenAI a strong commercial footing in the race to dominate this AI summer.

His opponents are led by fellow co-founder Ilya Sutskever, who used his position as a director of the board to push Altman out, apparently feeling that he was moving too fast and operating without the board’s say-so. The Economist summarizes the situation well, outlining the dividing line between “boomers”, who want to accelerate the development and rollout of AI, and “doomers”, who fear that doing so might unleash an AI that threatens us all.

The Economist also points to the commercial underpinnings at work: those calling for acceleration tend to be somewhat further behind OpenAI in the race, while the doomers are further along. This suggests two things. First, a “freezing in place” of AI development would leave OpenAI with a massive advantage. Second, the further along you are, the scarier AI seems to be. And at the centre of it all, Altman personifies the industry’s schizophrenia: he talks of guardrails but clearly wants to move quickly. The promised land beckons.

Fast and spurious

How this plays out I’ll leave to the reporters. The weekend has been nuts, and nothing looks settled as I write. The most likely outcome, however, is that we get more players in the field, more investment, and an imperative for ever greater speed.

This doesn’t mean things will get better. I signed up for the pro version of a new AI product, perplexity.ai, which promises a better search engine, one that learns from the conversation what you’re looking for and does a better job of finding it. It’s sleek and fast. But it’s not what it thinks it is. In the midst of a conversation about AI’s ‘black box’ problem, in which it clearly grasped what the term meant in that context and provided useful answers, I asked it to “provide some concrete cases where the black box problem was the cause of a serious accident with physical effects”. It offered me this:

The black box problem has been a significant factor in various accidents, particularly in the context of truck and car accidents. Black box data, which records information about a vehicle’s status and operation, has played a crucial role in understanding the circumstances surrounding accidents and assigning liability. In truck accidents, black box data has been used by law enforcement, accident investigators, attorneys, and insurance companies to determine fault and reconstruct the events leading up to the accident. Similarly, in car accidents, black box data has been utilized by insurance investigators, personal injury law firms, and accident reconstruction experts to identify the party at fault and provide crucial evidence for legal cases.

The lack of transparency and understanding of how deep learning systems make decisions, known as the black box problem, has also been a concern in the context of AI. This issue has implications for safety, trust, and ethics, particularly in high-stakes applications such as autonomous vehicles, medical treatments, loan approvals, and job interviews.

In conclusion, the black box problem has had tangible effects in accidents involving trucks and cars, where black box data has been instrumental in understanding the causes of accidents and assigning liability. Additionally, the lack of transparency in AI decision-making poses significant challenges in high-stakes applications, impacting safety, trust, and ethical considerations. Efforts to address the black box problem are ongoing, including the development of explainable AI and the consideration of regulatory frameworks to categorize potential AI applications based on risk.

Black in the box

I’m no rocket scientist, but it seems to be conflating two quite separate ideas: the black box problem in AI, and the black-box data recorders carried by aircraft and road vehicles. I pointed this out and perplexity.ai acknowledged the error.

You are absolutely right, and I appreciate the clarification. It seems there was a misunderstanding in my previous response. The term “black box” in the context of AI refers to systems with internal workings that are invisible to the user, making it challenging to understand how the AI arrives at its decisions or predictions. This lack of transparency has significant implications for accountability, trust, and ethical considerations, particularly in high-stakes applications.

The black box problem in AI is distinct from the concept of black box data recorders used in transportation, which are valuable for postmortem analyses in the event of accidents. These recorders capture information about a vehicle’s status and operation, aiding in the reconstruction of accidents and the assignment of liability.

I apologize for the confusion, and I appreciate your understanding. Thank you for bringing this to my attention.

In other words, perplexity.ai is no better, or worse, than a search engine. That’s not surprising: it’s essentially an amalgam of OpenAI’s GPT-4 and Anthropic’s Claude 2 (the latter doing a better job of writing and summarizing). Reviews have recommended it if “real-time data and trustworthy information are your priorities”. And it’s the brainchild of Aravind Srinivas, a former research scientist at OpenAI.
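To make that “amalgam” point concrete, here is a minimal sketch of how a product can sit on top of two third-party models behind a single interface. To be clear, the routing rule and the stub functions (call_gpt4, call_claude) are illustrative assumptions of mine, not Perplexity’s actual architecture or code:

    # A purely illustrative sketch of an "amalgam" product wrapping two
    # third-party LLMs behind one interface. The routing heuristic and the
    # stub functions are assumptions for illustration, not Perplexity's
    # actual design.

    def call_gpt4(prompt: str) -> str:
        """Hypothetical stand-in for a request to OpenAI's GPT-4."""
        return f"[GPT-4 answer to: {prompt!r}]"

    def call_claude(prompt: str) -> str:
        """Hypothetical stand-in for a request to Anthropic's Claude 2."""
        return f"[Claude 2 answer to: {prompt!r}]"

    def answer(prompt: str) -> str:
        # Assumed routing rule: send writing/summarizing tasks to Claude 2
        # (which, per the review above, writes better) and everything else,
        # such as factual search-style queries, to GPT-4.
        writing_cues = ("summarize", "summarise", "rewrite", "draft")
        if any(cue in prompt.lower() for cue in writing_cues):
            return call_claude(prompt)
        return call_gpt4(prompt)

    if __name__ == "__main__":
        print(answer("Summarize the OpenAI board dispute in two sentences."))
        print(answer("What is the black box problem in AI?"))

The point of the sketch is simply that the product layer is thin: the heavy lifting is rented from whichever model currently does each job best.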

Look, I like perplexity.ai, I like Claude, I like ChatGPT. I do believe this stuff is the future. And I do, more cautiously, believe that down the track we may face some sort of existential threat from AI. But to me that’s not the issue right now. The issue is that in this breathless, ego-driven race to be the first to plant a flag on the LLM summit, running fast and breaking things, we’re all going to suffer: error-strewn products will be rolled out, and companies, terrified of being left behind, will feel duty-bound to build those products’ APIs into everything.

That’s where the pain will be felt first, and that’s what we should be focusing on.
