OpenAI Drama: What Happened, What is Q* and What's Next

The OpenAI chaos was a good thing

Have you caught your breath yet? The OpenAI saga was the wildest five days I’ve experienced in AI, even outpacing the original release of ChatGPT. It was Succession meets Silicon Valley meets Jersey Shore.

I won’t try to recap the saga (I already did that on my Twitter), but I do want to share a few thoughts on why the chaos happened in the first place, why it’s important, and what’s next, not only for OpenAI but for all of AI.

First, though, I want to congratulate the OpenAI team on standing strong, sticking together and coming out of a really difficult week more united than ever.

Let’s dive in.

1) Why OpenAI’s Leadership Matters: The Secret Q* Project

My private AI messaging groups are all on fire talking about a recent report that OpenAI researchers warned the board about a new AI breakthrough right before Sam Altman was fired. The project is known internally at OpenAI as Q* (or Q-star) — and it could put us a major step closer to an AI with human-level intelligence (also known as AGI — artificial general intelligence). In the simplest terms, it sounds like OpenAI has developed an AI that can more efficiently train itself.

Nobody outside of OpenAI really knows much about Q* yet, but it’s not a stretch to assume it’s related to Q-learning, a type of machine learning algorithm. At its core, Q-learning is about teaching a computer to make smart decisions through trial and error.
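To make that concrete, here is a minimal tabular Q-learning sketch on a toy “corridor” task. The environment, reward, and hyperparameters below are my own illustrative assumptions for teaching purposes; nothing here is known about Q* itself.

```python
import random

# Toy task (illustrative assumption): a 5-state corridor. The agent starts
# at state 0 and earns a reward of 1.0 only by reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]                  # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration

def step(state, action):
    """Move along the corridor; reward only at the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    # Q-table: the agent's running estimate of how good each action is
    # in each state. Starts at zero everywhere.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally try a random one (the "trial and error").
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge Q(s, a) toward the reward received
            # plus the discounted value of the best next action.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned policy: the best action at each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # after training, every state should prefer +1 (move right)
```

After a few hundred episodes of trial and error, the table converges and the learned policy moves right at every state, because that is the shortest route to the reward — exactly the “find the most optimal path” behavior described above, just on a trivially small problem.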

I will try to explain what Q* is and why it matters. My explanation will be a bit oversimplified, but I promise to write longer newsletters on these topics in the future.

Credit: Midjourney

  • So what happened? Reportedly, OpenAI made a major breakthrough in its AI system through a project known as Q*.

    This breakthrough is reportedly one of the reasons the board moved to oust Sam Altman: they were potentially concerned that OpenAI was moving too fast.

  • Why does it matter if OpenAI is moving fast? Some people believe that humans could lose control of an AI superintelligence (AGI, or artificial general intelligence) if AI development moves too fast and/or without safety measures in place. Movies like Terminator are Hollywood examples of an out-of-control AI, though there’s no evidence that we are anywhere close to AGI or that AI will eventually go rogue.

  • Why move fast on AI? Other people believe that, by moving fast, we can save more lives because AI can come up with solutions to problems such as cancer, aging, and climate change. The faster we get to super-intelligence, the better our world becomes. There are also other world players who might cause more harm if they developed a superintelligent AI first.

  • What is Q*? Nobody outside of OpenAI can definitively say what Q* is, but it is rumored 1) to be able to solve math problems it hasn’t seen before (a huge achievement for AI and large language models if true) and 2) to be related to Q-learning.

  • What is Q-Learning? In fifth-grade terms? It’s a machine learning approach in which an AI learns and improves over time by searching for the best path to an answer/solution/reward (e.g. solving a complex equation). It does this through trial and error: the AI is rewarded when it finds the right answer, and over time it learns the most efficient route to that answer or reward.

  • Why does this matter? If AGI is possible, OpenAI is the company most likely to develop it because it is the farthest along. Whoever controls OpenAI could control AGI. So, the governance of OpenAI matters.
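For the mathematically inclined, the trial-and-error loop described in the bullets above is captured by the standard Q-learning update rule (this is textbook Q-learning, not anything specific to Q*):

Q(s, a) ← Q(s, a) + α [ r + γ · max over a′ of Q(s′, a′) - Q(s, a) ]

Here s is the current state, a is the action taken, r is the reward received, s′ is the state the AI lands in next, α is the learning rate (how big each correction is), and γ is a discount factor that makes sooner rewards worth more than later ones. Each time the AI acts, it nudges its estimate Q(s, a) toward the reward it just got plus the value of the best move available next — which is exactly how “finding the shortest route to the reward” emerges from simple trial and error.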

If OpenAI has made a major breakthrough related to Q-learning, it will likely mean that they are a step closer to building the fabled AGI (artificial general intelligence) — an AI that is smarter than a human and could change the course of history. That’s the sort of thing that could theoretically scare the OpenAI board enough to fire their high-flying CEO and slow the pace of AI development.

2) OpenAI Needs to Be Reborn with Better Governance (Here’s What It Should Do)

I think it’s clear that we as a society should care about how OpenAI is governed, especially given the chaos of the last week. A lot of the chaos that consumed OpenAI comes from OpenAI’s weird structure: it’s a non-profit that tries to control a for-profit through a convoluted series of holding companies.

Essentially, just four people on a non-profit board could decide the fate of the most important AI company (and perhaps the most important tech company) on the planet. That board included no one from Microsoft (OpenAI’s biggest partner), none of OpenAI’s investors, and no corporate leaders with significant board experience.

All of the chaos stems from the fact that OpenAI started out as a nonprofit but needed to spin out a for-profit subsidiary to raise billions of dollars to train its large language models — GPT-3 and GPT-4 in particular.

I’ve actually been involved in the spin-off of a for-profit AI company (Sama AI, a leader in AI training data) out of a non-profit (the Leila Janah Foundation, which runs the Give Work Challenge to fund new businesses and thus more jobs in Kenya and Uganda). We had to weigh the same issues OpenAI considered when it made its for-profit arm. We didn’t go OpenAI’s route, though.

I talk about that experience — and give specific recommendations for how OpenAI should structure its new board — in my latest column in The Information. While you normally have to be a subscriber to see articles, I’ve included a free link so you can read my piece. (Enter your email at the bottom of the page.)

3) There’s Never Been a Better Time to Build a Startup

Taking selfies at the first of what is hopefully many future OpenAI DevDays!

You might be tempted to think the chaos around OpenAI is bad for AI or for startups in general. I strongly disagree: there has probably never been a better time to work in AI or to build something new, because the chaos generates more interest and opens up more opportunities.

OpenAI DevDay — OpenAI’s first-ever developer conference, held just over a week before the chaos — was a magical experience. I met a lot of top developers, researchers, CTOs and OpenAI executives. We watched Sam Altman announce GPT-4 Turbo (a faster, better version of GPT-4 with knowledge up to April 2023) and the GPT app store. Now, everyone can make their own custom GPTs — all without writing a line of code. And let’s not forget the Assistants API, which essentially lets you build your own ChatGPT within your own app (including its image and document-scanning technology).

The energy in the room that day reminded me that today is 100% the best time to start building something new. If you’ve ever thought about taking the plunge, now is the time to build.

It’s never been cheaper to start a company, especially an AI company. Instead of having to hire a ton of developers, marketers and salespeople, AI can help you write code, create marketing materials and optimize your sales copy. AI gives you the ability to test things faster — with fewer people — and iterate quickly until you find product-market fit. And your product can feel magical, thanks to things like the OpenAI Assistants API.

Slow markets are where great startups are made. Easy money (in the form of low interest rates) leads to false positives and overexuberance. The startups that survive the current tech downturn will emerge as leaders in their spaces.

Thank you, OpenAI, for inviting Matt, Katya and me to the show. And thank you for reminding me why I love AI so much — it’s filled with people deeply excited about the magical products they are creating. I’m confident next year’s DevDay will be even more incredible, especially with Sam Altman back at the helm.

Closing Thoughts and Thinking Forward

We will look back at the chaos of the past week as the rebirth of OpenAI. The company will execute faster than ever now that the team is united, and Sam Altman will be more motivated than ever. This team will want to prove a point to the board that tried to slow them down, and those attempts will in fact backfire: I expect the OpenAI team to ship at lightning speed.

While questions will arise around everything from Sam Altman’s initial dismissal to how he runs the organization to who will eventually sit on the board, what has become clear is that OpenAI is building AI that will blow our minds. Now’s the time to fix OpenAI’s governance (specifically the board).

Now that everyone realizes how important OpenAI’s board truly is, the organization is likely to bring on people with significant experience and talent to govern the organization moving forward. Ultimately, we will end up with a more experienced and thoughtful board that will be better able to handle OpenAI’s new AI innovations, including potentially AGI.

From our Partners:

  • Founders should free up their time as much as possible by offloading their busy work. Outsource it to an EA. I use Athena — their EAs come with a playbook on optimizing your life at a shockingly good price. The first month’s free if you sign up through me.

  • Founders shouldn’t be dealing with insurance or determining which checks to write to which state governments. Outsource that, too. We use Levy at Octane AI.

  • You can also optimize your work by integrating AI into every facet of your life. Matt and I teach a class, Co-working with AI, where we teach people how to get the most out of AI tools like ChatGPT. The next cohort starts in January. And it’s 25% off today through next Wednesday if you use the code MAVENMONDAY! This time, we’ll also teach our students how to build custom GPTs to enhance their lives.

  • Didn’t get enough of the AI drama? My co-founder Matt Schlicht talked about the OpenAI chaos on Bloomberg and CNBC.

~ Ben