Nice article. As you suggest, I’m sure it’s wrong, but that’s what makes it so good. You framed the problem really well and that makes finding solutions easier.
One that I can imagine is having societies of AI. We can have carefully curated mini LLMs that are better at logic (for instance) or good at finding dark patterns. Those can operate in parallel or provide data for foundational or big mama models.
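To make that slightly more concrete, here's a toy sketch in Python of what I mean - every model call below is a made-up stub, just to show specialist checkers running in parallel and feeding their verdicts back to a bigger model:

```python
# Toy illustration only: specialist "mini" models reviewing a draft in
# parallel, with their verdicts handed to a bigger model for revision.
# Every model call below is a made-up stub, not a real LLM.
from concurrent.futures import ThreadPoolExecutor

def logic_checker(text: str) -> str:
    # Stand-in for a small model tuned for logical consistency.
    return "logic: suspicious double negation" if "not not" in text else "logic: no obvious issue"

def dark_pattern_checker(text: str) -> str:
    # Stand-in for a small model tuned to spot manipulative phrasing.
    return "dark patterns: urgency pressure" if "act now" in text.lower() else "dark patterns: clean"

SPECIALISTS = [logic_checker, dark_pattern_checker]

def big_mama_model(draft: str, reviews: list[str]) -> str:
    # Stand-in for the foundational model revising its draft given the reviews.
    return draft + "\n[revised in light of: " + "; ".join(reviews) + "]"

def society_answer(draft: str) -> str:
    with ThreadPoolExecutor() as pool:  # the specialists run in parallel
        reviews = list(pool.map(lambda check: check(draft), SPECIALISTS))
    return big_mama_model(draft, reviews)

print(society_answer("Act now! This offer is not not amazing."))
```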
Another approach might be developing robust systems that get stronger the more they reflect on themselves. Rather than degrading in quality due to lack of human input, they actually get better. That's a hypothetical, but I think it is plausible that AI-generated logic can help train more robust logical reasoning. And closed gardens of AI-generated spam might help models learn to separate fluff from substance.
In any case, I think it will quickly go from “AGI as a monolith” into a bizarre Darwinian ecology of competing and cooperating intelligences. I do think there is something happening now that we will all feel nostalgic for soon.
Thanks for the kind words!
I was thinking about this idea of small, hyper-targeted LLMs - but it seems like LLMs only get good when they get big (Llama-7B is loads worse than 13B, for example) and are trained on a lot of data... so far. So can we build these small LLMs that would actually work?
An analogy for the GPT-training-on-GPT point you're making is AlphaGo Zero, which did learn only by playing versions of itself and did get better - but it had some ways of objectively evaluating its relative strength. Maybe what we need is a great LLM evaluator so we can do something like this?
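Roughly what I'm imagining is a loop like this (all the functions below are placeholder stubs I made up; the whole open problem is hiding inside evaluate(), which for AlphaGo Zero was simply winning games):

```python
# Toy sketch of an AlphaGo-Zero-style loop for LLMs: generate candidates,
# score them with an evaluator, and fine-tune only on the ones that pass.
# Every function here is a placeholder stub, not a real training pipeline.
import random

def generate_candidates(prompt: str, n: int = 8) -> list[str]:
    # Stand-in for sampling n completions from the current model.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def evaluate(candidate: str) -> float:
    # Stand-in for the "great LLM evaluator"; here it is just noise.
    return random.random()

def finetune(model_state: dict, examples: list[str]) -> dict:
    # Stand-in for a fine-tuning step on the accepted examples.
    model_state["trained_on"] = model_state.get("trained_on", 0) + len(examples)
    return model_state

def self_improvement_loop(prompts: list[str], rounds: int = 3, threshold: float = 0.8) -> dict:
    model_state: dict = {}
    for r in range(rounds):
        accepted = [c for p in prompts for c in generate_candidates(p)
                    if evaluate(c) >= threshold]  # keep only what the evaluator trusts
        model_state = finetune(model_state, accepted)
        print(f"round {r}: kept {len(accepted)} candidates")
    return model_state

self_improvement_loop(["prove this lemma", "refactor this function"])
```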
"It will quickly go from “AGI as a monolith” into a bizarre Darwinian ecology of competing and cooperating intelligences. I do think there is something happening now that we will all feel nostalgic for soon." Love this, nice insight.
Thought-provoking piece. Sure, a lot of assumptions, but not far-fetched and all within the realm of possibility. I'm definitely stealing the peak-oil analogy. It matches some of Sam Altman's comments about how improving future models will not be about growth in model size. It looks like OpenAI acknowledges they have already hit something of a glass ceiling there.
It's good to see the world is opening its eyes, slowly, to the limitations and adversarial vulnerabilities that come with deploying LLMs. But at the same time, thousands of applications are being built on top of these models every day and rolled out to millions of users...
It doesn't take a fortuneteller to know it'll only be a matter of time before we see the downstream effects of that.
Thanks for the kind words!
It seems I get pitched a new LLM app every day; I wonder how long until we get to the LLM winter :P
My feeling is the novelty will soon wear off, but the tech is here to stay.
I'm thinking training only on post-2023 Wikipedia is probably a good option. Wikipedia isn't without biases, but it is aggressively curated (some would say too aggressively, but that's probably better than the alternative for this use case).
Yeah, but is there even enough data on Wikipedia to get an LLM trained to a decent level? It seems they need A LOT of data before they become good.
If they've trained on a much larger pre-2023 dataset (as most of these LLMs have), then it seems like fine-tuning on Wikipedia to keep the model up to date with the latest happenings should work. Maybe there are other human-curated spaces on the internet that could be included?
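For what it's worth, the mechanical part of that seems doable with the usual Hugging Face stack. A minimal, untested sketch, where the base checkpoint and the Wikipedia snapshot name are placeholders I'm assuming rather than a vetted recipe:

```python
# Minimal, untested sketch of continued pre-training on a recent Wikipedia
# snapshot with Hugging Face transformers/datasets. The base model ("gpt2")
# and the snapshot config ("wikimedia/wikipedia", "20231101.en") are
# placeholder assumptions - check the Hub for the exact identifiers you want.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # stand-in; swap in whatever larger checkpoint you actually use
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed snapshot name; the Hub hosts dated English Wikipedia dumps.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = wiki.map(tokenize, batched=True, remove_columns=wiki.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="wiki-refresh",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```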
I wonder if news sources from "trusted" parties could also help in updating the model's knowledge (e.g. add everything from BBC/NYT/WaPo/FT/...), essentially everywhere that has a very strong incentive to be "truthful", although (1) obviously they have their own biases, and (2) I doubt the news producers will like that very much, unless you pay them loads of money :P
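The boring-but-necessary first step would just be filtering a crawl down to an allowlist. A toy sketch, with a made-up record format and domain list; as said, licensing is the real obstacle:

```python
# Toy sketch: filter crawled records down to an allowlist of outlets.
# The domain list and record format are made up for illustration;
# getting the rights to train on this text is the real obstacle.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"bbc.co.uk", "bbc.com", "nytimes.com",
                   "washingtonpost.com", "ft.com"}

def is_trusted(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # Accept the bare domain or any subdomain (e.g. www.nytimes.com).
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def filter_records(records: list[dict]) -> list[dict]:
    return [r for r in records if is_trusted(r.get("url", ""))]

sample = [{"url": "https://www.bbc.com/news/some-story", "text": "..."},
          {"url": "https://random-content-farm.net/post", "text": "..."}]
print(filter_records(sample))  # keeps only the BBC record
```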
What is SSO-optimization? Do you mean SEO? Or SSO for Single Sign On?
Absolutely meant SEO, and I managed to write it wrong a billion times - thanks for the heads up, fixed :)
So Search Engine Optimization optimization :)
Maybe I should go to an ATM machine to pay you for the editorial work 🤦‍♂️