== Examples ==

=== Word Salad 1 ===

{{Tweet
| name = Emmett Shear
| username = eshear
| text = The incredibly clever trick that all frontier AI labs use which makes LLMs so fabulously powerful: they barely regularize at all, resulting in models which are massively overfit. This would be a problem, except they’re overfit to the domain of all human cultural knowledge.
| date = Aug 30, 2025
| ID = 1961975432238223675
| ref-name = Tweet_1961975432238223675
| block = true
}}

{{Tweet
| name = rizz or bust
| username = rizzorbust
| replyto = eshear
| text = what a word salad emmett also reminding us he was ceo of OAI for 24hrs The incredibly clever trick that all frontier AI labs use which makes LLMs so fabulously powerful: they barely regularize at all, resulting in models which are massively overfit. This would be a problem, except they’re overfit to the domain of all human cultural knowledge.
| date = Aug 30, 2025
| ID = 1962255199470321841
| ref-name = Tweet_1962255199470321841
| block = true
}}

Gemma 3n 4B flawlessly comprehends the message, so the text's LCL is bounded above by 4B.

[[File:{{#setmainimage:Emmett Shear Tweet Evaluation by Gemma 3n 4b.png}}|frame|center]]

I couldn't test a smaller Gemma model at the time because OpenRouter's Gemma 2B doesn't generate output and just returns a 400 error. It is still interesting to observe what happens with smaller models, and Llama 3.2 conveniently has a few sizes available. The comparison quickly illustrates the difference.

[[File:Emmett Shear Tweet Evaluation by Llama 3.2.png|frame|center]]

Repeating the experiment, the smaller 1B model more often describes the tweet as word salad, while the larger 3B model reaches that conclusion more rarely, though it still does not often explain the tweet correctly. The largest models, like Opus 4.1, understand it every time and never make a mistake. So Twitter user rizzorbust likely has an LCL above 1B but below 4B, and perhaps even below 3B.
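For anyone who wants to reproduce the comparison, here is a minimal sketch of the procedure against OpenRouter's OpenAI-compatible chat completions endpoint. The model slugs and prompt wording are my assumptions, not necessarily the exact ones used above:

<syntaxhighlight lang="python">
# Sketch: ask models of different sizes to interpret the tweet via
# OpenRouter. Model slugs and the prompt are assumptions for illustration.
import os
import requests

TWEET = (
    "The incredibly clever trick that all frontier AI labs use which makes "
    "LLMs so fabulously powerful: they barely regularize at all, resulting "
    "in models which are massively overfit. This would be a problem, except "
    "they're overfit to the domain of all human cultural knowledge."
)

MODELS = [
    "meta-llama/llama-3.2-1b-instruct",  # ~1B parameters
    "meta-llama/llama-3.2-3b-instruct",  # ~3B parameters
    "google/gemma-3n-e4b-it",            # ~4B effective parameters
]

def interpret(model: str, text: str) -> str:
    """Ask one model to explain the tweet and return its reply."""
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": model,
            "messages": [
                {"role": "user",
                 "content": f"Explain what this tweet is saying:\n\n{text}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

for model in MODELS:
    print(f"--- {model} ---")
    print(interpret(model, TWEET))
</syntaxhighlight>

Because comprehension at these sizes is stochastic, running the loop several times per model gives a better picture than a single sample.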
=== Word Salad 2 ===

{{Tweet
| name = Roshan George
| username = arjie
| replyto = jstephencarter
| text = Citizens *are* prioritized. They have the best folk working for them rather than just whomever they could find. This is the beauty of America: anyone can be an entrepreneur and when they are, they can hire the guy they think can do the job best.
| date = Jan 2, 2025
| ID = 1874921570935996889
| block = true
}}

{{Tweet
| name = Stephen Carter
| username = jstephencarter
| replyto = arjie
| text = Silly word salad. I can’t hire an eight-year-old. I can’t hire certain types of criminals. It’s outrageous that I can hire a non-citizen when they are American citizens capable of doing the job, and there absolutely are.
| date = Jan 2, 2025
| ID = 1874923807724691763
| block = true
}}

Here, the second tweet calls the first "silly [[wikipedia:Word salad|word salad]]", implying that it is a "confused or unintelligible mixture of seemingly random words and phrases" (as Wikipedia puts it). If an LLM can decode the text, then the text is not, in fact, unintelligible, and it follows that a reader who considers it word salad has an LCL bounded above by that of the text.

[[File:Screenshot of Claude 3.5 Sonnet interpreting a tweet by arjie.png|frame|center|This is a pretty big commercial model, so it is no surprise that it gets it right]]

[[File:Screenshot of Llama 3.2 - 3B interpreting a tweet by arjie.png|frame|center|This Llama 3.2 is quite small at only 3B parameters (still large compared to GPT-2's 1.5B)]]

Based on this, we can safely conclude that the text has an LCL of at most 3 billion, and that the person unable to comprehend the original tweet has an LCL below 3 billion as well.
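To make the bounding logic explicit, here is a small sketch. The pass/fail outcomes are illustrative assumptions standing in for repeated runs; the rule itself is the one used in both examples: the smallest model that reliably decodes a text bounds the text's LCL from above, and a reader who cannot decode it sits below that bound.

<syntaxhighlight lang="python">
# Sketch of the LCL bounding rule. Sizes are in billions of parameters;
# the comprehension outcomes below are illustrative assumptions.
results = {
    1.0: False,  # Llama 3.2 1B: tends to call the text word salad
    3.0: True,   # Llama 3.2 3B: usually decodes it
    4.0: True,   # Gemma 3n 4B: decodes it flawlessly
}

# The smallest model that decodes the text bounds the text's LCL from above.
text_lcl_upper = min(size for size, ok in results.items() if ok)

# A reader who calls the text word salad has an LCL below that bound.
print(f"Text LCL <= {text_lcl_upper:g}B; "
      f"a reader who cannot decode it is below {text_lcl_upper:g}B")
</syntaxhighlight>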