Image description
Image shows user joined two weeks ago.
Yikes. Could be a troll (I hope it’s a troll)
The video about Anime and Propaganda is very good and recommended. As a progressive weeb living in Japan, a very cathartic watch.
From para-social to faux-social!
SaaS = ~~Storage as a Service~~ Sneer as a Service
Pedantic note: Yes, Meditations (a philosophical treatise) was written in Koine; Commentarii de Bello Gallico (veni, vidi, vici—self-aggrandizing combat reports meant for the senate and propaganda) and other “published” works from Caesar were not.
Although bonus points, the ancient sources portray Caesar (a proper educated major family Patrician) as speaking his dying words—if reported saying anything at all—in Greek, not in Latin: “Καὶ σὺ τέκνον” (Even you, child) rendered in Shakespeare as “Et tu, Brute”.
And also closing with:
Nvidia insists that it “wins on merit, as reflected in our benchmark results and value to customers.” And Nvidia does have the best stuff — but that’s not what the DOJ, Warren, or France are concerned about, is it?
To tie the bow nicely.
I should have used the preview ^_^
PS: Again!
Reading about the hubris of young Yud is a bit sad, a proper Tragedy. Then I have to remind myself that he remains a manipulator, and that he should be old enough to stop believing in, and promoting, magical thinking.
More tedious work with worse pay \o/
Also a subjectively bad one at that; given his America-brained position on wanting to maintain a single executive, not that surprising, but:
*Epithet
Haven’t read the whole thing but I do chuckle at this part from the synopsis of the white paper:
[…] Our results suggest that AlphaProteo can generate binders “ready-to-use” for many research applications using only one round of medium-throughput screening and no further optimization.
And a corresponding anti-sneer from Yud (xcancel.com):
@ESYudkowsky: DeepMind just published AlphaProteo for de novo design of binding proteins. As a reminder, I called this in 2004. And fools said, and still said quite recently, that DM’s reported oneshot designs would be impossible even to a superintelligence without many testing iterations.
Now, “medium-throughput” is not a commonly defined term, but it’s what DeepMind seems to call 96-well testing, which Wikipedia just calls the smallest size of high-throughput screening—but I guess that sounds less impressive in a synopsis.
Which, as I understand it, basically boils down to “Hundreds of tests! But only once!”.
Does 100 count as one or many iterations?
Also, wasn’t all of this guided by the researchers, rather than the from-first-principles-analyzing-only-3-frames-of-the-video-of-a-falling-apple-and-deducing-the-whole-of-physics path so espoused by Yud?
Also, doesn’t the paper claim success for 7 proteins and failure for 1, making it maybe a tad early for claiming I-told-you-so?
Also real-life-complexity-of-myriads-and-myriads-of-protein-and-unforeseen-interactions?
Another dumb take from Yud on twitter (xcancel.com):
@ESYudkowsky: The worst common electoral system after First Past The Post - possibly even a worse one - is the parliamentary republic, with its absurd alliances and frequently falling governments.
A possible amendment is to require 60% approval to replace a Chief Executive; who otherwise serves indefinitely, and appoints their own successor if no 60% majority can be scraped together. The parliament’s main job would be legislation, not seizing the spoils of the executive branch of government on a regular basis.
Anything like this ever been tried historically? (ChatGPT was incapable of understanding the question.)
BasicSteps™ for making cake:
Any further details are self-evident really.
Quinn enters the dark and cold forest, crossing the threshold, an omnipresent sense of foreboding permeates the air, before being killed by a grue.
I realize it’s probably a toy example, but specifically for “cats” you could achieve similar results by running a thesaurus/synonym-set on your stem words. With the added benefit that a client could add custom synonyms, for more domain-specific stuff that the LLM would probably not know, and not reliably learn in-prompt or with fine-tuning. (Although I’d argue that if I’m looking for cats, I don’t want to also see videos of tigers based on the LLM’s “understanding” of what a cat might be.)
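To be concrete about what I mean, here’s a minimal sketch of synonym-set query expansion, no LLM involved; the synonym table and the video labels are entirely made-up examples:

```python
# Hypothetical synonym table; in practice a client could extend this
# with their own domain-specific terms.
SYNONYMS = {
    "cat": {"cat", "cats", "kitten", "kitty", "feline"},
    "dog": {"dog", "dogs", "puppy", "canine"},
}

def expand_query(term: str) -> set[str]:
    """Return the term plus any synonyms registered for it."""
    return SYNONYMS.get(term, {term})

def search(videos: dict[str, set[str]], term: str) -> list[str]:
    """Match videos whose label set intersects the expanded query terms."""
    wanted = expand_query(term)
    return sorted(name for name, labels in videos.items() if labels & wanted)

videos = {
    "funny kitten compilation": {"kitten", "funny"},
    "tiger documentary": {"tiger", "wildlife"},
    "puppy training": {"puppy", "training"},
}
print(search(videos, "cat"))  # the tiger video is, deliberately, not matched
```

Note that searching for “cat” finds the kitten video but not the tiger documentary, which is exactly the controllable behavior you lose when the matching is delegated to an LLM’s notion of cat-ness.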
For the labeling of videos itself, the most valuable labels would be added by humans, and/or full-text search on the transcript of the video if applicable, speech-to-text being more in the realm of traditional ML than in the realm of GenAI.
As a minor quibble, your use case of GenAI is not really “Generative”, which is the main thing it’s being sold as.
No no no it’s fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.