The Strawberry has landed! OpenAI has released o1-preview, its latest impressive demo, just in time for the current funding round. [Press release] The new hotness in Strawberry is chain-of-thought …
“this thing takes more time and effort to process queries, but uses the same amount of computing resources” <- statements dreamed up by the utterly deranged.
“we found that the Turbo button on the outside of the DC wasn’t pressed, so we pressed it”
I often use prompts that are simple and give consistent results, then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything be appropriately considered by the model instead of using multiple simpler prompts.
Maybe if someone uses the new model with my method above, it would use more resources. I'm not really sure. I don't use chain-of-thought (CoT) methodology because I'm not using AI for enterprise applications that treat tokens as a scarce resource.
Was hoping to talk about it, but I don't think I'm going to find that here.
I’m far too drunk for “it can’t be that stupid, you must be prompting it wrong” but here we fucking are
oh no shit? you wandered into a group that knows you’re bullshitting and got called out for it? wonder of fucking wonders
holy fuck they registered 2 days ago and 9 out of 10 of their posts are specifically about the new horseshit ChatGPT model and they’re gonna pretend they didn’t come here specifically to advertise for that exact horseshit
oh im just a smol bean uwu promptfan doing fucking work for OpenAI advertising for their new model on a fucking Saturday night
and as for more important news: the Costco scotch isn’t good, its flavor profile is mostly paint thinner
but their tequila’s still excellent
even bad bathtub gin sounds more appealing
The Kirkland Signature bottled-in-bond Bourbon is well worth the price. Not the best but surprisingly decent. And this concludes my shameless plug.
a lot of their liquor is surprisingly very good! that’s why it’s also surprising how bad their scotch is
Well, there’s your problem
I read this in Justin Roczniak’s voice.
@o7___o7 @blakestacey yay Liam
[ACTIONABLE THREATS]
If only you’d asked ChatGPT “is awful.systems a good place to fellate LLMs”
I asked Gemini!
Reply:
Awful.systems may contain malware or other harmful content.
SLANDER, I SAY
oof, this one stings
also now I’m paranoid the shitheads who operate the various clouds will make the mistake of using the LLM as a malware detector without realizing it’s probably just matching the token for the TLD
we need something for this kind of “I hope to buy time while I await the bomb exploding” shit, in the style of JAQing off
see we were supposed to fall all over ourselves and debate this random stranger’s awful points. we weren’t supposed to respond to their disappointment with “good, fuck off” because then they can’t turn the whole thread into garbage
Are you saying that's not true? Anything to substantiate your claim?
Kay mate, rational thought 101:
When the setup is “we run each query multiple times,” the default position is that it costs more resources. If you claim they use roughly the same amount, you need to substantiate that claim.
Like, that sounds like a pretty impressive CS paper, “we figured out how to run inference N times but pay roughly the cost of one” is a hell of an abstract.
“…we pay for one,
~~suckers~~ VCs pay for the other 45”
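(For anyone who wants the back-of-envelope version of that argument: here's a minimal sketch of the cost arithmetic. Every number in it is made up purely for illustration and has nothing to do with OpenAI's actual pricing or architecture; the only point is that sampling a query N times scales the bill roughly linearly in N.)

```python
# Back-of-envelope cost arithmetic (all numbers hypothetical, for
# illustration only): if a "reasoning" model runs N completions per
# query, plus hidden chain-of-thought tokens, the token bill scales
# roughly linearly with N. There's no known trick that makes N
# inferences cost the same as one.

PRICE_PER_1K_TOKENS = 0.03   # hypothetical $ per 1k output tokens
ANSWER_TOKENS = 500          # hypothetical visible answer length
HIDDEN_COT_TOKENS = 2000     # hypothetical hidden reasoning tokens

def query_cost(n_samples: int) -> float:
    """Cost of one user query if the backend samples n_samples completions."""
    total_tokens = n_samples * (ANSWER_TOKENS + HIDDEN_COT_TOKENS)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"1 sample:   ${query_cost(1):.3f}")   # $0.075
print(f"10 samples: ${query_cost(10):.3f}")  # $0.750 -- 10x, not "the same"
```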