this post was submitted on 06 Jan 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 0 points 1 week ago (3 children)

CEO personally chose a price too low for company to be profitable.

What a clown.

[–] [email protected] 0 points 1 week ago (3 children)

well, yes. But this is also an extremely difficult product to price. $200/month is already insane, but now you're suggesting they should've gone even more aggressive. It could turn out almost nobody would use it. An optimal price here is a tricky guess.

Although they probably should've sold a "limited subscription": one that gives you at most the break-even number of queries per month, or 2x that, but not 100x, and not unlimited. Otherwise exactly what happened here can happen.

[–] [email protected] 0 points 1 week ago

I signed up for API access. I run all my queries through that. I pay per query. I've spent about $8.70 since 2021. This seems like a win-win model. I save hundreds of dollars and they make money on every query I run. I'm confused why there are subscriptions at all.

[–] [email protected] 0 points 1 week ago (1 children)

The real problem is believing that you can run a profitable LLM company.

[–] [email protected] 0 points 1 week ago (1 children)

What the LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.
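As a toy sketch of that cost/utility mismatch (every number below is invented for illustration, not a real figure):

```python
# Toy model of the scaling claim (all numbers invented for illustration):
# each successive model generation costs far more to train, while the
# quality improvement it buys keeps shrinking.
def marginal_cost(generation: int) -> float:
    """Training cost of generation g, growing exponentially."""
    return 1e6 * 10 ** generation

def marginal_utility(generation: int) -> float:
    """Extra value delivered by generation g, decaying exponentially."""
    return 100.0 * 0.5 ** generation

for g in range(5):
    print(g, marginal_cost(g), marginal_utility(g))
```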

[–] [email protected] 0 points 1 week ago (1 children)

Some LLM bros must have seen this comment and become offended.

[–] [email protected] 0 points 1 week ago (2 children)

guess again

what the locals are probably taking issue with is:

If you want a more precise model, you need to make it larger.

this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology ~~and because you’re all-in on the grift~~

[–] [email protected] 0 points 1 week ago (1 children)

Well, then let me clear it up. The statistics become more precise. As in, for a given prefix A and token x, the difference between the model's calculated probability of x following A and the actual probability P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, then the LLM can't produce a meaningful answer. And if you're working on a halfway ambitious project, then you're virtually guaranteed to encounter a novel problem.
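A toy illustration of "the statistics become more precise", with more observed samples standing in for more model capacity, and an assumed true distribution P(x|A) for one fixed prefix:

```python
import random
from collections import Counter

random.seed(0)

# Assumed "true" next-token distribution P(x|A) for one fixed prefix A.
true_p = {"cat": 0.5, "dog": 0.3, "fish": 0.2}
tokens = list(true_p)
weights = [true_p[t] for t in tokens]

def estimate(n_samples: int) -> dict:
    """Empirical estimate of P(x|A) from n_samples observed continuations."""
    counts = Counter(random.choices(tokens, weights=weights, k=n_samples))
    return {t: counts[t] / n_samples for t in tokens}

def max_error(est: dict) -> float:
    """Largest gap between the estimate and the true distribution."""
    return max(abs(est[t] - true_p[t]) for t in tokens)

# A bigger "model" (here: more observations) usually tracks P(x|A) tighter:
print(max_error(estimate(100)))
print(max_error(estimate(100_000)))
```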

[–] [email protected] 0 points 1 week ago

Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

it doesn’t produce any meaningful answers for non-novel problems either

[–] [email protected] 0 points 1 week ago

look bro just 10 more ~~reps~~ gpt3s bro itl’ll get you there bro I swear bro

[–] [email protected] 0 points 1 week ago

"Our product that costs metric kilotons of money to produce but provides little-to-no value is extremely difficult to price" oh no, damn, ye, that's a tricky one

[–] [email protected] 0 points 1 week ago (3 children)

More like he misjudged subscriber numbers than price.

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Wait but he controls the price, not the subscriber number?

Like even if the issue was low subscriber number (which it isn't since they're losing money per subscriber, more subscribers just makes you lose money faster), that's still the same category of mistake? You control the price and supply, not the demand, you can't set a stupid price that loses you money and then be like "ah, not my fault, demand was too low" like bozo it's your product and you set the price. That's econ 101, you can move the price to a place where your business is profitable, and if such a price doesn't exist then maybe your biz is stupid?
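The arithmetic behind "more subscribers just makes you lose money faster" (only the $200/month price is real; the per-subscriber serving cost is a hypothetical stand-in):

```python
# Unit economics of a flat subscription (the $200/month price is real;
# the per-subscriber serving cost below is a hypothetical stand-in):
PRICE = 200.0           # $/month charged per subscriber
COST_PER_SUB = 250.0    # hypothetical $/month to serve one subscriber

def monthly_profit(subscribers: int) -> float:
    """If cost per subscriber exceeds price, scale only deepens the loss."""
    return subscribers * (PRICE - COST_PER_SUB)

print(monthly_profit(1_000))    # -50000.0
print(monthly_profit(10_000))   # -500000.0: 10x the subscribers, 10x the loss
```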

[–] [email protected] 0 points 1 week ago

I believe our esteemed poster was referencing the oft-seen cloud dynamic of “making just enough in margin” where you can tolerate a handful of big users because you have enough lower-usage subscribers in aggregate to counter the heavies. which, y’know, still requires the margin to exist in the first place

alas, hard to have margins in Setting The Money On Fire business models

[–] [email protected] 0 points 1 week ago (2 children)

please explain to us how you think having fewer, or more, subscribers would make this profitable

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (2 children)

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can't fill the optimal batch size.

That said, the optimal batch size on today's hardware is not big (<20). I would be very very surprised if they couldn't fill it for any few-seconds window.
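The batching point as a toy cost model (the constants are invented; only the "<20" optimal-batch figure comes from the comment above):

```python
# Toy cost model for batched inference (constants invented; only the
# "<20" optimal-batch figure comes from the comment above):
FIXED_COST_PER_BATCH = 1.0   # per-forward-pass overhead, arbitrary units
COST_PER_REQUEST = 0.1       # marginal compute per extra request in the batch

def cost_per_request(batch_size: int) -> float:
    """Amortized cost of one request when batch_size requests share a pass."""
    return FIXED_COST_PER_BATCH / batch_size + COST_PER_REQUEST

print(cost_per_request(1))    # 1.1: a lone request eats the whole overhead
print(cost_per_request(20))   # ~0.15: a full batch amortizes it away
```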

[–] [email protected] 0 points 1 week ago (1 children)

i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

[–] [email protected] 0 points 1 week ago

yep, original is still visible on mastodon

[–] [email protected] 0 points 1 week ago (1 children)

this sounds like an attempt to demand others disprove the assertion that they're losing money, in a discussion of an article about Sam saying they're losing money

[–] [email protected] 0 points 1 week ago (1 children)

What? I'm not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

[–] [email protected] 0 points 1 week ago (1 children)

oh, so you’re that kind of fygm asshole

good to know

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_[email protected] asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what ~~Open~~AI is operating at.

@[email protected] why did you say I am demanding someone disprove the assertion? Are you misunderstanding "I would be very very surprised if they couldn't fill [the optimal batch size] for any few-seconds window" to mean "I would be very very surprised if they are not profitable"?

The tweet I linked shows that good LLMs can be much cheaper. I am saying that ~~Open~~AI is very inefficient and thus economically "cooked", as the post title will have it. How does this make me FYGM? @[email protected]

[–] [email protected] 0 points 1 week ago

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

my god! let me fix that

[–] [email protected] 0 points 1 week ago

Yeah, the tweet clearly says that the subscribers they have are using it more than they expected, which is costing them more than $200 per month per subscriber just to run it.

I could see an argument for an economies-of-scale situation where adding more users would offset the cost per user, but it seems like here that would just increase their overhead, making the problem worse.

[–] [email protected] 0 points 1 week ago

despite that one episode of Leverage where they did some laundering by way of gym memberships, not every shady bullshit business that burns way more than they make can just swizzle the numbers!

(also if you spend maybe half a second thinking about it you’d realize that economies of scale only apply when you can actually have economies of scale. which they can’t. which is why they’re constantly setting more money on fire the harder they try to make their bad product seem good)

[–] [email protected] 0 points 1 week ago (1 children)

They're still in the first stage of enshittification: gaining market share. In fact, this is probably all just a marketing scheme. "Hi! I'm Crazy Sam Altman and my prices are SO LOW that I'm LOSING MONEY!! Tell your friends and subscribe now!"

[–] [email protected] 0 points 1 week ago (1 children)

I’m afraid it might be more like Uber, or Funko, apparently, as I just learned tonight.

Sustained somehow for decades before finally turning any profit. Pumped full of cash like a foie gras goose. Inorganic as fuck, promoted like hell by Wall Street, VC, and/or private equity.

Shoved down our throats in the end.

[–] [email protected] 0 points 1 week ago

It was worth it to finally dethrone Big Taxi🙄