OmnipotentEntity

joined 1 year ago
[–] [email protected] 6 points 2 days ago

IIRC, some SMR designs also have this property by design, though this is the first time I've heard of it actually being tested at scale.

[–] [email protected] 13 points 2 days ago (2 children)

The scam is that they are actually doing the work, getting paid well

Listen. I know that there is some really shitty stuff going on in North Korea, and very real threats that their government is capable of, and it sucks for the people living there who have to do this work under threat of death.

But if you say that "the scam" is they're doing work and receiving full pay for work done, I'm going to make fun of you. Oh no, someone outside of the West did work and was slightly less exploited by capital than usual in the process. Horror upon horror.

[–] [email protected] 21 points 3 days ago

Most recently, other than Trump, George H. W. Bush lost the election as the incumbent. Prior to that, it was Jimmy Carter.

The next most recent person to win the election but lose the popular vote was George W. Bush; prior to that, it was Benjamin Harrison back in 1888.

[–] [email protected] 2 points 1 week ago (1 children)

Please don't tell me you actually, unironically use the Carmack rsqrt function in the year of our Linux Desktop 2024.

Also, if you like, you can write "unsafe" Rust in safe Rust instead.
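
For the curious, a minimal sketch of that trick in safe Rust, using the well-known Quake III constants: `f32::to_bits`/`f32::from_bits` do the reinterpretation that the original C code did with a pointer cast, no `unsafe` block required.

```rust
// Fast inverse square root, Quake III style, in entirely safe Rust.
fn fast_inv_sqrt(x: f32) -> f32 {
    let i = x.to_bits();           // reinterpret the f32's bits as a u32
    let i = 0x5f3759df - (i >> 1); // the famous magic-constant estimate
    let y = f32::from_bits(i);     // reinterpret the bits back to an f32
    y * (1.5 - 0.5 * x * y * y)    // one Newton-Raphson refinement step
}

fn main() {
    println!("{}", fast_inv_sqrt(4.0)); // prints roughly 0.499
}
```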

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (3 children)

std::mem::transmute
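
Which is the unsafe route. A tiny sketch of the same bit pun done that way, for contrast with the safe `to_bits` version above:

```rust
fn main() {
    let x: f32 = 4.0;
    // SAFETY: f32 and u32 have the same size, and every bit pattern
    // is a valid u32, so this transmute is sound.
    let bits: u32 = unsafe { std::mem::transmute(x) };
    println!("{bits:#010x}"); // prints 0x40800000, same as x.to_bits()
}
```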

[–] [email protected] 18 points 3 weeks ago* (last edited 3 weeks ago)

6 million cars, the fine is $140 million. That's about $23 per car. There's no way that GM saved only $23 per car doing this. So the fine is just a cost of doing business.

EDIT:

The company has also voluntarily retired about 50 million tons of carbon dioxide pollution credits, which are issued by the E.P.A. and used by auto companies to make it easier to comply with increasingly stringent federal tailpipe emissions standards. G.M. estimates the value of the loss of the credits at about $300 million, reflecting what it paid for them a decade or so ago. However, the market value of those carbon credits varies, and a more recent government estimate of $86 per credit would put the value at about $4.6 billion.

This is probably where the actual sting to them is.
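
A quick back-of-envelope check of those figures (the $86 per credit is the article's "more recent government estimate"; the small gap to its $4.6 billion suggests the retired-credit count is a bit above 50 million):

```rust
fn main() {
    // The fine spread across the affected fleet
    let per_car = 140_000_000.0_f64 / 6_000_000.0; // ≈ $23.33 per car
    // ~50 million retired credits at the $86 estimate
    let credits = 50_000_000.0_f64 * 86.0; // ≈ $4.3 billion
    println!("~${per_car:.2} per car; ~${:.1}B in retired credits", credits / 1e9);
}
```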

[–] [email protected] 1 point 3 weeks ago

Orb mommy 🔮🔮🔮🔮

[–] [email protected] 1 point 4 weeks ago* (last edited 4 weeks ago) (2 children)

(please attend to primaries next time...)

So... should I have voted for Marianne Williamson or Dean Phillips, keeping in mind that Dean Phillips formally withdrew from the race before my state's primary, and that Marianne Williamson couldn't have won even if she had swept every state after and including mine?

I think the problem is mostly that the US system of elections is turbo mega fucked.

[–] [email protected] 9 points 1 month ago (1 children)

In 2-3 days the New York Times is going to breathlessly report that Biden called up Netanyahu, scolded him, and gave him yet another ultimatum.

[–] [email protected] 10 points 1 month ago (1 children)

Armorosus Diligentia

I think.

[–] [email protected] 2 points 1 month ago

Solar attached to homes is not really a scalable solution on its own. For one thing, it's a massive liability for the utility. Power is produced on an as-needed, just-in-time basis. Putting extra power onto the grid makes the load less predictable, and if the utility doesn't have storage, that extra power can be pure excess. There isn't a convenient, safe way to dump persistent excess power at grid scale, and the utility can't phone you up and ask you to shut down your solar array either.

This is why you see negative energy prices from time to time. Oversupply is a problem and it can wreck equipment.
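
A toy illustration of that, with made-up numbers (nothing here models any real grid):

```rust
fn main() {
    // Hypothetical midday hours: utility demand vs. rooftop solar feed-in (MW).
    let demand = [950.0, 1000.0, 980.0_f64];
    let solar = [700.0, 1100.0, 900.0_f64];

    for (hour, (d, s)) in demand.iter().zip(solar.iter()).enumerate() {
        let net = d - s; // what the utility's own plants must still supply
        if net < 0.0 {
            println!("hour {hour}: net load {net} MW -> oversupply, prices can go negative");
        } else {
            println!("hour {hour}: net load {net} MW");
        }
    }
}
```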

 

Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.
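
Roughly, the formal setup as I read the abstract (notation mine, not necessarily the paper's):

```latex
% h : a computable LLM, f : a computable ground-truth function,
% S : the set of input strings.
% h hallucinates with respect to f iff it disagrees with f somewhere:
\exists\, s \in S : \quad h(s) \neq f(s)
% The claim: no computable h satisfies h(s) = f(s) for all s against
% every computable f, so some ground truth always forces a hallucination.
```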

 

You might know the game under the name Star Control 2. It's a wonderful game that involves wandering around deep space, meeting aliens, and navigating a sprawling galaxy while trying to save the people of Earth, who are being kept under a planetary shield.

 

Subverting Betteridge's law of headlines. Yes.
