howrar

joined 2 years ago
[–] [email protected] 6 points 10 hours ago* (last edited 10 hours ago) (1 children)

This whole discussion you see above is part of the process of repeating a study. You can't just do exactly what the previous study did and expect all the flaws to magically disappear. You first need to uncover the flaws, and more eyes and more collaboration mean a higher likelihood that the flaws get found, hence the importance of these discussions. Then you redesign the experiment to fix those flaws, and then you can run it again.

[–] [email protected] 100 points 1 day ago (3 children)

How many bits is a /s mask?

[–] [email protected] 7 points 1 day ago* (last edited 1 day ago) (1 children)

We can get a rough estimate for your first question with the information we have. They've shared that it costs them about $200/month, and we can see from the sidebar that we have 3k users per 6 months (estimate for number of active users). That means approximately 7c/month per user.
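
(Spelling out the arithmetic: $200/month ÷ 3,000 users ≈ $0.067 per user, which rounds to that 7¢ figure.)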

[–] [email protected] 2 points 1 day ago

I'm pretty sure you're supposed to chew them. I don't know how you're supposed to get any of the oyster flavour otherwise.

[–] [email protected] 18 points 2 days ago (1 children)

What do you mean by "do"? Are you looking for advice on how to interact with them? Or what society should do to help them?

[–] [email protected] 10 points 2 days ago

I'm amused by this 301.

[–] [email protected] 3 points 2 days ago

Naw, that definitely defies the laws of physics.

[–] [email protected] 7 points 2 days ago (1 children)

What's wrong with the chargers? We have one of those brushes here and it just lives on the charger when not in use. We've never had any issues.

[–] [email protected] 3 points 2 days ago

Only if you eat it under your umbrella

[–] [email protected] 5 points 3 days ago (1 children)

Oof. What's the prognosis?

[–] [email protected] 3 points 4 days ago (1 children)

Food / cooking:

Others:

[–] [email protected] 0 points 4 days ago

Reread your post and I'm still not getting it. What was the point?

 

The Homework Machine, oh the Homework Machine,

Most perfect contraption that's ever been seen.

Just put in your homework, then drop in a dime,

Snap on the switch, and in ten seconds' time,

Your homework comes out, quick and clean as can be.

Here it is—"nine plus four?" and the answer is "three."

Three?

Oh me . . .

I guess it's not as perfect

As I thought it would be.

 

I don't know the legislative process very well, but to the best of my understanding, the last step is a vote on whether to pass a bill: a simple majority means it passes, otherwise it's rejected. This leads to an interesting (and possibly dangerous) dynamic where the government can behave very differently depending on whether or not the winning party has a majority. When one party does have a majority, we can end up with what's called "tyranny of the majority". And a smaller party's influence barely changes anywhere between holding a single seat and holding just enough seats to push a coalition over the majority line (see the toy sketch below). It means that even if we get proportional voting for selecting MPs, we might still need to vote strategically in order to either ensure or prevent a majority government, or to encourage a specific coalition government.

Do we have any potential solutions for this? Or did I maybe misunderstand how things work and this isn't actually a problem?
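
To make that influence jump concrete, here's a toy sketch in plain Python. The 100-seat chamber and the 48-seat leading party are made-up numbers for illustration, not a model of any real legislature:

```python
# Toy model of a simple-majority vote; all seat counts are hypothetical.
TOTAL_SEATS = 100
LEADING_PARTY = 48  # largest party, just short of a majority on its own

def bill_passes(coalition_seats: int, total: int = TOTAL_SEATS) -> bool:
    """A bill passes iff the coalition holds a strict majority of seats."""
    return coalition_seats > total // 2

for partner_seats in (1, 2, 3, 5, 20):
    coalition = LEADING_PARTY + partner_seats
    print(f"partner brings {partner_seats:2d} seats -> "
          f"coalition of {coalition}: passes={bill_passes(coalition)}")

# A 1- or 2-seat partner changes nothing, a 3-seat partner flips every
# vote, and a 20-seat partner adds nothing that 3 seats didn't already.
# Formal influence depends only on crossing the 50% line.
```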

 

If there's insufficient space around it, then it'll never spawn anything. This can be useful if you want to keep a specific spawner around for capture later but don't want to spend resources on killing the constant stream of biters.

 

Ten years ago, Dzmitry Bahdanau from Yoshua Bengio's group recognized a flaw in RNNs: the information bottleneck of squeezing everything through a fixed-length hidden state. They put out a paper introducing attention to rectify this issue. Not long after that, a group of researchers at Google found that you can get rid of the RNN altogether and still get great results with improved training performance, giving us the transformer architecture in their Attention Is All You Need paper. But transformers are expensive at inference time and scale poorly with increasing context length, unlike RNNs. Clearly, the solution is to just use RNNs. Two days ago, we got Were RNNs All We Needed?
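
For anyone who hasn't seen the mechanism spelled out, here's a minimal sketch of scaled dot-product attention, the core operation from the Attention Is All You Need paper (NumPy, single head, no learned projections or masking, so a deliberate simplification of the full architecture). It also shows where the scaling complaint comes from:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are (seq_len, d) arrays. Real transformers add learned
    projections, multiple heads, and masking; this is only the core idea.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (seq_len, seq_len) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each position mixes information from all positions

# The (seq_len, seq_len) score matrix is why compute and memory grow as
# O(n^2) in context length n, while an RNN carries a fixed-size state and
# pays O(n) total, at the cost of the bottleneck described above.
seq_len, d = 8, 16
x = np.random.randn(seq_len, d)
print(scaled_dot_product_attention(x, x, x).shape)  # (8, 16)
```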
