oktoberpaard

joined 1 year ago
[–] [email protected] 7 points 6 days ago

It’s off by default, but activated when you end your search query with a question mark. That option can be turned off.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago) (1 children)

But 2K and 4K do refer to the horizontal resolution. There’s more than one resolution that’s referred to as 2K: for example 2048 × 1080 (DCI 2K), but also 1920 × 1080 (Full HD), since it’s also almost 2000 pixels wide. The total number of pixels is in the millions, not thousands.

For 4K some common resolutions are 4096 × 2160 (DCI 4K) and 3840 × 2160 (UHD), which both have a horizontal resolution of about 4000 pixels.
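To make the naming concrete, here’s a quick sanity check (simple arithmetic, not from the original comment):

```python
# "4K" refers to the ~4000-pixel horizontal resolution,
# while the total pixel count is in the millions.
w, h = 3840, 2160          # UHD
total = w * h
print(total)               # 8294400 pixels, i.e. about 8.3 megapixels
```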

[–] [email protected] 0 points 1 week ago (1 children)

Relax, I’m probably just as worried about climate change and wasted resources as you are, if not more. My take was ironic.

[–] [email protected] 0 points 1 week ago (11 children)

Thought for 95 seconds

Rearranging the letters in "they are so great" can form the word ORION.

That’s from the screenshot where they asked the o1 model about the cryptic tweet. There’s certainly utility in these LLMs, but it made me chuckle thinking about how much compute power was spent coming up with this nonsense.
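For what it’s worth, the claim is easy to disprove with a letter count (a hypothetical helper, not anything from the screenshot):

```python
from collections import Counter

def can_spell(word: str, phrase: str) -> bool:
    # True if `word` can be formed from the letters of `phrase`
    # (with multiplicity), ignoring spaces and case
    return not Counter(word.lower()) - Counter(phrase.replace(" ", "").lower())

print(can_spell("orion", "they are so great"))
# False: ORION needs two o's, an i and an n; the phrase has one o and neither i nor n
```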

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

It’s called Markdown. If you know how to use it, it’s very convenient, but if not, it can cause all kinds of unexpected effects. It’s also the reason that **test** is formatted as test. You can often force new lines by ending the line with two spaces or a backslash character. Let’s test:

This is a line
And this is another line

In this case it was a backslash character. So what I typed is:

This is a line\
And this is another line

With spaces:

This is a line
And this is another line
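The two hard-break rules above can be sketched as a tiny converter (a minimal sketch of the CommonMark behaviour, not how any particular forum implements it):

```python
import re

def hard_breaks(text: str) -> str:
    # A line ending in a backslash, or in two or more spaces,
    # becomes a hard break (<br>) instead of a soft line break.
    text = re.sub(r"\\\n", "<br>\n", text)
    text = re.sub(r" {2,}\n", "<br>\n", text)
    return text

print(hard_breaks("This is a line\\\nAnd this is another line"))
```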

[–] [email protected] 2 points 1 week ago (1 children)

I use that one on iOS. In Firefox I use the native functionality (the cookiebanners.service.mode flag). See https://community.mozilla.org/en/campaigns/firefox-cookie-banner-handling/. I also set cookiebanners.ui.desktop.enabled to true to make this setting appear in the settings menu.

[–] [email protected] 6 points 2 weeks ago

Using English is the only way all my colleagues can read it, but if it’s just meant for you, or only for Spanish-speaking people, I’d say why not.

[–] [email protected] 16 points 3 weeks ago (1 children)

That’s not as effective: it can’t block content served from a hostname that also serves regular content without blocking the regular content too. It also can’t trick websites into thinking that nothing is blocked, and it can’t apply cosmetic rules. I use it for my devices, but in browsers I supplement it with uBlock Origin (or whatever is available in that browser).

[–] [email protected] 3 points 3 weeks ago

I agree. I think people might have the idea that the state dictates the contents, but that’s not at all how it works in well-functioning democracies. It’s there to serve the public interest: to have a relatively unbiased news outlet that’s accessible to all and without (or with little) commercial interest. It coexists with commercial news outlets.

[–] [email protected] 10 points 1 month ago

That’s because Bitwarden uses various methods to enable auto-fill in places where the native auto-fill capability of Android doesn’t work. See https://bitwarden.com/help/auto-fill-android/ for an explanation.

[–] [email protected] 2 points 1 month ago

Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics, and the RAM was insanely cheap. It just interests me what LLMs that can be run on such hardware are capable of. For example, Llama 3.2 3B needs only about 3.5 GB of RAM and runs at about 10 tokens per second; while it’s in no way comparable to the LLMs that I use for my day-to-day tasks, it doesn’t seem to be that bad. Llama 3.1 8B runs at about half that speed, which is a bit slow, but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.
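Those RAM numbers line up with a rough back-of-the-envelope estimate (the overhead figure is my assumption for context and runtime buffers, not a number from any specific runtime):

```python
def est_ram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    # weights (billions of params * bits / 8 = GB), plus a fixed
    # allowance for KV cache and runtime buffers (assumed; varies
    # with context length and runtime)
    return params_b * bits_per_weight / 8 + overhead_gb

print(round(est_ram_gb(3.2, 4.5), 1))  # ~2.8 GB for a ~3B model at 4.5 bits/weight
```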

I’ve got an old desktop with a pretty decent GPU in it with 24 GB of VRAM, but it’s collecting dust. It’s noisy and power hungry (older generation dual socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it to be turned on all the time due to the noise and heat in my home office, so I’ve not even tried running anything on it yet.

[–] [email protected] 10 points 1 month ago (2 children)

The only time I can remember 16 GB not being sufficient for me is when I tried to run an LLM that required a tad more than 11 GB and I had just under 11 GB of memory available due to the other applications that were running.

I guess my usage is relatively lightweight. A browser with a maximum of about 100 open tabs, a terminal, a couple of other applications (some of them electron based) and sometimes a VM that I allocate maybe 4 GB to or something. And the occasional Age of Empires II DE, which even runs fine on my other laptop from 2016 with 16 GB of RAM in it. I still ordered 32 GB so I can play around with local LLMs a bit more.
