h3ndrik

joined 1 year ago
[–] [email protected] 29 points 6 months ago (1 children)

Craigslist / Facebook Marketplace

[–] [email protected] 14 points 6 months ago* (last edited 6 months ago) (2 children)

I think the author is a bit late to the game. There are like 20 different forks of Mastodon that address exactly that, and some developers have already migrated to them. Additionally there are Misskey and Pleroma with their respective forks. Some of them are pretty active.

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago)

Hmm. They're dirt cheap, so that is a pro. I don't think they're made to withstand much mechanical stress, so they're good for internal connections but less so if you're constantly moving the wires around. There are beefier and more elaborate connectors available for that. But in my experience the JST connectors do their job well for normal electronics projects.

One thing to consider is the current rating. A quick googling tells me a common JST connector is rated for 3 Amps. That's not a lot: about 75 LEDs per connector to stay within the limit (given 5 V WS2812 RGB at full brightness, or ~220 LEDs if it's a 12 Volt strip; rough math sketched below). So if your LED strips aren't longer than that, I'd say you're fine.
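Back-of-the-envelope, in case you want to redo the numbers for your own strip. The per-LED figures (~40 mA for a 5 V WS2812 at full white, ~9.6 W/m at 60 LEDs/m for the 12 V strip) are my assumptions; check your datasheet:

```python
# How many LEDs a 3 A connector can feed -- per-LED figures are assumptions,
# check your strip's datasheet.
connector_rating_a = 3.0

# 5 V WS2812: roughly 40 mA per LED at full white
ws2812_current_a = 0.040
print(int(connector_rating_a / ws2812_current_a))    # -> 75 LEDs

# 12 V strip: e.g. 9.6 W/m at 60 LEDs/m -> 0.16 W per LED
watts_per_led = 9.6 / 60
print(int(connector_rating_a * 12 / watts_per_led))  # -> 225 LEDs, roughly the ~220 above
```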

But I'm not an expert on those things. I can't tell you whether to choose the SM family or another one... But:

The Wikipedia article says JST SM connectors are used in some LED strips...

(So, I'd use them. But they're not "the best solution". They're the minimum to do an alright job: they make sure you can't connect them backwards etc. and, apart from that, they're made to be as cheap as possible. The best would probably be some high-quality German-engineered product or something like that (the country doesn't really matter...))

[–] [email protected] 9 points 6 months ago* (last edited 6 months ago) (2 children)

JST connectors? They're fairly common in all sorts of electronics.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (6 children)

I'm sorry, but now this is getting completely wrong...

Read the first paragraph of the Wikipedia article on machine learning or the introduction of any of the literature on the subject. The "generalization" includes that model-building capability. They go into a bit of detail later. They specifically mention "to unseen data". And "learning" is also there. I don't think the Wikipedia article is particularly good at explaining it, but at least the first sentences lay down what it's about.

And what do you think language and words are for? To transport information. There are semantics... Words have meanings. They name things, abstract and concrete concepts. The word "hungry" isn't just a funny accumulation of lines and arcs which statistically get followed by other specific lines and arcs... There is more to it. (A meaning.)

And this is what makes language useful. And the generalization and prediction capabilities are what make ML useful.

How do you learn as a human, if not from words? I mean, there are a few other possibilities. But an efficient way is to use language. You sit in school or uni and someone at the front of the room speaks a lot of words... You read books and they also contain words?! And language is super useful. A lion mother also teaches her cubs how to hunt, without words. But humans have language, and it's really a step up in what we can pass down to following generations. We record knowledge in books, can talk about abstract concepts, feelings, ethics, theoretical concepts. We can write down how gravity and physics and nature work, just with words. That's all possible with language.

I can look up whether there is a good article explaining how learning concepts works and why that's the fundamental thing that makes machine learning a field of science... I mean, ultimately I'm not a science teacher... And my literature is all in German and I returned it to the library a long time ago. Maybe I can find something.

Are you by any chance familiar with the concept of embeddings, or vector databases? I think that showcases that it's not just letters and words in the models. These vectors / embeddings that the input gets converted to match concepts. They point at the concept of "cat" or "presidential speech". And you can query these databases: point at "presidential speech" and find a representation of it in that area. Store the speech with that key and find it later on by querying what Obama said at his inauguration... That's oversimplified, but maybe it visualizes a bit better that it's not just letters and words that get stored in the models, but the actual meanings. Words get converted into a (multidimensional) vector space and the model operates there. These word representations are called "embeddings", and transformer models, which are the current architecture for large language models, use these word embeddings.
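(If you want to see the idea in code: here's a toy sketch with made-up 3-dimensional vectors and a cosine-similarity lookup. Real embeddings come from a trained model and have hundreds or thousands of dimensions, so the numbers here are purely illustrative.)

```python
import numpy as np

# Toy "vector database": texts stored under made-up embedding vectors.
db = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "presidential speech": np.array([0.0, 0.9, 0.7]),
}

def cosine(a, b):
    # similarity of two vectors, 1.0 = pointing in the same direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend the query "what Obama said at his inauguration" got embedded
# into the same space -- we then look for the nearest stored *meaning*,
# not the nearest spelling.
query = np.array([0.1, 0.85, 0.75])
best = max(db, key=lambda k: cosine(db[k], query))
print(best)  # -> "presidential speech"
```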

Edit: Here you are: https://arxiv.org/abs/2304.00612

[–] [email protected] 10 points 6 months ago* (last edited 6 months ago)

Also the software needs to be efficient: use less RAM and fewer CPU cycles. And I don't think the ActivityPub protocol in itself is very efficient. I'd like to see those aspects compared to older federated technologies like NNTP or email.

But I'd agree with the points at the top. Content should get compressed and cached on demand: neither transferred every time from the original instance, nor transferred without a user ever viewing it. Caching on demand or a DHT (P2P) storage backend could do that.

[–] [email protected] 14 points 6 months ago (1 children)

https://fediverse.info/explore/projects

There are a few projects that give some ideas a new spin. Most of them are about microblogging or alternative platforms for existing concepts, though.

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago) (1 children)

Hmm, don't you measure something like that by reach and view counts? They have 14,000 views. And if the 84 GB of storage is correct, I'd say you could host that for €7 a month. It's not quite clear to me how you'd do the cost/benefit analysis here...

Not letting other people publish makes everything easier. Then it doesn't look as if third-party videos were endorsed by the EU. And you save a lot of staff costs for moderation, support etc., which would be the expensive part of such a platform.

And thanks to federation, users can perfectly well use other instances, while this one simply focuses on EU content.

[–] [email protected] 2 points 6 months ago

iptables or nftables. Or firewalld depending on the Linux distro and version you use.

Sometimes the Arch Wiki has good info on specific configurations. I mean, it's not that easy to write firewall rules on the command line, but it's not rocket science either; a minimal example is sketched below.
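For example, a minimal nftables setup (the allowed ports 22/80/443 are just placeholders; adjust to whatever services you actually run):

```
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport '{ 22, 80, 443 }' accept
```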

[–] [email protected] 5 points 6 months ago* (last edited 6 months ago) (3 children)

Hmm. It's basically just a VPN. It tunnels your traffic and terminates it at some server with those IPs. It's just that NordVPN etc. make you share an IP with other users and don't offer port forwarding. But the rest of Hoppy isn't necessarily unique; it's just a specific configuration of a VPN.

I rented a VPS and installed WireGuard myself, and created the firewall rules to forward (some) incoming traffic to my home server. That's the same thing Hoppy does, just that Hoppy does the setup of the firewall and WireGuard for you.

But I'm not aware of any similar services that do it automatically. Maybe something like pagekite.net comes close.

So I don't know if that's the correct solution to what you're doing, but I'd say one alternative would be to rent any small server, install WireGuard both there and on the RasPi, connect them, and configure WireGuard on the RasPi so all outgoing traffic goes through the tunnel. And then configure the like 3 firewall rules on the VPS to make it forward incoming traffic to the RasPi (rough sketch below).
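Roughly what that looks like on the VPS side. The keys, the 10.0.0.x tunnel addresses and eth0 / port 80 are placeholders for your own setup:

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the RasPi at home
PublicKey = <raspi-public-key>
AllowedIPs = 10.0.0.2/32
```

And the roughly three firewall rules (iptables variant; repeat the DNAT rule for each port you want forwarded, and enable net.ipv4.ip_forward in sysctl):

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2
iptables -A FORWARD -i eth0 -o wg0 -d 10.0.0.2 -j ACCEPT
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```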

[–] [email protected] 0 points 6 months ago

I'd say there are more reasons not to breed animals, kill them and eat their flesh... I mean avoiding pandemics is nice, too. But that's not the first reason that comes to my mind 😆

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago)

SXMO?

I think the closest thing to your idea is speech recognition and an AI assistant; you can give it commands that way.

I don't think there's much that works with fat fingers on a touchscreen but without a graphical UI.

You could buy an old Nokia from the 90s with lots of text menus. That won't speed things up, but it's certainly fewer icons, more text, and as a bonus you can feel the physical buttons without looking at the phone.

Or a BlackBerry with a QWERTY keyboard on it. Or use convergence, attach a proper keyboard via USB, and install Termux.

Theoretically you could have it project a "holographic" keyboard onto the desk in front of you. Or use VR glasses.

Or hold it sideways and type with 8 fingers simultaneously alike on a stenotype keyboard.

Those would be ways to improve on the keyboard / input method and allow you to use the CLI in its current form. I mean, the CLI itself is already available; it's just cumbersome to use. I'd say a speech assistant is more like it if you want an entirely different concept and not just a better keyboard and/or larger screen.
