vcmj

joined 1 year ago
[–] [email protected] 4 points 4 days ago

Surprisingly, just setting the systemd flag in the WSL settings worked, though for a long time I simply didn't use systemd.
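For reference, the flag I mean lives in /etc/wsl.conf inside the distro (going from memory here, so check the WSL docs for your version):

```ini
# /etc/wsl.conf -- inside the WSL distro, not on the Windows side
[boot]
systemd=true
```

Then a `wsl --shutdown` from Windows and reopening the distro picks it up.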

[–] [email protected] 13 points 5 days ago* (last edited 5 days ago) (3 children)

I use Arch in WSL, BTW. This is not a joke, it's actually quite nice.

[–] [email protected] 0 points 2 months ago

It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. Like it's actually possible for an AI to get this answer consistently correct these days.
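Something like this, where the word and letter are just the classic stand-ins, not necessarily what was being counted here:

```python
# Toy version of the script an LLM with a code tool could write for itself.
word = "strawberry"  # hypothetical example word
letter = "r"         # hypothetical target letter
print(f"'{letter}' appears {word.count(letter)} times in '{word}'")  # -> 3
```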

[–] [email protected] 0 points 3 months ago (1 children)

Godot does have a dedicated feature for mesh instancing (MultiMesh), and I think per-instance variations were possible as well, like differently colored triangles maybe? https://docs.godotengine.org/en/stable/tutorials/performance/vertex_animation/animating_thousands_of_fish.html

[–] [email protected] 9 points 4 months ago (1 children)

The way I understand it, the users didn't necessarily realize McAfee was responsible, just that a bunch of SQLite files appeared in temp, so they might not connect the dots here anyway. Or even know McAfee is installed, considering their shady practices.

[–] [email protected] 2 points 7 months ago

I do think we're machines; I said so previously. I don't think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right. I don't see why it needs to be more, but again, all personal opinion.

[–] [email protected] 5 points 7 months ago (4 children)

I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don't know what you actually mean.

[–] [email protected] 6 points 7 months ago (7 children)

Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense... that's a whole other kettle of fish). I'd consider all "thinking things" machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has the potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

I'm still working on this definition, again just a personal viewpoint.
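A toy sketch of the distinction, with made-up classes purely for illustration (this is not how LLMs are actually implemented):

```python
class FrozenModel:
    """Always responds to the same input the same way: non-sentient by this definition."""

    def respond(self, prompt: str) -> str:
        return f"echo: {prompt}"  # no internal state ever changes


class AdaptiveModel:
    """Every input leaves an irreversible mark that shapes future responses."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def respond(self, prompt: str) -> str:
        self.history.append(prompt)  # irreversible change on receiving input
        return f"echo after {len(self.history)} inputs: {prompt}"
```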

[–] [email protected] 0 points 8 months ago

Not sure if this is the right answer, as I'm not familiar with that ecosystem: they have comparisons on their site.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)