this post was submitted on 18 Mar 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Feel like you want to sneer about something but you don't quite have a snappy post in you? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post, there’s no quota here and the bar really isn't that high

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] [email protected] 0 points 8 months ago (3 children)

Vernor Vinge, patron saint of the singularity and noted winner of a couple of libertarian fiction awards, has passed. HN wanted a black bar[1] but were denied. They compensated by posting a lot of bad takes.

Submission: https://news.ycombinator.com/item?id=39775304

'jart (aka Justine) has this to say about the impending Singularity (feat. Trump and Twitter!):

https://news.ycombinator.com/item?id=39777929

This hackernews wishes he had been frozen, to be thawed in the future (for some reason, no-one expects the future to be Idiocracy):

https://news.ycombinator.com/item?id=39776217


[1] when a notable person in CS dies there's sometimes a black bar under the HN header

[–] [email protected] 0 points 8 months ago

The replies to somebody aggressively (and downvoted for it) pointing out some of the flaws in jart's post are bad. Damn.

an LLM cannot be used to create a better LLM

By that logic most humans are also not intelligent.

No you dweeb, they are talking about model collapse, that thing what happens to this 90's tech.

Oh, it doesn't work? That's because IT'S NOT INTELLIGENT.

Ok, let's run this test of "real intelligence" on you. We eagerly await to see your model. Should be a piece of cake.

This is both a weird ad hom and a god-hiding-in-the-gaps style argument. (While I have some sympathy for this Peter Watts-style argument, it is incredibly weak; their post history (8) is more of this very weak stuff.)

[–] [email protected] 0 points 8 months ago (3 children)

Ah damn, Justine has my respect for a bunch of cool and interesting stuff but this is just embarrassing, unless I'm missing some extremely dry satire.

[–] [email protected] 0 points 8 months ago (1 children)

Justine Tunney, the literal neoreactionary?

[–] [email protected] 0 points 8 months ago (1 children)

I had been mercifully spared from her nontechnical opinions until now.

[–] [email protected] 0 points 8 months ago

Yeah I can tell. So much for that respect.

[–] [email protected] 0 points 8 months ago

@bitofhope @gerikson

She wants/wanted the US government to be replaced with a Silicon Valley CEO.

[–] [email protected] 0 points 8 months ago (1 children)

That’s how she’s got a lot of people’s respect, with them being unaware of the other shit. There’s some history going way back to OWS, recommendations that people read Moldbug, and other off-colour shit. I don’t have a link handy immediately but it shouldn’t be too hard to find

[–] [email protected] 0 points 8 months ago (1 children)

Fuck, that's disappointing. I remember being quite impressed with her "a truly universal binary execution format" blog post detailing a way to compile C code into a sort of self-executing portable archive (unless I'm mixing things up - she is the person running justine.lol, is she not?).

Thanks for sharing those articles.

[–] [email protected] 0 points 8 months ago

oh she's very good technically! also a Nazi

[–] [email protected] 0 points 8 months ago (2 children)

fucking christ. it takes a lot to fuck up my day, but a quick scroll through that thread seeing how quick these vultures (including one notable person who’s the reason why I’m ashamed to talk about my lambda calculus projects) are trying to capitalize on Vernor’s legacy is absolutely doing it

HN wanted a black bar[1] but were denied.

why in the fuck? is the famous sci-fi author with a heavy CS background not notable enough for the standards of the site whose creator is a much less notable self-help author whose CS background is failing to make a working Lisp 3 times and writing programming textbooks nobody reads?

[–] [email protected] 0 points 8 months ago (1 children)

What made me mad was them referring to the Deep* books as "hard SF". Arguably A Deepness... could be, as it's set in the Slow Zone so FTL travel is impossible, but A Fire... is classic space opera.

[–] [email protected] 0 points 8 months ago

right? it’s a weird combination of these folks never engaging with the work they pretend to celebrate and trying to pretend that their AI fantasy will turn real life into a space opera. it’s fucking awful

[–] [email protected] 0 points 8 months ago (1 children)

can you please talk more about your lambda calculus projects?

[–] [email protected] 0 points 8 months ago (2 children)

sure! there was a little bit about it in the first stubsack and I posted a bit more about it in this thread on masto (with some links to papers I’ve been reading too, if you’d like to dig into the details on anything)

overall what I’m working on is a hardware implementation of a Krivine machine, which uses Tromp’s prefix code bitstream representation of binary lambda calculus as its machine language and his monadic IO model to establish a runtime environment. it isn’t likely to be a very efficient machine by anyone’s standard, but I really like working with BLC as a pure (and powerful) form of computational math, and there’s something pleasant about the way it reduces down to a HDL representation (via the Amaranth HDL in this case). there’s a few subprojects I’ve been working on as part of this:

  • the basic HDL implementation targeting open source FPGA synthesis and simulation
  • a hardware closure allocator and garbage collector
  • an assembler to convert lambda calculus expressions into their binary form (which starts to resemble ML with a bunch of high level capabilities, with very little code either in the assembler or in ROM on the device — that’s one part of what makes the work interesting)
  • a lazy version (Krivine machines are call-by-name, which is almost there, and the missing pieces needed for lazy evaluation look a lot like a processor cache but with more structure)
  • I have the intuition that the complete Krivine machine will be fairly light on FPGA resources, so I’d like to see how many I can synthesize onto one core with parallelism primitives, FIFOs, and routing included
  • lambda calculus machines can do arithmetic and high-level logic without an ALU, which is neat but extremely inefficient. I have some basic plans sketched up for an arithmetic unit that’d allow for a much more cycle and memory efficient representation of integers and strings, and a way to derive closures from them
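to make the bitstream format concrete, here's a rough sketch of Tromp's BLC encoding in Python (the tuple representation is just for illustration here, not what my assembler actually uses): abstractions encode as 00, applications as 01, and de Bruijn variable i as i ones followed by a zero.

```python
# Illustrative sketch of Tromp's binary lambda calculus (BLC) encoding.
# Terms use 1-based de Bruijn indices, represented as nested tuples:
# ("lam", body), ("app", fun, arg), ("var", i). The tuple shape is an
# assumption for this sketch, not the real assembler's input format.

def blc_encode(term):
    """Encode a de Bruijn lambda term as a BLC bitstring."""
    tag = term[0]
    if tag == "lam":                      # abstraction: 00 <body>
        return "00" + blc_encode(term[1])
    if tag == "app":                      # application: 01 <fun> <arg>
        return "01" + blc_encode(term[1]) + blc_encode(term[2])
    if tag == "var":                      # variable i: i ones, then a zero
        return "1" * term[1] + "0"
    raise ValueError(f"unknown term: {term!r}")

identity = ("lam", ("var", 1))            # λx.x
k_comb = ("lam", ("lam", ("var", 2)))     # λx.λy.x
print(blc_encode(identity))               # 0010
print(blc_encode(k_comb))                 # 0000110
```

the fun part is how small everything comes out: the identity function is four bits.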

I’ve been working on some of this on paper as a sleep aid for a while, but I’m finally starting on what’s feeling like a solid HDL implementation. let me know if you want more details on any of it! some of the more far off stuff is really just a mental sketch, but writing it out will at least help me figure out what ideas still make sense when they’re explained to someone else

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

I have a scattered interest in lambda calculus too so I'd love to follow this project. Tromp's BLC definitely hits a sweet spot of complexity/size when it comes to describing computation in a way that's deeply satisfying.

Have you looked into interaction nets/other optimal beta-reduction schemes (there's a project out there called HVM)? Probably way too high level for now though. I am fascinated by the possibility of these algorithms making church-representations more asymptotically efficient (or even balanced ternary)

[–] [email protected] 0 points 8 months ago (1 children)

I have a scattered interest in lambda calculus too so I’d love to follow this project. Tromp’s BLC definitely hits a sweet spot of complexity/size when it comes to describing computation in a way that’s deeply satisfying.

exactly! it’s such a cool way to write a program, and it’s so much more satisfying than writing assembly for a von Neumann (or any load/store) machine. have you checked out LambdaLisp? it’s one of my inspirations for this project — it’s amazing that you can build a working Lisp interpreter out of BLC, and understanding how that was done taught me so much about Lisp’s relationship with lambda calculus.

I plan to release my HDL as a collaborative project once I’ve got enough done to share out. currently I’ve got the HDL finished for the combinational circuit that makes bitstream BLC processing efficient with word-oriented memory hardware, and I’m doing debugging on the buffer that grabs words from memory and offsets them if they represent a term that isn’t word-aligned (which is a pretty simple circuit so I’m surprised I’ve managed to implement so many bugs). there’s quite a bit left to go! IO is still a sticking point — I know how I want to do it, but I can’t quite imagine how memory and runtime state will look after the machine reads or writes a bit.
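as an aside, the unaligned-fetch problem looks roughly like this in software (a toy sketch with 8-bit MSB-first words, nothing like the actual HDL, but it shows why the buffer has to shift and splice):

```python
# Toy model of pulling an unaligned bit run out of word-oriented memory:
# BLC terms are arbitrary-length bitstrings, but memory hands back fixed
# width words, so a fetch unit has to index into and across word
# boundaries. Word width and bit order here are assumptions for the sketch.

def read_bits(words, bit_offset, count, word_bits=8):
    """Return `count` bits starting at absolute `bit_offset`, MSB-first."""
    out = []
    for i in range(bit_offset, bit_offset + count):
        w, b = divmod(i, word_bits)        # which word, which bit within it
        out.append((words[w] >> (word_bits - 1 - b)) & 1)
    return out

# bits 2..7 of 0b00101101 are 101101
print(read_bits([0b00101101, 0b10000000], 2, 6))  # [1, 0, 1, 1, 0, 1]
```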

Have you looked into interaction nets/other optimal beta-reduction schemes (there’s a project out there called HVM)?

that seems awesome! I really like that it can do auto-parallelization, and I want to check out how it optimizes lambda terms. for now my machine model is a pretty straightforward Krivine machine with some inspiration taken from the Next 700 Krivine Machines paper, which seems likely to yield a machine that can be implemented as circuitry. that paper decomposes Krivine-like machine models down into combinators, which can be seen as opcodes, microinstructions, or (in my case) operations that need to be performed on memory during a particular machine state.

once I’ve got the basic machine defined, I’d like to come back to something like HVM as a higher performance lambda calculus machine and see what can be adopted. one of their memory invariants in particular (the guarantee that each closure is only used once) maps really well to my mental model of what I imagine a hardware parallel lambda calculus machine would be like
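for anyone following along, the textbook call-by-name Krivine machine is tiny; here's a sketch in Python (this is the standard term/environment/stack machine, not my HDL design, and the 0-based tuple encoding is just for illustration):

```python
# Minimal call-by-name Krivine machine. State is (term, env, stack);
# terms are 0-based de Bruijn tuples: ("lam", body), ("app", f, a),
# ("var", i). Environments hold closures (term, env).

def krivine(term):
    """Reduce a closed term to weak head normal form."""
    env, stack = (), []
    while True:
        tag = term[0]
        if tag == "app":                  # push the argument as a closure
            stack.append((term[2], env))
            term = term[1]
        elif tag == "lam":
            if not stack:                 # nothing left to apply: WHNF
                return term, env
            env = (stack.pop(),) + env    # bind top closure, enter the body
            term = term[1]
        else:                             # "var": jump into the bound closure
            term, env = env[term[1]]

I = ("lam", ("var", 0))                   # λx.x
K = ("lam", ("lam", ("var", 1)))          # λx.λy.x
term, _ = krivine(("app", ("app", K, I), ("lam", ("lam", ("var", 0)))))
print(term)                               # ('lam', ('var', 0)), i.e. K I B reduces to I
```

the whole machine is three transition rules, which is part of why it reduces down to hardware so pleasantly.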

[–] [email protected] 0 points 8 months ago (1 children)

I found LambdaLisp from your mastodon post and was immediately intrigued. I'm going to try and run it to get a better understanding of how the IO system works, and maybe even cook up my own BLC interpreter to run it! The hardware stuff is definitely out of my depth, but this may be a great chance to learn.

[–] [email protected] 0 points 8 months ago

that’s a great idea! the only BLC VMs I know of are written in a very obscure style (Tromp’s especially — his first interpreter was an entry into the International Obfuscated C Code Contest and he only posted the (relatively) unobfuscated one later) and I think there’s plenty of room for something written to be more comprehensible. I’m also not aware of any VM that implements call-cc from Krivine’s original paper, which has interesting applications. and of course, all the Krivine machines I know are relatively slow and very memory-inefficient — but there’s low hanging fruit here that can make things better.

one thing I might take on is implementing a visual Krivine machine — something with a GUI that shows its current state and a graph of all the closures in memory. that would be a big boon for my current work, and I might see if I could graft something like that onto the simulation testbench for my HDL implementation.

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago)

for anyone who’s fucking lost reading the above (I can’t blame ya), lambda calculus is the mathematical basis behind functional programming. this is a fun introduction. the only things you can do in lambda calculus are define functions, name variables, and apply functions to other functions or variables (which substitutes the variables for whatever they’re being applied to and eliminates the function). that’s all you need to represent every possible computer program, which is amazing

a Krivine machine is a machine for doing what the alligators in that intro are doing, automatically — that is, reducing down lambda functions until they can’t be reduced anymore and produce a final value. that process is computation, so a Krivine machine is a (rather strange) computer
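to make the "functions are all you need" claim concrete, here's the same idea with Python lambdas standing in for lambda calculus (a sketch: Church-encoded booleans and numerals, built from nothing but function definition and application):

```python
# Church encodings in Python lambdas: data as pure functions.
# A boolean picks one of two arguments; a numeral n applies f n times.

true  = lambda a: lambda b: a
false = lambda a: lambda b: b

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral for display by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(add(two)(two)))   # 4
print(true("yes")("no"))       # yes
```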