Programmer Humor

[–] [email protected] 165 points 6 months ago (4 children)

Well, do you have dedicated JSON hardware?

[–] [email protected] 15 points 6 months ago
[–] [email protected] 8 points 6 months ago

There were XML DOM accelerators for a while. Might still be out there.

[–] [email protected] 106 points 6 months ago (2 children)

Please no, don't subsidize anything JavaScript. It will only make it less efficient.

[–] [email protected] 61 points 6 months ago (5 children)

And thus the JsPU was born from Lemmy.

[–] [email protected] 3 points 6 months ago

I'll take 10. Here is a picture of a goldfish as payment.

[–] [email protected] 2 points 6 months ago

I love the idea that we could need a dedicated chip to parse web pages 😂

[–] [email protected] 7 points 6 months ago (1 children)
[–] [email protected] 4 points 6 months ago

Slap a liquid cooler on it and you're cooking at a high-speed 2.08 GHz.

[–] [email protected] 21 points 6 months ago (1 children)
[–] [email protected] 7 points 6 months ago

Modern sites: This page requires a JsPU to run which is not present on this system. The website will run in reduced feature mode.

[–] [email protected] 2 points 6 months ago
[–] [email protected] 26 points 6 months ago (3 children)

My thoughts on software in general over the past 20 years. So many programs are written inefficiently, in 4th-generation languages, and just eat up any CPU/memory gains. (Less soapbox and more a curious "what if": how fast would things be if we still wrote highly optimized programs?)

[–] [email protected] 19 points 6 months ago

Even if our apps used resources better, the adware would just use up the freed space.

[–] [email protected] 36 points 6 months ago* (last edited 6 months ago)

Answer: there'd be far less software in the world, it would all be more archaic and less useful, and our phones and laptops would just sit at 2% utilization most of the time.

There's an opportunity cost to everything, including fussing over whether that value can be stored as an int instead of a double to save a few bytes of space. High-level languages let developers express their feature and business logic faster, with fewer bugs, and with much lower ongoing maintenance costs.

[–] [email protected] 11 points 6 months ago

I fully concur. There's a ton of really inefficient software out there that wastes resources simply because, for a long time, available resources grew fast enough that applications could keep using more of them without getting noticeably slower. If we didn't have so many lazy SW devs, I suspect the reduction in needed CPU cycles would have a measurable positive effect on climate change.

[–] [email protected] 25 points 6 months ago (2 children)
[–] [email protected] 24 points 6 months ago (3 children)

The R in ARM and RISC is a lie.

[–] [email protected] 5 points 6 months ago

Nope, it's still a register-register op; that's very much load-store architecture.

It's reduced, not minimalist; otherwise every RISC CPU out there would have only one instruction, like decrement-and-branch-if-nonzero, and RISC-V would not have an extension mechanism. The instruction exists because it makes things faster: you don't have to do manual bit-fiddling across 10 instructions to achieve something the already-existing ALU logic can do in a single cycle. It isn't even JavaScript-specific (or terribly relevant to JSON); it's a specific float-to-int cast with a specific rounding and overflow mode. Would it be more palatable to your taste if the CPU did macro-op fusion on 10(!) instructions to get the same result?
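
(For the curious: the cast in question is ARM's FJCVTZS, which does JavaScript's double-to-int32 conversion in one instruction. A rough C sketch of those semantics, my own illustration rather than ARM's pseudocode: truncate toward zero, wrap modulo 2^32, reinterpret as signed, with NaN and infinities going to 0.)

```c
#include <math.h>
#include <stdint.h>

/* Illustration only: JavaScript-style ToInt32 conversion, i.e. the
 * semantics FJCVTZS provides in a single instruction. */
static int32_t js_to_int32(double x)
{
    if (isnan(x) || isinf(x))
        return 0;                          /* NaN and +/-inf map to 0 */
    double t = trunc(x);                   /* round toward zero */
    double m = fmod(t, 4294967296.0);      /* wrap modulo 2^32 ... */
    if (m < 0.0)
        m += 4294967296.0;                 /* ... into [0, 2^32) */
    uint32_t u = (uint32_t)m;
    /* Reinterpret as a signed 32-bit value without relying on
     * implementation-defined narrowing. */
    return (u >= 0x80000000u) ? (int32_t)(u - 0x80000000u) + INT32_MIN
                              : (int32_t)u;
}
```

Doing that with ordinary conversion instructions is exactly the multi-instruction fix-up dance being described.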

[–] [email protected] 15 points 6 months ago* (last edited 6 months ago)

The website title says "Arm Developer", not "ARM Developer", in a clearly non-acronym way, so it's a guide for making prosthetic hardware. Of course you want a cyborg arm to parse JS natively; why else even get one?

[–] [email protected] 7 points 6 months ago

Lie starts with L, dummy

[–] [email protected] 5 points 6 months ago (1 children)

At this point ARM is a CISC architecture

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago) (2 children)

No, that's not what RISC is about. There were some early attempts to keep the number of instructions low (originally, ARM didn't have a multiply instruction, and there are still plenty of microcontrollers you can buy that don't have a divide instruction), but that was quickly abandoned because it's just not that useful: it only holds back instructions that optimize common cases. Your compiler can implement multiplication by doing addition in a loop, but that's not very efficient.
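
(For illustration: the fallback a compiler's runtime library uses on a core with no multiply instruction is a shift-and-add loop rather than literal repeated addition. A minimal C sketch, with names of my own choosing:)

```c
#include <stdint.h>

/* Illustrative software multiply: roughly 32 iterations of shift-and-add
 * instead of a single hardware MUL. Result is correct modulo 2^32. */
static uint32_t soft_mul32(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1u)
            result += a;   /* add the shifted multiplicand for each set bit */
        a <<= 1;
        b >>= 1;
    }
    return result;
}
```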

What really stuck was the separation between memory access and computation. You don't have an ADD instruction whose operands can come from either registers or main memory. You have a MOV instruction that fetches from memory into a register, and you have an ADD instruction that works only on registers.

ARM still does this just fine.

[–] [email protected] 2 points 6 months ago

Someone confusing load-store with RISC again.

[–] [email protected] 8 points 6 months ago* (last edited 6 months ago) (1 children)

I'm a computer engineering major (still a student, tbf); I'm well aware of the difference between CISC and RISC. I was making a joke.

Also, I understand your point, but you should know that a load-store architecture and a RISC instruction set are not the same thing. The vast majority of RISC ISAs are load-store, but not all load-store architectures are RISC.

[–] [email protected] 9 points 6 months ago (1 children)

http://www.quadibloc.com/arch/sriscint.htm

The RISC architecture contains several common elements. Some of them are no longer present in most chips that still call themselves RISC:

  • All instructions execute in a single cycle.
  • Floating-point operations, specifically, are therefore excluded.

But most of the defining characteristics of RISC do remain in force:

  • All instructions occupy the same amount of space in memory.
  • Only load, store, and jump instructions directly address memory. Calculations are performed only between operands in registers.

https://groups.google.com/g/comp.arch/c/IZP5KUJprHw?pli=1

MOST RISCs:
3a) Have 1 size of instruction in an instruction stream
3b) And that size is 4 bytes
3c) Have a handful (1-4) addressing modes (* it is VERY hard to count these things; will discuss later).
3d) Have NO indirect addressing in any form (i.e., where you need one memory access to get the address of another operand in memory)
4a) Have NO operations that combine load/store with arithmetic, i.e., like add from memory, or add to memory. (note: this means especially avoiding operations that use the value of a load as input to an ALU operation, especially when that operation can cause an exception. Loads/stores with address modification can often be OK as they don't have some of the bad effects)
4b) Have no more than 1 memory-addressed operand per instruction
5a) Do NOT support arbitrary alignment of data for loads/stores
5b) Use an MMU for a data address no more than once per instruction
6a) Have >=5 bits per integer register specifier
6b) Have >= 4 bits per FP register specifier

Note that none of this has to do with reducing the number of instructions, which is what people tend to think of when they hear the name.

[–] [email protected] 2 points 6 months ago

"All instructions occupy the same amount of space in memory."

Both ARM and RISC-V have compressed instructions. Dunno how ARM does it, but with RISC-V the 16-bit instruction set is freely interspersable with the 32-bit one, whose alignment also gets reduced to 16 bits. That gets you something like 95% of the space reduction possible with fully variable-width instructions without overcomplicating the insn decoder.
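
(If you're wondering how the decoder mixes the two sizes cheaply: in RISC-V, only the low two bits of the first halfword determine the length. A tiny C sketch of my own, ignoring the reserved longer-than-32-bit encodings:)

```c
#include <stdbool.h>
#include <stdint.h>

/* RVC length rule: a first halfword whose low two bits are not 0b11
 * is a 16-bit compressed instruction; 0b11 means 32-bit (or longer,
 * which we ignore here). */
static bool is_compressed(uint16_t first_halfword)
{
    return (first_halfword & 0x3u) != 0x3u;
}

static unsigned insn_length_bytes(uint16_t first_halfword)
{
    return is_compressed(first_halfword) ? 2u : 4u;
}
```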

As for addressing that combines loads with arithmetic: no such instructions, but all but the tiniest CPUs are expected to do macro-op fusion for things like indexed loads. Here's an overview.

The MMU thing... well, the vector extension can do gather/scatter. I guess that could stay within the letter of "use the MMU once per instruction", but definitely not the spirit.