8GB of system RAM is enough for a low-end system (especially with Linux), and 8GB of VRAM is enough for 1080p gaming.
RAM on phones is ok, though.
What does 1GB of cache look like?
That’s a lot of cache! For a new battery :P
CPU or SSD cache?
CPU
I always thought it would be funny running an OS from a USB stick.
Never would I have thought there'd be stick-sized storage exceeding the default configuration of a desktop PC.
2TB in one small NVMe drive?! Wtf. Amazing but also crazy.
You should check out Linux live USBs from nearly 2 decades ago then.
When my dad first saw an NVMe drive he had to triple-check what he was looking at, because in his old 70s computer brain there's no fucking way something so small and unmoving can hold so much data, read/write it so fast, and all for a relatively cheap price.
8GB of Atari 2600 games
Generally there's an inverse relationship between size and speed. An 8GB cache would also be super slow, thus defeating the purpose of the cache. If it were that easy, every CPU would have a huge cache.
Not really: if you're putting that much memory on the physical chip, it will be fast because it's close by. It's just that we can't fit that much on a chip right now.
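You can actually measure that size/latency staircase yourself. Here's a rough pointer-chasing sketch in C (a minimal sketch; the buffer sizes and iteration count are arbitrary picks, not tuned for any particular CPU). Each load depends on the previous one, so the average time per load jumps whenever the working set outgrows a cache level:

```c
// Pointer-chasing latency sketch: average ns per dependent load as the
// working set grows past each cache level. Sizes/iterations are arbitrary.
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t n_ptrs, size_t iters) {
    void **buf = malloc(n_ptrs * sizeof(void *));
    size_t *idx = malloc(n_ptrs * sizeof(size_t));
    if (!buf || !idx) { perror("malloc"); exit(1); }

    // Build a random cyclic permutation so the hardware prefetcher can't help.
    for (size_t i = 0; i < n_ptrs; i++) idx[i] = i;
    for (size_t i = n_ptrs - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n_ptrs; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n_ptrs]];

    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                 // each load depends on the last
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    volatile void *sink = p; (void)sink; // keep the loop from being optimized out
    free(buf); free(idx);
    return ns / iters;                   // average ns per dependent load
}

int main(void) {
    // Working sets from 16 KiB (fits in L1) up to 64 MiB (spills into DRAM).
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 4) {
        size_t n = kb * 1024 / sizeof(void *);
        printf("%6zu KiB: %.1f ns/load\n", kb, chase(n, 20 * 1000 * 1000));
    }
    return 0;
}
```

Compile with something like `cc -O2 chase.c` and you should see a few ns per load while everything fits in L1, steps up around the L2 and L3 sizes, and a much bigger jump once you're into DRAM territory.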
The first hard drive I got had 20MB and it was glorious.
I had a conspiracy theory that it was trying to communicate with me using Morse code, but I was too lazy to learn it.
The first one I used was 5MB. The OS on the machine (a CP/M version) didn't know how to handle it, so it was partitioned as lots and lots of floppies. Not very useful.
How about the other way around?
So I can boot up without a disk now?
8GB of (internet) bandwidth.
8GB/s, or 8GB per month.
I have 3GB of VRAM.
I'm on 2 lol
I remember when this applied to 8kB.
I have an 8GB ATA storage drive on my desk… wonder if it still works.
Still remember my first 500MB drive, thought I would never manage to fill it up
I remember being thrilled to move from floppies to a 16MB flash drive for my school assignments, even if I did have to constantly download and reinstall the USB Mass Storage drivers for the Windows 98 SE computers in the library, which reset every night. And the transfer speed was SLOW.
The fact that you can get a terabyte flash drive now, which can hold 62,500 of my school assignment drives, is mind blowing to me.
I always wanted the Zip drives with 250MB capacity.
dying in 8GB unified RAM intensifies
Noone will ever need more than 640k of RAM
- no one
Achshully, you're right
8GB of registers.
What it feels like moving from x86 to ARM
The first computer I bought had eight megs of RAM.
I remember being thrilled with a 20 meg SCSI hard drive I got as a kid.
Mine got upgraded to a full meg.
The meme doesn't make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles by reading directly from RAM and not having a cache at all…
Slow? Not necessarily.
The main issue with that much memory is the data routing and the physical locality of the memory. Assuming you (somehow) could shrink down the distance from the cache to the registers, and had wide enough data/request lines, you could have data from such a cache in ~4 cycles (assuming L1 and a hit).
What slows down memory for L2 is the wider address space and slower residence checks. L3 gets a bit slower still because of an even wider address space, but it also has to deal with concurrency issues, since it's shared among cores. It also ends up being slower because it physically has to be further away from the cores due to its size.
If you ever look at a CPU die, you'll see that L1 caches are generally tiny and embedded right in the center of the processor, L2 tends to be bolted onto the sides of the physical cores, and L3 tends to be the largest chunk of silicon real estate on the package. This all contributes to the increasing fetch latency at each layer, along with the fact that you have to check the closest layers first (an L3 hit, for example, means the CPU checked L1 and L2 and missed at both, which takes time, so L3 access will always be at least the L1 + L2 times).
I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.
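To make "tag lookups, address decoding" a bit more concrete, here's a toy decode of an address for a set-associative cache. The geometry (32 KiB, 8-way, 64-byte lines, roughly a typical L1d) is an assumption for illustration, not any specific chip:

```c
// Toy address decode for a set-associative cache: split an address into
// tag / set index / line offset. Geometry below is an assumed typical L1d.
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES   64u                                  // bytes per cache line
#define WAYS         8u                                   // associativity
#define CACHE_BYTES  (32u * 1024u)                        // total capacity
#define NUM_SETS     (CACHE_BYTES / (LINE_BYTES * WAYS))  // = 64 sets

int main(void) {
    uint64_t addr   = 0x7ffdeadbeef0;                     // arbitrary example address
    uint64_t offset = addr % LINE_BYTES;                  // low 6 bits: byte in line
    uint64_t set    = (addr / LINE_BYTES) % NUM_SETS;     // next 6 bits: which set
    uint64_t tag    = addr / (LINE_BYTES * NUM_SETS);     // everything above
    printf("addr=0x%llx -> tag=0x%llx set=%llu offset=%llu\n",
           (unsigned long long)addr, (unsigned long long)tag,
           (unsigned long long)set, (unsigned long long)offset);
    // A lookup compares the tag against all 8 ways in that one set;
    // a bigger cache means more sets and wider tags, so more work per access.
    return 0;
}
```

That's the plumbing that has to run on every single access, and it's exactly the part that gets slower as the cache gets bigger.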
that much cache could be detrimental to the speed of your CPU
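And if you want to see the hierarchy on your own machine without a die shot: on Linux, the kernel exposes the cache topology through sysfs. A quick sketch (the paths are standard Linux sysfs for cpu0; scanning index 0..9 is just a loose upper bound):

```c
// Print the cache hierarchy Linux exposes under
// /sys/devices/system/cpu/cpu0/cache/, one line per cache level.
#include <stdio.h>

static int read_line(const char *path, char *out, size_t len) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    if (!fgets(out, (int)len, f)) { fclose(f); return -1; }
    fclose(f);
    for (char *p = out; *p; p++)   // strip trailing newline
        if (*p == '\n') *p = '\0';
    return 0;
}

int main(void) {
    char path[128], level[16], type[32], size[16];
    for (int i = 0; i < 10; i++) {  // index count is a loose upper bound
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", i);
        if (read_line(path, level, sizeof level)) break;
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/type", i);
        read_line(path, type, sizeof type);
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/size", i);
        read_line(path, size, sizeof size);
        printf("L%s %-12s %s\n", level, type, size);
    }
    return 0;
}
```

On a typical desktop part this prints something like a small L1 Data/Instruction pair, a per-core L2, and one big shared Unified L3: exactly the size ladder described above.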