My understanding is that all of the codecs we are discussing are deterministic. If you have evidence to the contrary I'd love to see it.
Decoding is deterministic. Encoding depends on the encoder.
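If you want to check that yourself, here's a rough sketch (the file names are hypothetical and a reasonably recent ffmpeg build is assumed) that hashes the decoded frames with ffmpeg's md5 muxer: the same bitstream decodes to identical frames every time, while two encoders fed the same source produce different bitstreams.

```python
import subprocess

def frame_md5(path: str) -> str:
    """Decode the first video stream and hash the raw decoded frames (ffmpeg's md5 muxer)."""
    out = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-map", "0:v:0", "-f", "md5", "-"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Decoding the same bitstream twice gives the same hash: decoding is deterministic.
print(frame_md5("episode.mkv") == frame_md5("episode.mkv"))

# Two different encodes of the same source (say, x265 vs NVENC) hash differently,
# because each encoder makes its own rate/distortion decisions.
print(frame_md5("episode_x265.mkv") == frame_md5("episode_nvenc.mkv"))  # almost certainly False
```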
The evidence you're asking for is something you can test yourself or find with a quick search; there are thousands of results. CPU encoding beats GPU encoding on quality no matter which codec you use, and that hasn't changed in years. Here's one of many explanations, straight from the developers of a popular encoding tool:
https://handbrake.fr/docs/en/latest/technical/performance.html
GPU encoders like NVENC run their own algorithms in dedicated hardware on the graphics card. The output is compatible with x265's (both produce standard H.265/HEVC bitstreams), but the encoder itself is not identical, and there are far fewer options to tweak to optimize your video.
Encoding on the GPU is orders of magnitude faster, but (in my experience) the output is objectively worse, introducing lots of artifacts.
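For a concrete side-by-side (a sketch only; the file names are placeholders and the exact option names vary by ffmpeg build, e.g. newer NVENC builds prefer the p1–p7 preset names), note how different the two encoders' option sets are:

```python
import subprocess

SOURCE = "input.mkv"  # hypothetical source file

# Software encode with x265: quality is steered with -preset/-crf plus many x265-specific parameters.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx265", "-preset", "slow", "-crf", "20",
    "-c:a", "copy", "cpu_x265.mkv",
], check=True)

# Hardware encode with NVENC: far fewer knobs; quality is steered with -rc/-cq instead of -crf.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "hevc_nvenc", "-preset", "slow", "-rc", "vbr", "-cq", "23", "-b:v", "0",
    "-c:a", "copy", "gpu_nvenc.mkv",
], check=True)
```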
This. It sounds really odd to me that the GPU would make what are basically just math calculations somehow "different" from what the CPU would do.
Every encoder does different math. Different software, and different profiles within the same software, do different math too.
Also, GPU encoding isn't even using the GPU cores. It's using separate fixed-function hardware that supports far fewer operations than a CPU does. They're not running the same code.
But even if you did compare GPU cores to CPU cores, they're not the same. GPUs have a different set of operations from a CPU, because they're designed for different things. GPUs bundle a group of "cores" under one control unit: they all execute the exact same operation at the same time, and have significantly less capability beyond that. Code that branches a lot, especially if there's no easy way to restructure the data so that all 32 cores under a control unit* branch the same way, can easily fail to benefit from that design.
As architectures get more complex, GPUs are adding features that don't have great analogues on a CPU yet, and CPUs are gaining more ways to apply the same operation to (smaller) sets of data. But at the end of the day, the answer to your question is that they aren't doing the same math, and because of the limitations of the kind of math GPUs are best at, nobody has much incentive to build an encoder that leans on the GPU cores for acceleration.
*Last I checked, that's what a warp on Nvidia cards is. It could change if there's a reason to.
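To make the divergence point concrete, here's a toy Python model (not real GPU code, just an illustration under the assumptions above): 32 lanes share one program counter, so when a branch splits the warp, both sides get executed with inactive lanes masked off, and the divergent case pays for both paths.

```python
# Toy SIMT model: 32 "lanes" execute in lockstep under one control unit.
# A divergent branch is handled by running BOTH sides with per-lane masks,
# so its cost is roughly the sum of the two paths, not whichever one a lane took.

WARP_SIZE = 32

def run_warp(values):
    cost = 0
    results = [0] * WARP_SIZE

    take_if = [v % 2 == 0 for v in values]   # per-lane branch condition

    if any(take_if):                         # "if" side runs once for the whole warp
        cost += 5                            # pretend this path costs 5 cycles
        for lane, active in enumerate(take_if):
            if active:
                results[lane] = values[lane] * 2

    if not all(take_if):                     # "else" side also runs if any lane needs it
        cost += 7                            # pretend this path costs 7 cycles
        for lane, active in enumerate(take_if):
            if not active:
                results[lane] = values[lane] + 1

    return results, cost

# All lanes agree -> only one path runs (cost 5). Lanes disagree -> both run (cost 12).
print(run_warp([2] * WARP_SIZE)[1])          # 5
print(run_warp(list(range(WARP_SIZE)))[1])   # 12
```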
GPU encoders basically all run at the equivalent of "fast" or "veryfast" CPU encoder settings.
Most high-quality, small-size encodes are run at the "slow", "veryslow", or "placebo" CPU encoder presets, with a lot of the parameters that aren't tunable on GPU encoders set to specific tunings depending on the content type.
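As an example of that kind of content-specific tuning (a sketch; the file name is hypothetical and the CRF value is arbitrary, but "grain" and "animation" are real x265 tunes), a quality-focused CPU encode might look like:

```python
import subprocess

# Quality-over-speed x265 encode: slow preset plus a content-specific tune
# ("grain" for grainy film sources, "animation" for cartoons). NVENC exposes nothing comparable.
subprocess.run([
    "ffmpeg", "-i", "film.mkv",
    "-c:v", "libx265", "-preset", "veryslow", "-crf", "18", "-tune", "grain",
    "-c:a", "copy", "film_x265_grain.mkv",
], check=True)
```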
NVENC has a slow preset:
https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html#command-line-for-latency-tolerant-high-quality-transcoding
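For reference, a quality-oriented NVENC invocation along the lines of what that page documents might look something like the sketch below (not the exact command from the docs; flag availability depends on your ffmpeg build and driver):

```python
import subprocess

# NVENC pushed toward quality: slow preset, VBR with a constant-quality target,
# lookahead and spatial AQ enabled. Still far fewer knobs than x265 exposes.
subprocess.run([
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "hevc_nvenc", "-preset", "slow",
    "-rc", "vbr", "-cq", "19", "-b:v", "0",
    "-rc-lookahead", "32", "-spatial_aq", "1",
    "-c:a", "copy", "output_nvenc_hq.mkv",
], check=True)
```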
As they expand the NVENC options that are exposed on the command line, is it getting closer to CPU-encoding level of quality?