[–] [email protected] 4 points 7 months ago (2 children)

Is there a perfect scheduler that is non-optimal in the big-O sense but optimal if you're looking at maximizing hardware utilization? In other words, a scheduler that takes a long time to decide how to allocate CPU time to each process, but produces an optimal total CPU utilization? I realize it wouldn't be ideal in practice, since we'd essentially have these "sudden stops" while it recalculates the schedule. I'm just more interested in the theory.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

If you have a fixed collection of processes to run on a single processor and unlimited time to schedule them in, you can always brute-force all permutations of the processes and then pick whichever permutation maximizes or minimizes whatever property you like. The problem with this approach is its awful time complexity: n processes have n! possible orderings, so the search blows up factorially.
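Just to sketch the idea (a toy example with made-up burst times, minimizing average completion time as a stand-in for whatever property you care about):

```python
# Minimal sketch of the brute-force approach: known, fixed CPU bursts for a
# handful of processes on one processor; try every ordering and keep the
# best one. Checks n! permutations, which is why it's hopeless at scale.
from itertools import permutations

def average_completion_time(order):
    """Average completion time if processes run to completion in this order."""
    elapsed = 0
    total = 0
    for burst in order:
        elapsed += burst   # this process finishes at time `elapsed`
        total += elapsed
    return total / len(order)

bursts = [8, 3, 12, 5]  # hypothetical CPU bursts, arbitrary time units

best = min(permutations(bursts), key=average_completion_time)
print(best, average_completion_time(best))
```

For this particular objective the brute force just rediscovers shortest-job-first (which is already known to be optimal for average completion time); the expensive cases are objectives with no known greedy shortcut.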

Edit: There are probably other subtle issues that can arise too, like I/O interrupts and other weird events, fwiw.

[–] [email protected] 3 points 7 months ago (1 children)

How would you deal with iowait in a system like that? I can happily burn 100% of CPU time calling poll() in a loop, but that's not useful work…
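To make the contrast concrete, a rough sketch (using Python's select.poll wrapper on Linux, purely as an illustration since no language was named in the thread):

```python
# Two ways to "wait" for a file descriptor to become readable. Both use
# poll(), but only one of them burns CPU. Linux/POSIX only.
import select
import sys

fd = sys.stdin.fileno()          # any readable file descriptor
poller = select.poll()
poller.register(fd, select.POLLIN)

def busy_wait():
    # timeout=0 makes poll() return immediately, so this loop spins at
    # 100% CPU while accomplishing nothing until data arrives.
    while not poller.poll(0):
        pass

def blocking_wait():
    # With no timeout, poll() sleeps in the kernel until the fd is ready,
    # so the scheduler can hand the CPU to a process with real work.
    poller.poll()
```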

[–] [email protected] 1 points 7 months ago

¯\_(ツ)_/¯

I guess that's why I asked. I'm just curious if it's even possible.