You need a ~$15 relay board that passes the power-on signal from the motherboard to the second PSU, or it won't turn on.
If you're talking about something like the add2psu boards that jump the PS_ON line of the secondary power supply once the 12V rail of the primary one is ready, then I'm already on it the DIY way. Thanks for the heads up though :-).
Expect 1-5 tokens per second (realistically more like 2-3).
5 tokens per second would be wonderful compared to what I'm using right now, since it averages ~1.5 tok/s with 13B models (KoboldCpp through Vulkan on a Steam Deck). My main reasons for upgrading are bigger context/models plus faster prompt processing, but I suspect the latter will also be handicapped by offloading to RAM.
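For a rough sanity check on what a 3060 could do, token generation is mostly memory-bandwidth-bound, so tok/s tops out near bandwidth divided by the bytes read per token. All the numbers below are my assumptions (not from this thread): ~360 GB/s for the 3060, ~7.4 GB for a 13B model at Q4.

```python
# Rough upper bound on generation speed for a memory-bandwidth-bound LLM.
# Assumed numbers: RTX 3060 12GB ~360 GB/s memory bandwidth,
# 13B model at Q4 quantization ~7.4 GB of weights.
def max_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Each generated token reads (roughly) every weight once.
    return bandwidth_gb_s / model_size_gb

print(max_tokens_per_s(360, 7.4))  # theoretical ceiling; real-world is well below this
```

Real throughput lands far under that ceiling, but it shows why a fully-offloaded model on a discrete GPU beats shared-memory APU setups like the Deck by a wide margin.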
How much VRAM does the 3060 you're looking at have?
I'm looking at the 12GB version. I'm also leaving myself room to add another card in the future (most likely through a 1x mining riser) if I manage to save up enough, to bump it up to 24GB total with the model split across both, though I doubt I'll manage.
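If it helps with planning the two-card idea, this is the kind of back-of-envelope layer split I'd pencil out first. The model dimensions are assumptions for illustration (a 33B-class model at Q4, ~60 layers at ~0.3 GB each), not measured values:

```python
# Sketch: how a layer split across two 12GB cards might pencil out.
# Assumed: 33B-class model at Q4 with ~60 layers, ~0.3 GB per layer.
def layers_per_gpu(vram_gb: float, layer_gb: float, overhead_gb: float = 1.5) -> int:
    # Reserve some VRAM for the CUDA/Vulkan context, KV cache, and scratch buffers.
    usable = vram_gb - overhead_gb
    return int(usable / layer_gb)

per_card = layers_per_gpu(12, 0.3)
print(per_card)      # layers one 12GB card could hold
print(2 * per_card)  # two cards together: enough to cover ~60 layers
```

The overhead figure is a guess; the point is just that a model too big for one card can fit once the layers are divided across both.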
Sorry for the wall of text, and thanks for the help.
Is bifurcation necessary because of how CUDA works, or because of bandwidth constraints? Mostly asking because the secondary card will be limited by the x1 link mining risers have (and also because unfortunately both machines lack that capability :'-) ).
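On the x1 riser worry, the comforting bit is that with a layer split, only the hidden-state activations cross the link per token during generation, not the weights. A rough sketch of the numbers (hidden size and link speed are assumptions for a 13B-class model on PCIe 3.0 x1):

```python
# Estimate per-token traffic across the inter-GPU link for a layer split.
# Assumed: 13B-class model, hidden size 5120, fp16 activations,
# PCIe 3.0 x1 ~0.985 GB/s usable bandwidth.
def per_token_link_bytes(hidden_size: int, dtype_bytes: int = 2) -> int:
    # Only the hidden state at the split point crosses the link per token.
    return hidden_size * dtype_bytes

bytes_per_token = per_token_link_bytes(5120)  # ~10 KB per token
link_bytes_per_s = 0.985e9
print(link_bytes_per_s / bytes_per_token)     # tokens/s the link alone could sustain
```

So generation itself shouldn't be link-bound; where the x1 link hurts is loading the model into VRAM and batched prompt processing, which push much more data at once.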
Also, if I offload layers to the GPU manually so that only the context overflows into RAM, will that be less of a slowdown, or will it be comparable to letting model layers spill into RAM? (Sorry for the question bombing, I'm trying to understand how much I can realistically push the setup before I pull the trigger.)
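For the context question, it might help to size the KV cache explicitly; at these model sizes, keeping it on the GPU only costs a few GB. The dimensions below are assumptions for a 13B-class model, not anything measured:

```python
# Rough KV-cache size: 2 (K and V) * layers * hidden_size * context * bytes.
# Assumed: 13B-class model with 40 layers, hidden size 5120, fp16 cache.
def kv_cache_gb(n_layers: int, hidden: int, ctx: int, dtype_bytes: int = 2) -> float:
    return 2 * n_layers * hidden * ctx * dtype_bytes / 1e9

print(kv_cache_gb(40, 5120, 4096))  # a few GB for a 4096-token context
```

So with a 12GB card and a Q4 13B model (~7.4 GB of weights), the weights plus a full-context cache are close to the limit; whichever part spills to RAM gets re-read over PCIe every token, which is why any overflow hurts.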