Do you have sources on the “burn out” phenomenon?
As far as I’m aware there’s stress buildup from thermal cycles and overheating - but no burn out from keeping the hardware at, let’s say, 95 °C.
You’re in violation of the ’84 treaty on account of illegal thoughts.
It even has integrated graphics - so throw out that GPU, I have my server with a 6700K pulling less than 20 W at idle!
We have the four freedoms that guarantee the free movement of goods, capital, services, and people as part of the European single market.
Good-looking UI (designs) and good UX are not the same!
Apple is known for doing both relatively well (especially on the first iPhone).
However, personally I still dislike the Apple UI (the macOS dock eats too much screen space, iOS close all - where?, the iOS back gesture, … for example) and UX (the system actively tries to prevent me from doing certain things). I mean, in the end there often are keybindings that do the job, but those are harder to learn than the Emacs keybindings imo.
Include ComputerCraft and you can set up a connection back to the real world!
To be honest, I switched to Wayland years ago precisely because of the better perceived input/cursor experience.
Change my mind, but an average of half a frame of input latency is much preferred when in return the cursor position on the screen actually aligns with all the other content displayed.
Plus, I’m very sensitive to tearing, so whenever it happens I get the impression that there was a huge rendering error.
Well, and on the note that the cursor might visibly stutter: sure, but it’s a bit misleading. A game pinning the GPU to 100 % and running at 5 FPS doesn’t mean that your cursor will be rendered at 5 FPS. So far I’ve only noticed cursor lag/stutter in OOM situations, but never under heavy GPU or CPU load.
Dionaea muscipula
I was in a building that was rebuilt after a fighter jet crashed into the one before it…
Or about half a year if we’re only counting the time during which I’ve been alive.
13.787 ± 0.020 billion years
Why does this look like another bot post?
The simulation terminates.
I’m curious, how do you run the 4x3090s? The FE cards would be 4×3 = 12 PCIe slots and 4×16 = 64 PCIe lanes… Did you NVLink them? What about transient power spikes? Any clock or even VBIOS mods?
I’m also on p2p 2x3090 with 48GB of VRAM. Honestly it’s a nice experience, but still somewhat limiting…
I’m currently running deepseek-r1-distill-llama-70b-awq with the Aphrodite engine, though the same applies to llama-3.3-70b. It works great and is way faster than Ollama, for example. But my max context is around 22k tokens. More VRAM would allow me more context; even more VRAM would allow for speculative decoding, CUDA graphs, …
Maybe I’ll drop down to a 35b model to get more context and a bit of speed. But I can’t really justify the possible decrease in answer quality.
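For reference, here’s a back-of-the-envelope estimate of why context length eats VRAM so fast. This is just a sketch: the architecture numbers (80 layers, 8 KV heads via GQA, head dim 128) are the published Llama-3 70B config, and `kv_cache_bytes` is a hypothetical helper, not part of any engine’s API - check your model’s config.json before trusting the result.

```python
# Rough KV-cache size estimate for a Llama-3-70B-style model.
# Assumed architecture: 80 layers, 8 KV heads (GQA), head dim 128, fp16 cache.

def kv_cache_bytes(tokens, layers=80, kv_heads=8, head_dim=128, dtype_bytes=2):
    """Bytes of KV cache for one sequence: K and V, per layer, per token."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

per_token = kv_cache_bytes(1)                    # 327,680 bytes ≈ 0.31 MiB/token
ctx_gib = kv_cache_bytes(22_000) / 2**30
print(f"~{ctx_gib:.1f} GiB of KV cache for a 22k-token context")  # ~6.7 GiB
```

So on top of the quantized weights, every extra 1k tokens of context costs roughly a third of a GiB, which is why 48 GB fills up quickly.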
I’m running such a setup!
This is my NixOS config, though feel free to ignore it, since it’s optimized for me and not for others.
How did I achieve your described setup?
The added info from pv is also nice ^^
To personalize your setup is to deviate from the default config to better match your preferences - whatever those may be, however over the top they may be.
That doesn’t imply any optimization, unless it’s what you personally prefer.
I have yet to see a personalized setup with blurred window borders xD
Shouldn’t everyone who installed Arch the right way be able to do it on most distros, simply after installing pacman?
Though I think changing the partition layout (shrink, create new, migrate, delete old) would count as installing another distro on top…
Want a challenge? Start with something like Silverblue.