I'm running a 7950X3D and a 4090. Anybody know how to force Studio to use more than 7% CPU power? High-detail models take a while to slice, but my PC is barely working.
Assuming Windows, you can use Task Manager to bump the process priority up to “real time” (although system performance might be kind of choppy if you max it out this way).
It’s probably not going to speed things up a lot, though. I don’t think the slicer is limiting CPU usage. Slicing (particularly for large models) is an I/O- and memory-bound task. There’s a lot of data that has to be read for the slicing operation to happen. Those bottlenecks limit how much processing the CPUs can do.
How much DRAM do you have, and can you overclock it?
Where’s your page file? On a separate drive from your data drive? Ideally, an NVMe SSD?
64 GB of DDR5 @ 6400. It's barely using memory either.
Samsung 9100 PRO PCIe 5.0 SSD.
Everything is good in the PC. Other apps are able to use it to its fullest. Maybe it's only using a single thread or something. I'll check. When you slice a large file, how much of your resources are being used?
I get a nice spike on all cores, and then they plummet when the slice gets to 15%. Then it's slow going after that. 10-minute slice on large files.
Also not sure why Studio shows up under Edge in Task Manager.
Those specs ought to do it.
There’s not a lot of detail out there on the Orca slicer’s multithreading. Prusa definitely uses as many cores as it can. But Orca?
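For what it's worth, I believe the Prusa-family slicers (and, by extension, the Orca/Bambu forks) lean on Intel's oneTBB library for their parallel loops; take that as my assumption, not something I've verified in the source. If it's right, the "slice every layer" stage would follow a pattern roughly like this sketch (made-up names, not actual slicer code):

    // Sketch only: a per-layer loop parallelized with oneTBB. Every layer is
    // independent, so the runtime can hand chunks of layers to every core.
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    struct Layer {
        // contours found at one Z height (left empty for the sketch)
    };

    // Hypothetical per-layer work; stands in for the real geometry code.
    Layer sliceOneLayer(double z) { (void)z; return Layer{}; }

    std::vector<Layer> sliceAllLayers(const std::vector<double>& layerHeights) {
        std::vector<Layer> layers(layerHeights.size());
        tbb::parallel_for(
            tbb::blocked_range<std::size_t>(0, layerHeights.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    layers[i] = sliceOneLayer(layerHeights[i]);
            });
        return layers;
    }

A loop like that should light up every core, which matches the spike you saw. The stages that run after the 15% mark are presumably the ones that don't go through a parallel loop like this.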
I will have to load a big model and slice it to see what my resources are doing. My machine is a few years old (though it was top of the line when I built it), but I’ve never gotten impatient with slicing. Maybe my models aren’t complicated enough.
There’s a Benchy for benchmarking printer speed/quality. You’d think there’d be a “reference model” for benchmarking slicer performance. But I’ve never seen one.
I'm making the TPU shoes. Normally the slicing time is OK, but these shoes take forever. I've had a few models take 20 minutes or longer just because of triangle count.
I changed it to Real Time, but no change.
I’ve wondered the same thing. Just to confirm — you’re asking if your RTX 4090 can accelerate the slicing process in Bambu Studio or PrusaSlicer, right?
I researched this a while back and was surprised to learn that despite how logical it seems to offload slicing to a GPU, that’s not how these slicers are built. They’re entirely CPU-bound.
Both Bambu Studio and PrusaSlicer are based on the legacy Slic3r codebase, which uses sequential C++ algorithms with no provisions for GPU acceleration. The core operations — model triangulation, layer generation, path planning, infill, and support structures — rely on complex, branching logic and recursive geometry operations like constructive solid geometry (CSG) and boolean mesh evaluation. These are not easily parallelized in a way that maps cleanly to CUDA.
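To make the "layer generation" part concrete: at its heart the slicer has to intersect every triangle with every layer plane, and even a stripped-down toy version (my own sketch below, not code from either slicer) is full of branches and special cases:

    // Toy sketch of the core of layer generation: find where one triangle
    // crosses the horizontal plane z = layerZ. Real slicers also have to cope
    // with vertices sitting exactly on the plane, coplanar faces, open meshes,
    // and so on, which is where most of the branchy complexity comes from.
    struct Vec3 { float x, y, z; };
    struct Segment { Vec3 a, b; };

    // Interpolate the crossing point on edge p->q at height z.
    static Vec3 crossEdge(const Vec3& p, const Vec3& q, float z) {
        float t = (z - p.z) / (q.z - p.z);
        return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y), z };
    }

    // Returns true and writes the contour segment if the triangle crosses the plane.
    bool sliceTriangle(const Vec3 tri[3], float layerZ, Segment& out) {
        Vec3 pts[2];
        int n = 0;
        for (int i = 0; i < 3; ++i) {
            const Vec3& p = tri[i];
            const Vec3& q = tri[(i + 1) % 3];
            // An edge crosses the plane when its endpoints lie on opposite sides.
            if ((p.z < layerZ) != (q.z < layerZ) && n < 2)
                pts[n++] = crossEdge(p, q, layerZ);
        }
        if (n != 2)
            return false;
        out = { pts[0], pts[1] };
        return true;
    }

Multiply that by millions of triangles and hundreds of layers, then add the downstream contour stitching, offsetting, and boolean work, and you can see why it stays heavy, branchy CPU code.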
CUDA excels at highly parallel, data-independent workloads — matrix math, image filters, deep learning inference — but slicing is algorithmically irregular and serial. Most steps depend on the previous ones, which kills GPU throughput.
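Here's that contrast in miniature (purely illustrative, nothing from the actual codebases): the first loop is independent, element-wise work that maps perfectly to a GPU; the second, a greedy "where does the nozzle go next" ordering, can't be split across threads because every step depends on the result of the previous one:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { float x, y; };

    // Data-parallel: no iteration depends on any other, so a GPU (or all your
    // CPU cores) can chew through it at full speed.
    void scaleAll(std::vector<float>& v, float k) {
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] *= k;
    }

    // Serially dependent: a toy greedy travel planner. Where the nozzle goes
    // next depends on where the previous move ended, so the outer loop cannot
    // simply be distributed across threads.
    std::vector<std::size_t> orderIslands(const std::vector<Point>& islands) {
        std::vector<std::size_t> order;
        std::vector<bool> used(islands.size(), false);
        Point cur{0.0f, 0.0f};
        for (std::size_t step = 0; step < islands.size(); ++step) {
            std::size_t best = 0;
            float bestDist = 1e30f;
            for (std::size_t i = 0; i < islands.size(); ++i) {
                if (used[i]) continue;
                float dx = islands[i].x - cur.x;
                float dy = islands[i].y - cur.y;
                float d = std::sqrt(dx * dx + dy * dy);
                if (d < bestDist) { bestDist = d; best = i; }
            }
            used[best] = true;
            order.push_back(best);
            cur = islands[best];   // the dependency that serializes the loop
        }
        return order;
    }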
I even looked into whether CUDA support could be patched in for my 3080 Ti (12GB). It would require a full re-architecture of the mesh slicing and path generation engines, probably with a redesigned data model optimized for parallel execution. Realistically, that's only feasible if you're building a slicer from scratch, and, I might quickly add, it's well beyond my rusty coding skills and current technical acumen. But one can always dream, right?
So yes, all that GPU horsepower just sits idle while your CPU does the heavy lifting. It’s a software architecture limitation, not a hardware one.
To illustrate, I use a benchmark model that takes up to five minutes to slice. On my i7-12700K with an RTX 3080 Ti, you can clearly see during the slicing process that the GPU is barely touched. The ~20% usage shown here is from background tasks and my 8K monitor — during slicing, GPU usage typically drops to 3%.
I also explored whether recompiling the source with CUDA libraries would help — hoping for some magical compiler flag. No dice. I even asked ChatGPT, and here’s what it had to say:
You cannot just recompile Bambu Studio with CUDA support. You’d need to:
- Fork the slicer
- Replace key parts of the engine with GPU-parallel equivalents
- Manage device memory, kernel invocations, and sync issues manually
This is a research-level project, not a simple mod or build flag.
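To put that middle bullet about device memory, kernel invocations, and sync in concrete terms, here is roughly the smallest CUDA sketch that offloads even one trivial, embarrassingly parallel piece (a "does this triangle span this layer height" test). Everything here is hypothetical; nothing like it exists in Studio:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Flags every triangle whose Z range straddles the layer plane.
    __global__ void crossesPlane(const float* zMin, const float* zMax,
                                 float planeZ, int* result, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            result[i] = (zMin[i] <= planeZ && zMax[i] >= planeZ) ? 1 : 0;
    }

    int main() {
        const int n = 1 << 20;                          // one million triangles
        std::vector<float> zMin(n, 0.0f), zMax(n, 10.0f);
        std::vector<int> hits(n);

        // Device memory you manage by hand.
        float *dMin, *dMax;
        int *dHits;
        cudaMalloc((void**)&dMin, n * sizeof(float));
        cudaMalloc((void**)&dMax, n * sizeof(float));
        cudaMalloc((void**)&dHits, n * sizeof(int));
        cudaMemcpy(dMin, zMin.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dMax, zMax.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // Kernel invocation and explicit synchronization.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        crossesPlane<<<blocks, threads>>>(dMin, dMax, 5.0f, dHits, n);
        cudaDeviceSynchronize();

        cudaMemcpy(hits.data(), dHits, n * sizeof(int), cudaMemcpyDeviceToHost);
        cudaFree(dMin);
        cudaFree(dMax);
        cudaFree(dHits);
        printf("first triangle crosses the layer: %d\n", hits[0]);
        return 0;
    }

And that is just the easy, independent part; the boolean geometry and path planning that come after it are exactly the pieces that don't decompose this way.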
Imagine my disappointment at learning this unfortunate fact.
It's mind-boggling. I figured out it's really only using 2 threads. What a waste.