As pointed out by one of the YouTubers (I’m forgetting now which one), the first time you run the visual calibration plate, you are given results which show the improvement in accuracy after the calibration vs. beforehand. You would think that the improved calibration would then become the new baseline, so if you were to run a second calibration immediately after the first, there would be little to no improvement. However, that doesn’t happen. Instead, the reported improvement for the second round seems to compare the second calibration against no calibration at all.
Is that what it’s doing, or is there something else in play that would explain the larger than expected “improvement” after the second calibration?
Round one gives positive and negative values around the set flow ratio.
Round two only goes from zero downwards.
That means you pick the patch from the first batch that looks the best and has a tiny bit of over-extrusion.
Don’t go for the number that looks a bit too thin on the infill lines.
The second round will then use that value and provide patches with a LOWER flow ratio.
Problems always start when people take the Bambu instructions literally…
Then you end up with a good-looking first patch that might already be the wrong one to pick…
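To make the arithmetic concrete, here’s a rough sketch of how the two passes combine. The patch ranges and the ×(100 + modifier)/100 formula are my assumptions about how the slicer applies the picks, not something confirmed by Bambu’s docs, but it shows why picking a slightly over-extruded patch in round one matters: round two can only bring the value down.

```python
# Sketch of how the two-pass flow ratio calibration combines the picks.
# Patch ranges and the formula below are assumptions for illustration.

def new_flow_ratio(current_ratio: float, modifier_pct: float) -> float:
    """Flow ratio after selecting a patch labelled with `modifier_pct`."""
    return current_ratio * (100 + modifier_pct) / 100

base = 0.98                 # flow ratio currently set in the filament profile

# Pass 1: patches span both sides of the current value (assumed -20..+20 here).
pass1_pick = +5             # pick the patch that looks *slightly* over-extruded
after_pass1 = new_flow_ratio(base, pass1_pick)

# Pass 2: patches only go from zero downwards (assumed 0..-9 here),
# so it can only reduce the pass-1 result, never raise it.
pass2_pick = -4
after_pass2 = new_flow_ratio(after_pass1, pass2_pick)

print(f"after pass 1: {after_pass1:.4f}")   # 1.0290
print(f"after pass 2: {after_pass2:.4f}")   # 0.9878
```

If the pass-1 pick were already slightly too thin, pass 2 could only make it thinner, which is the trap described above.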
I did do the calibration. It says “completed” but I did not see any corrections? Would be interesting.
I printed some special test parts which I will analyze optically. I did print the part before and after calibration and I’m curious to find out the difference. The method is very accurate and can also be used to measure the accuracy of sheet metal parts.
I’m currently running the beta firmware, but it provided the same type of summary information when I ran it with the official release firmware.
I guess somehow it retains a memory of how things measured prior to any calibration at all, and so it compares each new calibration to that memory? That seems to be the literal interpretation of what it is saying, i.e. rather than using each new calibration as the baseline and comparing it against the most recent prior calibration. Or maybe each new calibration first flushes the prior calibration, so every time it’s as though it was the first time. I’m guessing maybe this.
Whatever it is doing, it seems the results are highly repeatable!
The simple way to think of it is that with the vision encoder calibration, the center of the nozzle will be placed within 30 microns of where it theoretically should be. If shrinkage happens after that, it’s not the fault of the vision encoder calibration. A recent video by Aurora Tech seemed to confuse the two.
It seems like your best hope for getting that would be starting with the vision encoder plate then, as without it the results indicate mine would have been out by as much as 0.306mm.
I’ve had good luck dialing in the shrinkage compensation pretty tightly after two or three iterations. Well enough to meet your plus/minus 0.1mm requirement.
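For anyone curious what “two or three iterations” looks like in practice, here’s a minimal sketch of the print/measure/scale loop. The measurements are made-up example numbers; the idea is simply that each iteration multiplies the previous compensation by nominal/measured.

```python
# Minimal sketch of the print/measure/scale loop used to dial in shrinkage
# compensation. Measurements below are hypothetical example values.

nominal_mm = 100.0       # designed dimension
scale = 1.0              # shrinkage compensation applied in the slicer (100%)

measurements = [99.62, 99.96, 100.02]   # hypothetical results of successive prints

for i, measured in enumerate(measurements, start=1):
    error = measured - nominal_mm
    scale *= nominal_mm / measured      # compound the correction each iteration
    print(f"iteration {i}: measured {measured:.2f} mm "
          f"(error {error:+.2f} mm) -> next scale {scale * 100:.3f}%")
```

With numbers like these, the error falls inside the ±0.1 mm window by the second or third print, which matches my experience.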
The vision encoder calibration by default is run at ambient temperature. I wonder if the belts would expand enough under higher chamber temperature that we should be running the vision encoder calibration at working temperature instead?
The bigger question is what are the best tests we can run to confirm that the vision encoder calibration is working correctly and delivering the promised results, rather than just relying on it to tell us whether it is or not, i.e. how do we know whether it’s telling the truth?
This is a conundrum. If we heat the chamber to working temperature, then the vision encoder plate itself is going to expand and no longer be accurate. I guess we’d need different ones that are calibrated to be accurate at different temperatures, or else compensate using the measured temperature and a known coefficient of thermal expansion?
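A quick back-of-envelope check suggests the effect isn’t negligible. The lengths, temperature delta, and materials below are assumptions for illustration only, but they show the expansion can be several times the 30 micron figure mentioned above.

```python
# Back-of-envelope estimate of linear thermal expansion: dL = L * alpha * dT.
# Lengths, temperature delta, and CTE values are illustrative assumptions.

def expansion_mm(length_mm: float, cte_per_k: float, delta_t_k: float) -> float:
    return length_mm * cte_per_k * delta_t_k

length = 250.0          # roughly the span of a calibration plate or belt run, mm
delta_t = 35.0          # e.g. ambient 25 C -> 60 C chamber, K

for name, cte in [("steel, ~12e-6/K", 12e-6), ("aluminium, ~23e-6/K", 23e-6)]:
    microns = expansion_mm(length, cte, delta_t) * 1000
    print(f"{name}: about {microns:.0f} microns over {length:.0f} mm")
```

So roughly 100 to 200 microns over a 250 mm span, depending on material, which is why calibrating at one temperature and printing at another could matter.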
Yes, that’s why I bought it. The larger the parts you print, the more pronounced the errors will be. How did you compensate for shrinkage? So far I never did that; if I needed accurate prints I did 2 iterations: print, measure, correct, and reprint.
We do have one in the company I work for, so maybe yes.
What would be really cool is if Bambu used some of the technology from the high-accuracy nozzle offset calibration and made a test that would then adjust filament profiles for dimensional accuracy.