Is the Vision Encoder worth it? Does it make any big difference if you use it?

The vision encoder plate is not going to calibrate flow or fix other extrusion-related issues.

It only improves the position accuracy of the motion system.

If your print quality is better than before, that is due to something else that changed in the meantime.

1 Like

Why only the mini? Holy cheapskates, Robin! :wink:

I bought an encoder the day it came out.

@StreetSports I hate to be the one to break the illusion for you, but he and our mutual friend Hank are cut from the same cloth, if you know what I mean. :wink: Well, not exactly the same cloth, but close enough that there can be no doubt.

1 Like

I did not realize that… I’m going to check when I go home and then re-run the calibration!

Yea at least Prusa gives you gummi bears!

For me the vision encoder did improve accuracy, in that the difference between the X and Y axes decreased. It was small to begin with, but after calibration it’s even smaller.

I started printing the shrinking test by Alex-vG.

The average outside dimension in X: 149.82, so 0.18mm too small.
The average outside dimension in Y: 149.92, so 0.08mm too small.
The average inside dimension in X: 139.85, so 0.15mm too small.
The average inside dimension in Y: 140.13, so 0.13mm too large.
(This was the result after a first round of shrinkage compensation tuning with PLA.)
So this is fairly accurate, and for most applications perfectly OK. However, I found that the Y axis consistently delivered somewhat larger dimensions (between 0.1 and 0.2mm on a dimension of approx. 150mm) than the X axis. I was curious whether the vision encoder calibration would be able to reduce that.
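For anyone who wants to redo this kind of tuning, the compensation factor is just the nominal size divided by the measured size. A quick Python sketch using the X/Y numbers above (the 150mm nominal and the function name are my own, not anything from the slicer):

```python
# Shrinkage compensation from a measured test print.
# NOMINAL_OUT and shrink_compensation() are illustrative names,
# not Bambu/slicer parameters.

NOMINAL_OUT = 150.0  # mm, designed outside dimension

def shrink_compensation(nominal, measured):
    """Scale factor to enter as shrinkage compensation, in percent."""
    return (nominal / measured) * 100.0

measured_x = 149.82
measured_y = 149.92

print(f"X compensation: {shrink_compensation(NOMINAL_OUT, measured_x):.3f} %")
print(f"Y compensation: {shrink_compensation(NOMINAL_OUT, measured_y):.3f} %")
# X ≈ 100.120 %, Y ≈ 100.053 %
```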

After vision encoder calibration I reprinted the exact same G-code, with the same material from the same spool. Results:
The average outside dimension in X: 149.77, so 0.23mm too small.
The average outside dimension in Y: 149.77, so 0.23mm too small.
The average inside dimension in X: 139.96, so 0.04mm too small.
The average inside dimension in Y: 139.98, so 0.02mm too small.

Based on these results, my conclusion is that the encoder was able to correct the mismatch between sizes in the X and Y directions. The remaining mismatch between design and actual size can be tuned further with the shrinkage setting. After that I had the following results:
The average outside dimension in X: 149.91, so 0.09mm too small.
The average outside dimension in Y: 149.91, so 0.09mm too small.
The average inside dimension in X: 140.05, so 0.05mm too large.
The average inside dimension in Y: 140.09, so 0.09mm too large.

I would call that perfectly accurate for an FDM printing process.
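Just to make the X-vs-Y point explicit, here is a small sketch comparing the outside measurements before and after the vision encoder calibration (numbers copied from the posts above):

```python
# X-vs-Y mismatch on the outside dimension, before and after
# vision encoder calibration (measurements quoted in this thread).

before = {"x": 149.82, "y": 149.92}
after  = {"x": 149.77, "y": 149.77}

mismatch_before = abs(before["x"] - before["y"])  # axis-to-axis difference
mismatch_after  = abs(after["x"] - after["y"])

print(f"X/Y mismatch before: {mismatch_before:.2f} mm")  # 0.10 mm
print(f"X/Y mismatch after:  {mismatch_after:.2f} mm")   # 0.00 mm
```

The absolute undersize got slightly worse on the outside dimension, but the two axes now agree, which is exactly what a motion-system calibration should deliver.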

3 Likes

I appreciate the results, but I think your testing started with shrinkage compensation already applied rather than default settings, so the results seem a little skewed, unless I’m misunderstanding. Normally, with default settings, an FDM part comes out too big, not too small.

For example, any time I want to make a hole for a 6x3 magnet I need to design it as 6.15 wide x 3.1 deep. I’ve never had to compensate for a part coming out smaller, because of the layer squish. If I print a 10mm cube it will come out at 10.1 or 10.05.

You must have a lot of shrinkage compensation going on.

1 Like

It’s actually the other way around. Before doing any shrinkage compensation, the parts were even smaller: 0.25mm on a straight dimension of 150mm, after full cool-down of course.
I think shrinkage compensation should be done on large parts.
Nevertheless, each machine, material, and even the environmental conditions will deliver different accuracy deviations. My point was actually that before using the vision encoder, I had differences between 150mm printed in the X direction and 150mm printed in the Y direction. After using the vision encoder, those dimensions were virtually the same, so I at least have a rotationally symmetric part. Making parts fit therefore becomes easier; the remaining inaccuracy can be taken care of in clearance design.
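A rough sketch of why large parts are better for tuning shrinkage: a fixed shrink *percentage* produces an absolute error that grows with size, while calipers only resolve a fixed couple of hundredths. The 0.17 % figure below is just an illustration derived from the ~0.25mm over 150mm mentioned above:

```python
# Absolute undersize from a fixed linear shrink percentage, at
# several part sizes. The 0.17 % value is illustrative, derived
# from the ~0.25 mm / 150 mm quoted in this thread.

shrink_pct = 0.25 / 150.0 * 100.0  # ≈ 0.167 % linear shrink

for nominal in (10.0, 50.0, 150.0):
    error = nominal * shrink_pct / 100.0
    print(f"{nominal:6.1f} mm part -> {error:.3f} mm undersize")
# A 10 mm part loses only ~0.017 mm (caliper noise); a 150 mm part
# loses the full 0.25 mm, which is actually measurable.
```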

1 Like

What I don’t understand is why people always think that calibration alone makes their parts dimensionally accurate.
That thing only calibrates the motion system.

Only through shrinkage compensation can you truly get dimensional accuracy.

1 Like

I agree, we do see lots of posts where people want every dimension to be accurate from a movement calibration alone, but you need both calibration of movements and shrinkage compensation to make something accurate. Otherwise you’re adjusting your shrinkage to a specific part size. You could get a part to be dimensionally accurate with shrinkage alone, and it would work fine if your printer’s movements are accurate. However, if the machine’s movements are not accurate, your part may not scale properly. The same goes for movement calibration alone.

Back years ago, everyone was chasing accuracy by adjusting stepper motor values. It worked pretty well for a specific part with a specific filament, but as soon as you changed either one, accuracy was way worse and didn’t scale properly at all.

These new printers come really accurate from the factory, so shrinkage is usually your biggest factor. That being said, I did notice a difference after using my vision encoder, and I like knowing I’m tuning my shrinkage starting from a printer whose movements are accurate, assuming the vision encoder and camera system are working properly.

I think Bambu Lab is just capitalizing on a gimmick. Even a 0.1 increase in ā€œprecisionā€ and it’s a win, full sell. I don’t blame them…

1 Like

All you have to do is read the Vision Encoder sales pitch page to see why people may expect dimensional accuracy improvement with it.

1 Like

OK, here’s the graph taken from their sales pitch page:

The way I read this isn’t that it guarantees dimensional accuracy in your printed part. How could it? Part of that will inevitably depend on how good your shrinkage compensation number is for the particular filament you’re using. Instead, it’s saying you haven’t got even a prayer of dimensional accuracy unless you use it to remove the inaccuracy from your positioning. For instance, without the vision encoder, it predicts that if you send the nozzle to position (300,300) on your build plate, the X and/or Y of where the center of your nozzle actually is, versus where it should theoretically be, will likely be off by between 0.35mm and 0.45mm. On the other hand, the promise it’s making is that if you use the vision encoder plate to calibrate, it will be off by at most 0.05mm.

That’s all it’s promising. From that improved starting point, the onus is still on you to correctly find and use the proper shrinkage compensation number for the filament you’re using.

On the other hand, if you’re not dialing in any shrinkage compensation number at all for your filament, then that shrinkage error will likely be much larger than the improvement that the vision encoder calibration could produce just by more accurately positioning your nozzle. So, if you’re not taking shrinkage compensation seriously, then there’s an argument for not wasting money on the vision encoder. For the best possible dimensional accuracy, you need both.
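To put rough numbers on the ā€œyou need bothā€ argument, here’s a toy error budget for a 150mm dimension. It assumes the two error sources simply add (a simplification; they can partially cancel), uses an assumed 0.2 % uncompensated shrink, and takes the positional figures from the graph discussed above:

```python
# Toy error budget: positional error + shrinkage error on a 150 mm
# dimension. The 0.2 % shrink is an assumed value for illustration;
# the positional numbers come from the sales-pitch graph.

positional_err_uncal = 0.40   # mm, mid-range of the quoted 0.35-0.45
positional_err_cal   = 0.05   # mm, claimed worst case after calibration
shrink_err           = 150.0 * 0.002  # mm, assumed 0.2 % uncompensated shrink

print(f"No calibration, no compensation: ~{positional_err_uncal + shrink_err:.2f} mm")
print(f"Calibration only:                ~{positional_err_cal + shrink_err:.2f} mm")
print(f"Calibration + compensation:      ~{positional_err_cal:.2f} mm")
# On a part this size the shrink term dominates, which is why
# skipping compensation wastes most of what the encoder buys you.
```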

But it goes even further than that. Without first calibrating using the vision encoder, how confident can you be of your shrinkage compensation number? You can’t. How could you know how much to attribute to shrinkage versus positional inaccuracy?

Make sense?

3 Likes

Do these things, enable this vague feature, use the $100.00 vision encoder plate and your 3D printed parts will fit together more smoothly, the larger the parts the better they fit. You can feel better knowing you fine tuned your printer like a pro and you are now no longer a hobbyist but a true engineer. Congratulations! :wink: :sweat_smile:

1 Like

:man_facepalming: Well, there’s always this guy:

He makes nearly every conceivable error in his review of the vision encoder plate, yet he’s confident in his review anyway. But that’s OK, because with his approach to 3D printing he’s correct in his conclusion that it would do him no good.

1 Like

I saw that video and agree with you. Perfect example of how not every YouTube channel can be a reliable source of information.

1 Like

Neither he nor ChatGPT recognized that the difference between X and Y was significantly reduced. That all dimensions were undersized says something about under-extrusion and/or a wrong shrinkage factor.

It also would have been good if he had checked for skew.

Nothing to add here.

1 Like

A better demo would have been nested squares, something like:


Then, if the sales pitch were correct, he would have seen growing inaccuracy in the dimensions as he progressed from the smallest square to the largest square, even after accounting for shrinkage.
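As a rough illustration of what nested squares would expose: under a pure scale error in the motion system (an assumed 0.2 % here, not a measured value), the absolute error grows linearly with square size:

```python
# Absolute dimensional error of nested squares under a pure scale
# error. The 0.2 % error and the square sizes are assumed values
# for illustration only.

scale_error = 0.002  # 0.2 % too-long axis

for side in (20, 60, 100, 140):  # mm, nested square sizes
    print(f"{side:3d} mm square -> {side * scale_error:.2f} mm oversize")
# The error grows with size, so the outer squares expose it best,
# while shrinkage compensation (a uniform scale) would mask it.
```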

Bro might be rage-baiting, just to get more engagement.

I usually just go in and leave a thumbs-down instead.

1 Like