Spaghetti detection AI

It would be nice to have spaghetti detection as its own option so we could turn it off but keep the other AI features active.
I get way too many false spaghetti detection warnings, which is really bothersome on long overnight prints. When you check the print in the morning you find it stopped in the middle of the night, waiting for you to resume.


I think you can, in the Device tab under the print options. You can set the AI aggressiveness level or disable it completely, but still keep first layer inspection and the other stuff turned on.

The AI that checks for back-ups in the filament poop chute is also included in that AI button. I do keep it on low; that’s why it would be nice to pick exactly what you want the AI to monitor. :grinning:

I thought spaghetti detection was the only AI feature.

I thought so too. And I already had a poop chute blockage and the printer did not recognize anything there. Well, it was my own fault for using a cover over it; it never happened again after removing the cover.


I have had the printer pause when the poop chute was blocked.


Agreed. Spaghetti detection is obscenely over-sensitive even on low. I have had ONE successful intervention from the AI, among DOZENS of false flags. I want a notification-ONLY mode: no pause, just ping my app and I will pause it myself. While the devs are at it, LET ME CHANGE THE NOTIFICATION SOUND ON ANDROID.


Apparently there is no solution for this. I now also have a model where spaghetti is falsely detected, always in the same place. I have changed the supports, but it doesn’t help. My print mainly has a lot of smaller tree supports about 3 cm above the build plate. As soon as the first layers are placed over these supports, I get the warning.

So far my experience is the opposite: it never triggers even when the print has obviously failed.

Does anyone know if the algorithm takes the particular model into account, or does it look generically for a spaghetti pattern? For example, does it consider the expected cross section or the expected outer perimeter?

From my understanding, it looks for stringing. In my case it fails to detect spaghetti most often when I use dark-coloured filament.
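
For illustration only, here is a minimal sketch of what a purely generic, texture-only “look for stringing” check might boil down to (this is my own toy example, not Bambu’s actual algorithm; the ROI and threshold values are made up). It also hints at why dark filament is hard: weak contrast means few edge pixels, so a tangle can slip under the threshold.

```python
# Toy "stringiness" heuristic: a tangle of thin strands produces far more edge
# pixels per unit area than a clean print surface does. Not Bambu's algorithm,
# just an illustration of a texture-only check with no knowledge of the model.
import cv2
import numpy as np

def spaghetti_score(frame_bgr: np.ndarray, roi: tuple) -> float:
    """Return the fraction of the region of interest covered by edge pixels."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Thin strands show up as edges; dark filament gives weak edges and gets missed.
    edges = cv2.Canny(gray, 50, 150)
    return np.count_nonzero(edges) / edges.size

# Hypothetical usage against a chamber-camera frame:
# frame = cv2.imread("chamber_frame.jpg")
# if spaghetti_score(frame, roi=(100, 150, 400, 300)) > 0.12:  # made-up threshold
#     print("possible spaghetti, send a notification instead of pausing")
```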


I see - no wonder it never detects huge blobs stuck to the printer carriage or when a whole part of the print has separated. It’s truly a great printer but, of all the touted things, I would say spaghetti detection has been quite underwhelming.

Agreed that one should be able to choose the corrective action, because we will all experience it differently. Because it’s under-sensitive for me, I would also want a knob to control/increase the sensitivity.

Has this been removed? You used to be able to set the sensitivity with which spaghetti is recognised.

If you read the wiki, BL states that spaghetti detection is based on the machine learning side of the AI: from the moment you first print it keeps adapting to false detections and improving on itself.

I can agree this seems to be the case. I have had the X1C for over a year, and at least the first 6 months were full of false alarms. Now I have it set to medium and never receive a false notice. I’d say I’m getting at least a 90% success rate on actual spaghetti/chute alarms, and lately it’s been the chute more so than the spaghetti.

I’m looking forward to seeing an “air print” detection implemented on the X1 series.

Still adjustable:
[screenshot: spaghetti detection sensitivity setting]


My understanding of the Wiki is that if User Experience is checked in Preferences, detection is performed on the printer using a regularly updated AI algorithm downloaded as needed from the Bambu cloud. (Pictures of your model are also uploaded to Bambu to “improve” the algorithm.)

With User Experience disabled, or in LAN-only mode, detection relies primarily on the AI algorithm built into the firmware. (That adds importance to regularly updating the firmware, via the cloud.) The printer can also “learn” as you print, but I think it forgets this additional learned knowledge if the printer is turned off, or maybe even if it just goes to sleep.

The Wiki says “If you are off the printer for a relatively long time (e.g. a whole night), you can set the sensitivity to low, so it’s less often to pause the printing for small defects.”

To me, that says a printer that is not in the improvement program or not connected to the internet has learned only from the prints made since it was last turned on, and maybe not even yesterday’s prints if it was left on overnight.


Thanks, you are right. I was looking in the wrong place.

From the Wiki text, the setup computes a bounding box and sends that image crop to the algorithm to detect failure, but it does not include the expected contour of the model or its cross section during inference. Perhaps it does so during training, but that’s unspecified. That means it’s not taking full advantage of the available data for each particular detection, which would explain why some obvious failures go undetected. Algorithms along these lines might include:
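
As one hypothetical direction (my own sketch, not anything the Wiki describes): render the expected silhouette of the current layer from the sliced model, segment the camera frame into a printed/not-printed mask, and compare the two. A detached part or a large blob drags the overlap down even when there is no stringing for a texture-only detector to latch onto. The masks, their alignment, and the threshold are all assumed inputs here.

```python
# Hypothetical model-aware check: compare the footprint we expect from the
# sliced layer against what the camera segmentation actually shows.
import numpy as np

def layer_iou(expected_mask: np.ndarray, observed_mask: np.ndarray) -> float:
    """Intersection-over-union between expected and observed print footprints."""
    expected = expected_mask.astype(bool)
    observed = observed_mask.astype(bool)
    union = np.logical_or(expected, observed).sum()
    if union == 0:
        return 1.0  # nothing expected and nothing seen: trivially consistent
    return float(np.logical_and(expected, observed).sum() / union)

def looks_failed(expected_mask: np.ndarray, observed_mask: np.ndarray,
                 min_iou: float = 0.6) -> bool:
    # A detached part, a big blob on the carriage, or pure air-printing all
    # drive the overlap down, even with no spaghetti texture in the frame.
    return layer_iou(expected_mask, observed_mask) < min_iou
```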