Importing and printing from Scanner

Does anyone have advice or procedures for printing from files captured with 3D scanning software such as Scaniverse? I created a very nice 3D model of a sneaker and attempted to export it as an .stl to Bambu Studio and Orca. However, the result was rough! Am I missing a step, or several?
Thanks

Did you perform any post-scan processing? It’s likely your scan has nice-looking texture or color, but the actual mesh is probably pretty bad. What does the mesh look like without the color or texture?

No! I attempted to import it directly into Bambu Studio. I am a complete novice. I was just about to open it in Fusion 360 and see what I could discover there, but again, I’m not competent yet.
Thanks


I can tell you that in my own experiments with photogrammetry, scanned models rarely work without extensive manual editing to fix holes in the mesh. In other words, you’re not doing anything wrong, but your expectations may be too high. This sh*t don’t work yet. The technology is too half-baked in its current stage, in my experience. Even Microsoft abandoned the tech back in 2019 with their Xbox Kinect technology. The science and engineering haven’t evolved enough for a user-friendly, fool-proof experience, despite the fact that some of the software costs hundreds of dollars.

I’ll wager that the problem is that you have an open manifold in your mesh. In other words, your enclosed solid has missing polygons in what’s supposed to be a “sealed object”. This forces the slicer to “guess” where the nozzle should move, and it will do its best to extrapolate.
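To make the “open manifold” idea concrete, here’s a minimal illustrative sketch (not any particular slicer’s algorithm) of how repair tools typically detect the problem: in a sealed mesh, every edge is shared by exactly two triangles, so any edge used only once marks a hole boundary.

```python
from collections import Counter

def is_watertight(faces):
    """Return True if every edge is shared by exactly two triangles.

    `faces` is a list of (i, j, k) vertex-index triples. An edge that
    appears only once belongs to the boundary of a hole, which is what
    forces a slicer to guess where the surface should be.
    """
    edge_use = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            # Sort so (1, 2) and (2, 1) count as the same edge.
            edge_use[tuple(sorted(edge))] += 1
    return all(count == 2 for count in edge_use.values())

# A tetrahedron is the simplest sealed mesh; drop one face and it leaks.
tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tetrahedron))        # sealed
print(is_watertight(tetrahedron[:-1]))   # one face missing: open manifold
```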

You could try bringing the model into one of the alternative file converters available all over the Internet and then re-exporting it as an OBJ file. OBJ is also a mesh format that the slicer understands, and you might get lucky: the manifold may get repaired by another program’s algorithm along the way.
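If you’re curious what such a conversion actually involves, here’s a rough pure-Python sketch of a binary-STL-to-OBJ translator. It’s purely illustrative; real converters do far more (tolerance-based vertex welding, hole filling, normals). One useful side effect even this toy version has: STL stores disconnected “triangle soup”, while merging duplicate vertices for OBJ gives downstream tools connected faces to work with.

```python
import struct

def stl_to_obj(stl_bytes):
    """Convert binary STL bytes to OBJ text, merging duplicate vertices."""
    # Binary STL: 80-byte header, uint32 triangle count, then 50 bytes
    # per triangle (normal + 3 vertices as float32, plus 2 attribute bytes).
    count = struct.unpack_from("<I", stl_bytes, 80)[0]
    verts = {}   # vertex tuple -> 1-based OBJ index (OBJ indices start at 1)
    faces = []
    offset = 84
    for _ in range(count):
        tri = struct.unpack_from("<12f", stl_bytes, offset)
        offset += 50
        idx = []
        for v in (tri[3:6], tri[6:9], tri[9:12]):  # skip the normal
            if v not in verts:
                verts[v] = len(verts) + 1
            idx.append(verts[v])
        faces.append(idx)
    lines = ["v %g %g %g" % v for v in verts]
    lines += ["f %d %d %d" % tuple(f) for f in faces]
    return "\n".join(lines) + "\n"

# Build a one-triangle binary STL in memory and convert it.
one_tri = (b"\x00" * 80
           + struct.pack("<I", 1)
           + struct.pack("<12f", 0, 0, 1,   # normal
                         0, 0, 0, 1, 0, 0, 0, 1, 0)  # three vertices
           + struct.pack("<H", 0))
print(stl_to_obj(one_tri))
```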

You mentioned you’re using Fusion 360. You can scale the model there just a tad and then re-export it as an OBJ or STL. Also, try editing the model in Mesh mode.

Alternatively, you can take the STL and repair it using the 3D Builder utility that is baked into Windows.


Thanks for the advice! That’s going to keep me busy! I appreciate the visuals and will give it a try.

BUSY??? LOL. If you haven’t done this before, you’re in for quite the adventure. Oddly enough, my very first photogrammetry experiment was scanning a simple boot, not unlike the shoe you’re trying to scan. In my case, I used a Nikon camera on a lazy Susan that I fabricated. I’ve since published a better turntable on Printables that uses an empty spool.

That experiment was over two years ago, and I was trying to see if I could import into Blender, which is as far above slicer technology as Microsoft Word is above Notepad. That exercise cost me the better part of a weekend, and it forced me to conclude that the tech just isn’t there. So be warned that you may be going on a wild goose chase. But if you succeed, please share how you did it.

Right now, I am experimenting with an old Microsoft Kinect scanner after seeing this video. My results suck!!! And BTW, so does this video. He never really created a decent model of his wife with any clarity. So if you look at his example, it will set your expectations as to how far you can take the current technology. BTW: all of the links he lists as free now point to paid products.

The video shows a great example of a broken mesh in this screengrab. If your shoe has this kind of defect in the mesh, don’t expect great results. Also, look at how low a resolution real-world objects achieve in this example.

The Kinect doesn’t have good enough resolution. I went down this path over a decade ago with ReconstructMe and a Kinect for PC.

My avatar image is a picture of a 1" high SLA print of a Kinect scan of my head, sitting in the post-print UV cure chamber. I set the scanner up on my desk, on top of my monitor, and then swiveled around 360º in my desk chair. But if you didn’t already know, you’d be hard pressed to guess it was me. So I abandoned this approach.

There are quite a few much better and inexpensive structured light scanners on the market now that significantly outperform Kinect.

Image on the left: an FDM print of my head using the Kinect. Image on the right: a “snapshot” scan (just one image, no actual “scanning”) taken with a Revopoint POP3 scanner. As you can see, the increase in resolution is significant.


Yes, this is all new to me. I bought the Bambu printer last month to inspire some creativity, but I want to get away from printing “door stops”, etc. I read this article on CNET, and I’m using my iPhone 15 Pro Max (I have a turntable, but it’s not required with the app I’m using) and the Scaniverse app, because it was free, and for now free is good until I know what I’m doing. The actual scan was easy and looked very good; however, it will apparently need a lot of cleanup to be 3D-print ready. I appreciate the insight and the tuition. I’m ignorant, but I can learn.


Thanks Again!


I admire your choice of photogrammetry as an intro to 3D printing creativity. As @RocketSled has indicated, the tech has improved, and I did have my eye on buying a 3D scanner. There are a bunch under $400 on Amazon, but they get mixed reviews on YouTube.

However, if you want to truly immerse yourself in this aspect of the tech, might I suggest first perusing some of the cell phone photogrammetry videos on YouTube. The sole reason I suggest this as a first step is that it will show you the shallow end of the 3D scanning gene pool. That experience will be a worst-case scenario, but it will teach you a lot about what you like versus what you don’t.

I tried building a semi-automated scanner using an Arduino and an old Droid. The challenge is that you really have to painstakingly capture as many shots from as many angles as possible in order to give the software enough imagery to construct a proper mesh. The best I could do required over 256 images, and I still had to painstakingly go into a mesh editor and patch the holes. The other challenge is controlling lighting: you need shadowless lighting that preserves contrast AND also provides enough background structure so the software can distinguish between object and background. Newer scanners claim to handle this better than a camera, but when you see their examples, I’m not convinced the margin of improvement is that great.

Yeah, unfortunately, photogrammetry is better suited to large objects, things scanners would never be able to handle. (There are a few packages out there designed to produce 3D models from pictures taken with a drone, something I intend to try out in the spring, since I fly drones.) And the models produced generally require a fair amount of tweaking and post-processing.

But there are many packages that can be used for free, and everyone has a camera, so it’s a great option if you can’t afford an actual scanner (or you want to model something large, like your house).

I had just seen a story on the “best” photogrammetry software for 2023. I went looking for it, and there are many reviews like this one. It’s worth checking out a few of these reviews, because you’ll get a good feel for what each package is better or less well suited to doing. Some are easy to use but limited in capability. Some are harder to use but can perform a lot more of the processing you’d otherwise need to do by hand. And there are variations in the resolution of the scans they produce.

Photogrammetry Software: Top Choices for All Levels - 3Dnatives!


Thanks for the link. I recall this site now. It was AliceVision Meshroom that I wasted a year one weekend on. It is still very much in the science-project phase. But hey, it’s free and open source.

You can tell when a software product is a labor of love made by a non-programmer… it’s written in Python. Don’t get me wrong, I use Python in my Raspberry Pi projects, but it’s the heir to BASIC, which is where I cut my teeth all the way back in 7th grade. When I got my first programming job out of college, nobody took a piece of software written in BASIC seriously. Like BASIC then, Python is the programming language for non-programmers. Serious desktop-level work is done in C, C++, or perhaps C#. Meshroom is a noble effort, but it is as stable as a container ship with an empty hold and all the containers on deck. It crashed on me more times than not, which is part of the reason I burned through so much time trying out photogrammetry. It really left a very bad taste in my mouth overall.

There are quite a few programmers who’d argue with you on that Python classification. But I’m not one of them. I hate it, because whitespace matters. I want to format my source code my way, but Python programs don’t work if you don’t indent things exactly the right way. I think making indentation part of the language syntax was a terrible design decision. It makes Python worse than LISP. Or FORTH. Even worse than PASCAL, if you ask me… :slight_smile:
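For what it’s worth, the whitespace complaint is easy to demonstrate in a couple of lines: the exact same statements stop compiling the moment the body loses its indentation.

```python
good = "if True:\n    x = 1\n"   # body indented: valid Python
bad  = "if True:\nx = 1\n"       # body at column 0: syntactically invalid

compile(good, "<demo>", "exec")  # compiles fine

try:
    compile(bad, "<demo>", "exec")
    indent_enforced = False
except IndentationError:         # a subclass of SyntaxError
    indent_enforced = True

print(indent_enforced)
```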


That’s all good, mate. We all start from somewhere. I am definitely a novice too :-). Asking for and getting the right help is definitely a good start ;-).