So I have tried out “Image to 3D Model” on MakerLab, and have come to the conclusion that, as it stands, it is not useful for 3D printing. I have also tried the underlying technology at Tripo3D, and it has the same issue: the resulting 3D mesh is not detailed enough to come out decent when 3D printed. Tripo3D itself is geared towards making assets for 3D games, so the models only look great when textured.
I have searched for an easy way to apply the texture as a displacement map onto the mesh, but there doesn’t seem to be a simple way to do this.
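The closest I can think of is scripting it yourself. Here is a rough Python sketch (trimesh + Pillow) of the idea, pushing vertices out along their normals by the texture’s brightness; the file names, the displacement scale, and the assumption that the mesh carries per-vertex UVs are all placeholders, and a low-poly mesh would need to be subdivided first so there are enough vertices to push around:

```python
# Rough sketch: use a texture's brightness as a displacement map.
# Assumes the loaded mesh has per-vertex UVs; paths and scale are placeholders.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("model.glb", force="mesh")      # flatten the scene into one mesh
tex = np.asarray(Image.open("texture.png").convert("L"), dtype=np.float32) / 255.0

uv = mesh.visual.uv                                  # (n_vertices, 2) UV coordinates
h, w = tex.shape
# Nearest-neighbour sample of brightness at each vertex's UV (V axis flipped)
px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
py = np.clip(((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)
height = tex[py, px]

scale = 1.0                                          # displacement strength in mm, tune to taste
mesh.vertices = mesh.vertices + mesh.vertex_normals * height[:, None] * scale
mesh.export("displaced.stl")
```

Even then, this only fakes surface relief from brightness; it can’t recover detail the AI never modelled in the first place.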
I talked to Tripo3D on their Discord; they acknowledged the issue and said that unless you have a full-color 3D printer, you are currently out of luck. Note that full-color printers are not the same as an AMS; they are $30k+ machines.
Thoughts?
I think it’s more useful for beginners, I guess… My wife loves Image to Keychain
Yeah, it might be good for people making simple designs like a Tesla truck or a brick, but most 3D scan programs are more work to clean up than they are worth.
Without the texture, I would suspect a brick would actually come out looking like a soft bar of butter instead of a brick.
If you just want a base that is easy to model, it’s more than fine; you can then work on it in Blender however you want, importing the texture as well. It’s just a question of modeling skills. Of course you will never get a finished product out of it.
I’ve done quite a bit of ‘3D scanning’, be it with laser, images or LiDAR.
You need enough information to get an acceptable scanning result…
And you need matching depth info for every pixel created in the mesh.
Image to 3D model works a bit like creating a relief image for a milling machine.
Based on the image at hand, the brightness of each pixel translates to depth.
Quite a bit of work, even today…
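To make that concrete, here is a minimal sketch of that relief idea in Python (numpy + trimesh); the file name, grid pitch and 3 mm maximum height are arbitrary, and it only builds the top surface, not a printable solid:

```python
# Minimal sketch: pixel brightness becomes Z height, like a relief or lithophane.
# File name, grid pitch and max height are placeholders.
import numpy as np
import trimesh
from PIL import Image

img = Image.open("photo.jpg").convert("L").resize((200, 200))
height = np.asarray(img, dtype=np.float32) / 255.0 * 3.0   # up to 3 mm of relief

rows, cols = height.shape
xs, ys = np.meshgrid(np.arange(cols) * 0.5, np.arange(rows) * 0.5)  # 0.5 mm grid
vertices = np.column_stack([xs.ravel(), ys.ravel(), height.ravel()])

# Two triangles per pixel quad
faces = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        faces.append([i, i + 1, i + cols])
        faces.append([i + 1, i + cols + 1, i + cols])

trimesh.Trimesh(vertices=vertices, faces=faces).export("relief.stl")
```

That only works because every pixel comes with its own depth value.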
The problem with doing this as a full 3D model is that there is no usable depth information at all.
The AI has to do the guesswork.
What Bambu uses is pretty much the same trick game programmers used in the early days to create their environments and characters.
You create a very basic and rather blocky 3D model and make it look great by overlaying higher-resolution images as a texture.
Take a mountain range in the distance…
Your eyes are incapable of judging the distance differences of those peaks and valleys you see.
It is your brain filling in the blanks based on experience.
If it is wide and open at ground level but narrow and high up in the far distance, it just has to be a valley of sorts…
And just based on logic those peaks are not just high but also at a further distance than the bottom parts of the mountain.
All you need in 3D is some triangles resulting in a crude interpretation of the mountain range.
And even printed it would not look too bad…
But to get those details you need a way to tell the slicer that there IS a difference.
Bambu’s AI tries to save computing power and prefers to match the texture to the crappy model.
Quite a stupid way of doing it considering they promote this tool for 3D printing, where the texture trick simply isn’t possible.
Getting a 3D model of a car is possible in many ways, even with a later-model iPhone…
But place your plain-coloured sedan in a clean parking lot and take a pic with just the car and the ground around it in the frame.
No tree, no people, no obstructions.
For YOU it is clear that there is a sedan in the pic…
Now let Bambu try to make it into a 3D model and see what happens to your nice car LOOOOL
Trying a face to get a print of your head is certain to give you a hilarious print, but not one looking like you…
It’s a gimmick to lure people in, and it won’t work as advertised…
Like so many things Bambu…
I concur. I tried it with a very simple image and it came out unusable.
I’ve tried the Image to 3D and, though the image is highly detailed, the GLB and STL files are seriously lacking in detail. I’ve seen low-poly models that have more detail than what Image to STL makes… I had high hopes, but, again, I am sadly disappointed!