There are some generative AI services that can create and export 3D models. These services often output common formats like .OBJ, .STL, or even .STEP files. Some notable examples include:
OpenAI’s Point-E: Generates 3D point cloud models from text prompts.
NVIDIA’s GET3D: Generates textured 3D meshes (trained on collections of 2D images rather than text prompts).
Alpha3D: Focuses on generating 3D models.
Spline: Generates simple 3D models and exports to various formats.
For more complex models, users can further refine them in CAD software.
From the provided description, I’d say Spline is out of the equation, unless someone wants to start simple. Alpha3D appears promising, both in terms of rendering quality and the cost of its monthly subscriptions [Pricing - Alpha3D]. NVIDIA’s GET3D also seems promising, and it appears one can get it from GitHub for on-prem use [GitHub - nv-tlabs/GET3D]. As for Point-E, going through their abstract, I got the feeling it would be wise to keep my distance from it. Nevertheless, for those willing to give it a try, it seems GitHub offers the opportunity to do just that [GitHub - openai/point-e: Point cloud diffusion for 3D model synthesis].
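For anyone curious what giving Point-E a try might actually look like, here is a rough sketch adapted from the text2pointcloud example notebook in the openai/point-e repo. I haven’t run it myself; the module names, model names, and sampler parameters come from the repo’s example and may have changed, and the prompt is just a made-up placeholder.

```python
# Rough sketch of text-to-point-cloud generation, adapted from the
# text2pointcloud example notebook in the openai/point-e repo.
# Assumes the repo has been cloned and installed (pip install -e .)
# and ideally that a CUDA GPU is available (CPU works but is slow).
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Text-conditioned base model plus an upsampler that densifies the cloud.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

prompt = 'an ornate carved pumpkin lantern'  # placeholder prompt
samples = None
for x in tqdm(sampler.sample_batch_progressive(batch_size=1,
                                               model_kwargs=dict(texts=[prompt]))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]  # point cloud with XYZ + RGB channels
```

Note that this only gets you a point cloud; the repo also includes a pointcloud2mesh example for turning that into an actual mesh, which is the part you’d need before printing anything.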
Personally, I haven’t tried any of them, but I’m tempted to have a go at Alpha3D. It somehow seems more refined and advanced than the rest, but then again, that’s just my personal opinion. Feel free to disagree.
I guess what comes to mind are those crafty-looking blow molds people buy to put by their door at Halloween as a decoration. Instead of buying one of those “we’ve seen it to death” jack-o’-lanterns, if AI could generate something unique and interesting, and maybe incredibly detailed, and then you 3D printed it, well then, bang, you’ve got a non-boring decoration that no one has ever seen before.
Or maybe it could be tasked with generating something potentially useful like some kind of print-in-place clamp or maybe a functional linear actuator (all the parts, minus the motor).
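Whatever the generator hands back, getting it into a slicer usually just means converting the mesh to .STL. Here is a minimal sketch using the trimesh library, assuming a hypothetical decoration.obj file exported by one of the services above and a made-up scale factor to bring it to millimetres:

```python
# Minimal sketch: convert an AI-generated .OBJ into an .STL a slicer can use.
# 'decoration.obj' is a hypothetical filename; requires `pip install trimesh`.
import trimesh

mesh = trimesh.load('decoration.obj', force='mesh')  # merge any sub-meshes into one
print('watertight:', mesh.is_watertight)             # printable models should be watertight
mesh.apply_scale(100.0)                               # hypothetical scale factor to get to mm
mesh.export('decoration.stl')                         # most slicers expect STL
```

If the watertight check fails, the model will likely need repair in a mesh or CAD tool before it prints cleanly.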