Answers
The triangles could have been generated by sampling a surface ... which is usually called tessellation.
They could have been generated programmatically by something like an OpenGL shader.
They could have been measured by something like laser scanning.
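To make the tessellation case concrete, here's a rough Python sketch (purely illustrative, the function names and the sphere example are made up for this post): it samples a parametric surface on a regular grid and splits each cell into two triangles, which is essentially the triangle soup an STL file ends up storing.

```python
import numpy as np

def tessellate(surface, nu=32, nv=16):
    """Sample a parametric surface f(u, v) -> (x, y, z) on a regular grid
    and split each grid cell into two triangles (a naive tessellation)."""
    tris = []
    for i in range(nu):
        for j in range(nv):
            u0, u1 = i / nu, (i + 1) / nu
            v0, v1 = j / nv, (j + 1) / nv
            p00, p10 = surface(u0, v0), surface(u1, v0)
            p01, p11 = surface(u0, v1), surface(u1, v1)
            tris.append((p00, p10, p11))   # each grid quad becomes two triangles
            tris.append((p00, p11, p01))
    return np.array(tris)                  # shape: (n_triangles, 3 corners, xyz)

# Example surface: a unit sphere parameterised by longitude and latitude.
def sphere(u, v):
    theta, phi = 2 * np.pi * u, np.pi * v
    return (np.sin(phi) * np.cos(theta),
            np.sin(phi) * np.sin(theta),
            np.cos(phi))

triangles = tessellate(sphere)   # the same kind of triangle soup an STL stores
```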
Reverse engineering the original surface from a tessellated surface is not often easy.
Constructing a quality surface from a scanned mesh is not always easy.
Making a 3D viewer that displays an STL file is VERY easy.
Basically, an STL file maps one-to-one onto the triangle primitives in OpenGL or DirectX.
There's a little work to be done if you want to optimize the individual triangles so the graphics card can render them with the highest performance: you would want to get rid of redundant point coordinates by indexing them, and you would want to organize them into triangle strips and triangle fans.
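As a rough sketch of the indexing step (assuming numpy; real loaders usually weld vertices with a tolerance rather than exact float equality, but this shows the idea), deduplicating the shared corner points turns the raw triangle soup into the indexed vertex and face arrays a GPU renders most efficiently:

```python
import numpy as np

def index_triangles(tris):
    """Turn STL-style triangle soup (n, 3, 3 floats) into an indexed mesh:
    a unique vertex array plus an (n, 3) array of indices into it."""
    corners = tris.reshape(-1, 3)                         # every triangle corner
    vertices, inverse = np.unique(corners, axis=0, return_inverse=True)
    faces = inverse.reshape(-1, 3)                        # vertex indices per triangle
    return vertices, faces

# Two triangles sharing an edge: 6 stored corners, but only 4 unique vertices.
soup = np.array([[[0, 0, 0], [1, 0, 0], [1, 1, 0]],
                 [[0, 0, 0], [1, 1, 0], [0, 1, 0]]], dtype=float)
vertices, faces = index_triangles(soup)
print(len(vertices), "unique vertices,", faces.tolist())
```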
I think there is a huge return for being able to render STL in view only mode .... for very little investment.
Large STL files are a bit hard to edit with Onshape's toolset; it works fine with small files. Either way, though, it's not the recommended approach.
The ability to import an STL to actually build it into a usable solid is a waste of time. Kernels like Parasolid are rubbish at handling facetted polygonal models. I personally don't think Onshape should pursue this approach. If they want to allow edits on imported STLs, then team up with Materialise and license the Magics tech (just like Autodesk did with Netfabb recently... well, they bought them).
Give us the tools to model anything using NURBS-based approaches or sub-d interfaces to NURBS, and allow importing, viewing, scaling and measuring from imported mesh formats (not just STL), and that covers 99% of the paying market.
Honestly, if your output is STL for printing, there are far better (free) mesh tools around.
On edit: the STL file had to be repaired before being CAD-ready in a 20K software package...
Not usable without repairing:
OK, that makes so much sense it hurts
The best sort of partnership between human and machine, each playing to their strength.
Muchos kudos!
Maybe next year they will downgrade routing into Pro as well.
Huh... Scanto3D is now in Pro? For some reason I thought it always was. Maybe they weren't selling enough Scanto3D seats. How is the performance on those imported point clouds in Pro?
Ah... I'm in the office this morning and checked my SolidWorks install. I have Premium and it's included there.
I haven't messed with point clouds in my Premium seat of SolidWorks, but maybe I'll have to play around now.
In my (admittedly limited) experience, they tend to convey shape about as well as a flock of starlings or a cloud of confetti.
The second method is to use the point cloud as a target to project curve points or subdivision nodes onto. This is the technique we use in Modo and TSplines.
Depending on your CAD system, you can also create curves that intersect with the point cloud itself. Rhino has commands that do this, and the scan software that NextEngine provides also does this (but not as well). I'm still experimenting with SolidWorks Scanto3D now that it comes with Pro!
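As a rough sketch of the "project onto the cloud" idea (assuming numpy/scipy; the tools named above do this far more robustly), each curve or subdivision control point can simply be snapped to its nearest neighbour in the scan:

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_to_cloud(points, cloud):
    """Project (snap) each control point onto a scanned point cloud by taking
    its nearest neighbour, using a kd-tree for fast lookups."""
    tree = cKDTree(cloud)
    _, idx = tree.query(points)
    return cloud[idx]

# Illustrative data: a noisy scanned cylinder and a couple of rough control points.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5000)
z = rng.uniform(0, 50, 5000)
cloud = np.column_stack([np.cos(theta) * 25, np.sin(theta) * 25, z])
cloud += rng.normal(scale=0.1, size=cloud.shape)            # simulated scanner noise

control_points = np.array([[26.0, 0.0, 10.0], [0.0, -27.0, 40.0]])
print(snap_to_cloud(control_points, cloud))                  # pulled onto the scan
```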
I can certainly see what you describe being useful, but (rightly or wrongly) it seemed to me what was being suggested most recently in this thread was to merely display the point cloud, as we currently can with png/jpg etc
If the analogy was accurate, that would leave us with no ability to use the point cloud other than purely as a visual aid: no measure, snap, infer, project onto, convert to surface, shrinkwrap, or whatever.
A 2D graphic is useful in that scenario because edges within the graphic boundary are defined and unambiguous (independent of viewing angle), but I'm struggling to imagine how that could be the case for the sorts of point clouds I've ever worked with.
In fact, we bought our NextEngine scanner for this very reason.
We were designing an injection moulded cover for a ceramic plate. We were given the plate (no data available). We designed the cover, built a prototype, and it fitted that plate perfectly. Problem was, it didn't fit most of the others in the production sample! The plates were made by hand, so the tolerances were wide.
So I bought the scanner, scanned 25 samples, then overlaid the clouds in VX (which we used at the time, and which was one of the very few CAD systems that handled scan data). We then changed the design and used the overlaid clouds to show (visually) to the customer (a big UK supermarket) that their ideas would not work and the whole premise of their business plan was flawed.
Didn't get any more work from them after that. Some people just don't like to be told their great plans are flawed, despite us saving them probably £100k in tooling and 10x that in a disastrous implementation. But hey, what do I know?
Rhino is also a great tool for this, as its plug-in architecture has some nice add-ons for it. You can also create curves through a point cloud at a specific plane. Very handy.
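A crude way to picture that "curve at a specific plane" trick (just a numpy sketch, nothing like as capable as the Rhino add-ons): keep the cloud points within a small tolerance of the cutting plane, then fit a section curve through them in your CAD tool.

```python
import numpy as np

def planar_section(cloud, origin, normal, tol=0.5):
    """Return the cloud points lying within `tol` of the plane defined by
    `origin` and `normal` -- the raw material for fitting a section curve."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    dist = (cloud - np.asarray(origin, dtype=float)) @ n    # signed distance to plane
    return cloud[np.abs(dist) < tol]

# Example: slice a (synthetic) scanned part at z = 20 with a 0.5 unit tolerance.
rng = np.random.default_rng(1)
cloud = rng.uniform(0, 50, size=(10000, 3))
section = planar_section(cloud, origin=[0, 0, 20], normal=[0, 0, 1])
print(len(section), "points near the cutting plane")
```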
more info here...
http://wiki.mcneel.com/rhino/reverseengineering
Consider that a decent quality scanner will set you back £30-40k, and apps like http://www.geomagic.com/en/products-landing-pages/re-designx-wrap come in at prices that make your eyes water. But they do work well.
But for our use, maybe scanning a few times a year on a NextEngine, we make do with Rhino or Modo.
- Bob
Well, except my source polymeshes are from CT scans of patients. That makes it pretty hard to model from scratch (not that I couldn't model the complete skull I did yesterday, but damn, that would be a lot of scans and weird planes and lofting...). Much easier to ask the PACS system to segment out a polymesh surface at a given Hounsfield unit (I did bone yesterday and skin as a second mesh) and let it grind for 10 minutes...
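For context, that segmentation step is essentially an iso-surface extraction at a chosen Hounsfield threshold. A rough sketch (assuming numpy and scikit-image, with the CT volume already loaded as a 3D array; the function name and thresholds here are just for illustration) would be:

```python
import numpy as np
from skimage import measure

def segment_isosurface(ct_volume, hounsfield_level=300.0, spacing=(1.0, 1.0, 1.0)):
    """Extract a polymesh at a given Hounsfield threshold from a CT volume
    (a few hundred HU roughly picks up bone). Returns vertex and
    triangle-face arrays, much like the polymesh a PACS export produces."""
    verts, faces, normals, _ = measure.marching_cubes(
        ct_volume, level=hounsfield_level, spacing=spacing)
    return verts, faces

# Illustrative stand-in for a real scan: a synthetic volume with a dense "bone" blob.
volume = np.zeros((64, 64, 64))
volume[24:40, 24:40, 24:40] = 1000.0        # fake bone at ~1000 HU
verts, faces = segment_isosurface(volume, hounsfield_level=300.0)
print(len(verts), "vertices,", len(faces), "triangles")
```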