Smooth Surface after Boolean
famadorian
Member
I created a hole using a boolean operation, and the hole looks like a polygonal surface. Why is it not a smooth NURBS surface?
Best Answer
joe_dunne (Onshape Employees, Developers)
You may be referring to the grey and blue overlap? That is due to two surfaces occupying the same space: there is a cylindrical surface and a solid part. It's just a display thing. If you turn the surface off, you will see a smooth hole.
Joe Dunne / Onshape, Inc.
Answers
IR for AS/NZS 1100
Internally, your surface is still smooth.
IR for AS/NZS 1100
With rasterization, the most widespread approach to graphics display, ALL geometry must be represented as a mesh of triangles before being rendered to the screen. Here is an image of what a cylinder's triangles can look like:
You might think, "That looks pretty blocky; how does it appear smooth when I rotate around it?" The answer is quite literally "a trick of the light". By assigning the correct surface normal at each triangle vertex, the system interpolates the surface normal at every pixel it draws and lets the lighting vary smoothly (rather than blockily) over the surface of the face, giving it a smooth appearance.
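To make that concrete, here is a minimal, self-contained Python sketch (not Onshape's actual tessellator or shader, just an illustration of the idea): it breaks a cylinder's side into triangles, stores the true radial surface normal at each vertex, and shows that diffuse lighting computed from interpolated normals varies smoothly across a facet.

```python
import math

def tessellate_cylinder_side(radius, height, segments):
    """Break the side of a cylinder into a strip of triangles.
    Each vertex stores the TRUE radial surface normal, not the
    facet normal -- that is what makes the shading look smooth."""
    vertices, normals = [], []
    for i in range(segments + 1):
        theta = 2.0 * math.pi * i / segments
        nx, ny = math.cos(theta), math.sin(theta)
        for z in (0.0, height):
            vertices.append((radius * nx, radius * ny, z))
            normals.append((nx, ny, 0.0))
    triangles = []
    for i in range(segments):
        a, b, c, d = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
        triangles += [(a, c, b), (b, c, d)]
    return vertices, normals, triangles

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def lambert(normal, light_dir):
    """Simple diffuse lighting from an (interpolated) normal."""
    return max(0.0, sum(a * b for a, b in
                        zip(normalize(normal), normalize(light_dir))))

verts, norms, tris = tessellate_cylinder_side(1.0, 2.0, segments=12)

# Interpolate between two adjacent vertex normals across one facet,
# as the GPU does per pixel: brightness varies smoothly, with no
# visible jump at the triangle edge.
n0, n1 = norms[0], norms[2]
light = (1.0, 0.4, 0.3)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    n = tuple((1 - t) * a + t * b for a, b in zip(n0, n1))
    print(round(lambert(n, light), 3))
```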
Some interesting resources:
https://en.wikipedia.org/wiki/Graphics_pipeline
https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/shading-normals
And, for the interested, some graphics processes that do not rely on tessellation (a small sketch of the ray-marching idea follows these links):
https://en.wikipedia.org/wiki/Ray_tracing_(graphics)
http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/
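For a feel of how the ray-marching link above works, here is a minimal sphere-tracing sketch in Python. The scene, step limits, and tolerances are illustrative assumptions, not anyone's production renderer:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere's surface."""
    return math.dist(p, center) - radius

def ray_march(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance the SDF
    guarantees is free of geometry. No triangles anywhere."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

hit = ray_march((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
                lambda p: sphere_sdf(p, (0.0, 0.0, 0.0), 1.0))
print(hit)  # ~4.0: the ray starts 5 units from the center of a unit sphere
```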
Bonus! A simple answer to "Why do small complicated features slow down my framerate?": they tessellate into many tiny triangles, and triangle count is what the display pipeline pays for.
Real-time ray tracing requires some beefy hardware:
http://madebyevan.com/webgl-path-tracing/
The tradeoff is that it's fuzzy while you're rotating, and then resolves to look nice over time (not exactly ideal for CAD).
https://www.youtube.com/watch?v=1IIiQZw_p_E
Notice how they rotate, zoom, move parts, and change visual characteristics. Notice what you do not see? Modeling. It's not an oversight, I suspect.
Hope we did not go off on a tangent too much.
Just to note that ray tracing itself does not avoid tessellation; in fact, almost every ray-tracing renderer out there still uses tessellated geometry (though, as Joe points out, they can handle a lot more of it, so you can typically crank up the tessellation). However, ray tracing as a process is independent of the geometric representation used, whether it is polygons, triangles, NURBS, subdivision surfaces, distance fields, or something else.
Ray tracers implement a ray-object intersection algorithm, where the "object" part could be a variety of things, but in most cases today that object is still a triangle (even polygons are usually tessellated into triangles before reaching the ray tracer). There are some exceptions: for example, VRED has the capability to evaluate ray-NURBS intersections directly instead of ray-triangle intersections, and so can operate on NURBS geometry without the need for tessellation.
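As an illustration of what "ray-triangle intersection" means in practice, here is a sketch of the well-known Moller-Trumbore algorithm in Python. This is a textbook formulation, not VRED's or any particular renderer's implementation:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: returns distance t along the ray, or None on miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# Ray fired straight down -z into a triangle lying in the z=0 plane:
print(ray_triangle((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0
```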
This sounds wonderful, but in practice it ends up being too slow and memory-intensive to be useful in all but very specialised cases (for example, reflector design for headlamps). Years ago we implemented a ray-NURBS intersector, but when we benchmarked it against tessellating the same geometry to an insane level (so you couldn't really tell the difference), the tessellated version still came out on top in both speed and memory. Ray intersection with higher-order primitives is still an active area of research and could be very interesting if these problems are ever overcome; for now, for the vast majority of cases, you're still going to be using triangles.
As Joe also mentioned, ray-tracing renderers (of which there are many types, of course) are not so sensitive to triangle count, thanks to their use of acceleration structures, but rather to pixel count. In this respect the new AI denoising techniques (which we also make available in RealityServer) are extremely useful and allow better scaling with respect to image resolution.
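A back-of-the-envelope cost model (illustrative assumptions only, not any particular renderer's numbers) shows why pixel count dominates: primary-ray count depends only on resolution, while an acceleration structure makes per-ray cost grow only roughly logarithmically with triangle count.

```python
import math

def rays_per_frame(width, height, samples_per_pixel):
    """Primary-ray count depends only on resolution and sampling."""
    return width * height * samples_per_pixel

def bvh_visits_per_ray(triangles):
    """Rough model: with an acceleration structure, each ray visits
    about O(log n) nodes instead of testing all n triangles."""
    return math.log2(triangles)

# 10,000x more triangles costs only a small constant factor more:
for tris in (10_000, 1_000_000, 100_000_000):
    rays = rays_per_frame(1920, 1080, samples_per_pixel=1)
    visits = rays * bvh_visits_per_ray(tris)
    print(f"{tris:>11,} tris -> ~{visits:,.0f} node visits/frame")
```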
In terms of interactive changes, we get pretty good results with this in our ray-tracing engine, though I wouldn't want to be trying to move objects around at 30fps. Rebuilding acceleration structures is what makes interactive changes in ray tracers slow, and the RTX RT Core in-silicon technology accelerates that building in addition to casting rays, so there is some hope there. We are currently working on RTX support for RealityServer, so this is a very active topic right now.
Thanks for all the extra info! I learned something new today!