normalize() vs @normalize() std library speed difference. You won't believe the difference.

Jacob_Corder Member Posts: 137 PRO
I spend a lot of time profiling FeatureScript and optimizing code to reduce execution time. Normally I need to use the @ builtin functions, as they are much faster. I have my own normalize function that is about 3x faster than Onshape's because it just uses arrays and does not use ValueWithUnits. Then I found that there is an @normalize builtin. So I gave it a shot, and wow, the speed difference is insane.

normalize() took 1.29 seconds for 19,906 calls; @normalize() took only 79 ms. That is over 16x faster...
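For reference, my own array-based normalize (the roughly 3x one mentioned above) is along these lines; this is a simplified sketch, not my exact code, and the function name is just for illustration:

// Simplified sketch of a unit-free normalize: it works on a plain 3-element
// array of numbers instead of a ValueWithUnits Vector, so no unit bookkeeping
// happens per operation.
function fastNormalize3d(a is array) returns array
precondition
{
    size(a) == 3;
}
{
    const len = sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
    return [a[0] / len, a[1] / len, a[2] / len];
}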

I just want to understand why the standard library normalize() function does not call @normalize. It would significantly reduce the compute Onshape spends everywhere. Sometimes normalize is 80 to 90% of a feature's execution time, because it is called by anything involving a plane, line, or coordinate system, but mostly by evEdgeTangentLines and evFaceTangentPlanes, which both call normalize a lot.

I assume there is a reason, as @normalize has been around as far back as version 46.

I understand that not many of you call normalize directly, or have it called 20,000 times through another feature, but globally, every second, I would bet the call count is much higher than that.

Comments

  • _anton Member, Onshape Employees Posts: 410
    Right, so normalize() is a series of function calls and math in FS, while @normalize() is just a few floating-point operations in a black box. With FS being an interpreted language, the latter is, of course, faster. The overhead of FS is large compared to the very simple builtin; for more complex builtins, the difference should be less noticeable.
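    Roughly speaking (paraphrasing, not the actual std source), the library side is a chain of interpreted calls along these lines:

    // Paraphrased sketch, not the actual std implementation: every call here is
    // interpreted FS (norm(), squaredNorm(), sqrt(), the / operator looping over
    // components), while @normalize does the equivalent arithmetic natively.
    function normalizeSketch(v is Vector) returns Vector
    {
        return v / norm(v); // norm(v) is roughly sqrt(squaredNorm(v)), more FS calls
    }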

    I seem to remember that, a few years ago, there were document upgrade issues with switching to the builtin, for some deep floating-point-operation-order reason. For new features, you can use the builtin, though that does come with all the caveats of FS calls we don't explicitly make public.
  • Caden_Armstrong Member Posts: 173 PRO
    I did some testing to look at other functions and how builtin functions compare in performance.
    Normalize probably has the biggest gain/simplicity ratio; my test even saw a 46x improvement.
    The next biggest improvement is the @constructPaths function (marked internal only), which is 15x faster than the constructPath option.
    I still need to go through all the op functions and see which are worth the gains.

    What is really interesting is the speed difference in how objects are constructed.
    When using @evPlane, for example, the result does not come back with units, so you have to convert it to a Vector with units yourself, and how you do that changes the speed.

    var origin = vector(plane.origin[0] * meter, plane.origin[1] * meter, plane.origin[2] * meter);
    vs
    var origin2 = (plane.origin as Vector) * meter;

    The second way is 3x slower, and skipping the *meter makes the first one another 3x faster.
    It's an extra 300 ms in a loop that runs 10,000 times, but if you are really looking for incremental gains, skipping units and object types can really add up over the course of a function.
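    For reference, the unitless variant I mean is simply this (only appropriate when the downstream code expects plain numbers rather than lengths):

    var origin3 = vector(plane.origin[0], plane.origin[1], plane.origin[2]); // no units attached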

    www.smartbenchsoftware.com --- fs.place --- Renaissance
    Custom FeatureScript and Onshape Integrated Applications
  • Jacob_Corder Member Posts: 137 PRO
    @caden_armstrong
    It is faster to multiply each origin component by meter individually because operator* on a Vector uses a for loop, and it checks the size in the loop condition (i < size(vector)) instead of caching it in a variable first (var sz = size(vector)). It's a small amount, but it adds up along with the cost of the loop itself. I use my own function to handle this: array->toVectorWithUnits3D() removes the for loop and handles it all. Since vectors can theoretically have any dimension, Onshape has no choice but to loop inside the operators.
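    In sketch form it is just this (simplified, not my exact code; the meter assumption is mine here):

    // Simplified sketch of the helper: fixed at three components, so there is
    // no loop and no repeated size() call.
    function toVectorWithUnits3D(a is array) returns Vector
    {
        return vector(a[0] * meter, a[1] * meter, a[2] * meter);
    }
    // used like: plane.origin->toVectorWithUnits3D()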

    One interesting thing I found is about planeFromBuiltIn (called by evPlane and evFaceTangentPlanes): it normalizes the normal and x vectors. I ran a test over many parts and bodies, sampling their faces; it probably created over 100k planes. I checked the values before and after the normalize() calls, and not one plane's normal or x changed. So 60 to 80% of the time to create a plane from the ev functions is wasted doing nothing. I cannot say for certain that this is always the case, but it was in my tests.
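    In sketch form, the per-plane check was essentially this (simplified; the tolerance value is arbitrary):

    // Simplified version of the before/after test: does normalize() actually
    // change the vector at all? If not, the work was redundant for that plane.
    function normalizeIsNoOp(v is Vector) returns boolean
    {
        return norm(normalize(v) - v) < 1e-10;
    }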

    I really do love how they set up the std library with operators and the transformations built in. It really makes it easy to create features and just learn this stuff.