Document Size, Complexity and Performance
edward_petrillo
Member Posts: 81 EDU
I'm working in a document containing over 30 tabs, with two large Part Studios (one with >120 parts, the other with >75 parts), a high-level assembly with a total of about 300 instances in two subassemblies, and about a dozen imported parts. This is the largest document I've ever assembled, after working with several documents approximately one-third this size. About 90% of my work sessions with the large document are characterized by what I consider slow to adequate performance:
- Switching between tabs, or opening a tab in a new browser tab, can take from 15 seconds to a minute or more.
- Editing dimensions in a sketch can take over 15 seconds to update.
- Waiting for the "Insert" dialog to populate in order to add a derived part or add to an assembly can take 15-30 seconds.
- Waiting for the document to rebuild after rolling back or editing a feature high in the feature tree can take a very long time.
Some work sessions (one in ten?) are so slow that the program cannot keep up with the most rudimentary sequence of mouse clicks, such as selecting sketch entities for an extrude or other feature, and my normal rhythm of work leads to mistakes that must then be corrected. Switching between tabs often triggers an "Aw, snap" window from Chrome and yet another refresh of the browser. As a global estimate, I figure one-half to two-thirds of my time is spent waiting for the program to respond to my actions. It's hard to make direct comparisons, but performance seems considerably worse for this document than for smaller ones. These observations are consistent across an older desktop, a newer one, and a brand-new Chromebook. [My usual location gives me 30 Mbps upload/download speeds.]
How complex are your documents, and how does your Onshape performance compare with my experience? Onshape is selling us the single document paradigm, multipart modeling and the benefits of cloud computing. I'm pretty well sold on these innovative concepts, but my enthusiasm is slowed by the suspicion that performance could be better.
How should Onshape performance track with document size and complexity? What's the balance between server response and browser response? What document attributes affect performance most? What strategies can be employed to optimize performance when one is building a large, complex model?
Comments
I did a similar project in SolidWorks before, on a slow Pentium 4 with a passively cooled graphics card. While it slowed down during the operations directly connected with the grid, all the rest worked fine.
In these more complex jobs I have started to spread the work over several documents: I will have a top document that holds the G.A. and drawings, then another document for each sub-system. Using linked documents and labels can improve performance quite a bit.
My design has a number of complicated 'master sketches' at the very top of the feature tree that are related to pretty much every part of the design. However, editing or adding new features at the bottom (new end) of the tree is still very slow, which is puzzling to me. Other issues have started to appear: I often have to wait 15-30 s for the section view dialog box (Shift-8) to work, or for the measurement dialog to show up. When I load 2D drawings, or refresh them (yellow circle/arrow) after making an edit, certain drawing views are missing and don't show up for 30+ seconds, even though all the markup (dimensions, etc.) is visible.
Onshape has been investigating the issue, but so far their response is that this experience would be typical in any CAD environment, cloud or otherwise. I'm skeptical of this answer: can anyone else confirm it?
Going forward I will be trying to do two things: 1) figure out how to bring my current design back to a workable state, and 2) educate myself on how best to avoid these problems in the future.
I would greatly appreciate any tips anyone can share on the above, though I admit I haven't had time to put in effort here myself.
I don't recall encountering such limitations with desktop CAD software (Inventor/SolidWorks), and unless I am able to figure out 1 and 2, I may have to move out of the cloud.
@michael3424 - I haven't used a lot of text so I can't comment on your issue, but what is your total number of features?
You could try deriving your master sketch into several Part Studios so that each Part Studio has less work to do, or maybe consider moving some tabs into another Document. Please be aware that Document size in MB has no bearing on load times since the only internet traffic is graphics and UI. Onshape support will of course help you resolve the issue.
https://www.onshape.com/cad-blog/what-hardware-gives-you-the-best-onshape-experience
Even though there are a few things you can tune for better performance, I surely hope Onshape does everything in its power to make it faster and faster over time.
Additionally, hitting this performance roadblock awakened my interest in In-Context Editing. This capability makes it much easier to separate a model into smaller Part Studios and still exploit relationships among all parts and features by using ICE in a top-level assembly. A complex layout sketch in a top-level assembly can be referenced while editing any of the studios in context. I'm using this approach as a simpler alternative to @NeilCooke's suggestion above.
Regarding in-context editing and breaking up the design into multiple part studios, I will need to educate myself on best practices - is it something that can be done 'organically' as I'm working on a brand-new design? In other words, do I need to already have a pretty good idea of the design logic so that I can structure the CAD correctly, or can I kind of figure it out as I go? Do I need to have a complete vision of how the part studios will interact, what features will be related to each other across different part studios, etc? I spend a lot of time thinking about my designs and sketching/calculating on paper or in throw-away sketches before starting CAD, but I am not used to thinking through the CAD layout beforehand... perhaps I'm overestimating what's required here.
My second major question is: if I learn and follow best practices for laying out CAD in future designs, will I continue to see issues like this when the design reaches this size? Does Onshape or anyone else have a large public model that I can copy and play around with to prove to myself that this won't be an issue?
Thumbs-up on the idea of a complexity score, @edward_petrillo
I (maybe foolishly) started to design a whole aircraft in one document.
My biggest Part Studio has 603 features and 239 parts right now... and that is with complexity already cleaned away (placeholders for recurring sub-assemblies and other tweaks).
One problem is the slow connection we have at work (yeah, Germany has some slow spots... and we found one).
Here we have 16 Mbit/1 Mbit DSL. At home I use 200 Mbit/32 Mbit cable; that changes a lot, but not everything.
Onshape performance sometimes seems to vary from day to day. But this impression could be related to support changing something with my account several times, e.g. cache adjustments.
My design is very space- and interference-driven, so I got used to doing a lot in one Part Studio. Dividing it up would be a big pain. That relates to @dylan_gunn's first major question on how to structure things correctly.
I wanted to compare the performance of Firefox and Chrome, because Firefox is said to be better for big documents thanks to better RAM allocation. That didn't work: the newest Firefox, beta and non-beta, crashes while loading this big document on two different machines.
So I'm working in Chrome, 64-bit of course.
We are also working on some changes that will make global performance more consistent, but oftentimes it is your local internet connection that gets congested. We invest a lot of time and money in making sure that our servers are never overloaded (we automatically add capacity as needed).
Stay tuned. We're making Onshape better every day and we're a long way from being done. :-)
Hello John,
That wasn't meant as an attack. I am pleased with Onshape, your customer care, and the box of goodies you open up every few weeks.
The only thing I perhaps miss is some advice on what we as customers could do, and which potholes we could avoid while working in Onshape.
Maybe your team is now learning where the culprits are and how to avoid them? Should I divide up my document because I have been doing it wrong until now, or is your focus on accelerating your system?
Splitting up your work across multiple documents works well, especially if you have definable sub-assemblies. Normally I will have a top-level and drawing document and a separate document for each sub-assembly. Using linked documents and in-context, you can handle workflow across multiple documents in a controlled and logical manner.
Note that you don't need to have a plan at the start about how you will split up your work. You can create sub-assemblies, move tabs to other documents, etc. after the design is in progress, and you will still see an improvement.
The workflow is similar to creating a user library [see https://forum.onshape.com/discussion/5730/reusing-parts#latest for tips]. This comes from trial and error, and I would still like to see a webinar from Onshape on best practices and methods for handling large document sets, showing linked documents, revision control, etc.
@john_rousseau & others
How big an effect does document history have in these cases (I don't mean feature history, but version history)?
We have all seen how a 300 MB document can shrink to 3 MB when making a copy that purges the history data.
If that has any effect, then we should have a way to purge redundant history and keep only what we actually might need.
For some reason, I have noticed more performance issues in the evening (GMT+2) than daytime - which is good for my work but may be related to amount of simultaneous users worldwide.
Our ultimate goal is that users would have a good modeling experience without ever having to think of the details I'm about to share, but we are not there yet.
1. @3dcad Normally the size of document history does not affect performance, except for the performance of pulling up the version graph.
2. Having multiple elements in the same workspace might cause a slowdown in loading the document and, in the case of intricate dependencies, in editing. If you start feeling such a slowdown, it might be time to think about moving stabilized parts of the design into a separate document.
3. Modifying a feature at the beginning of the feature history causes a full regeneration of the Part Studio. It is better if the layout sketch contains only necessary data.
4. Judging how much to do in the same Part Studio can be hard. Most of the time when loading a Part Studio you don't have to wait for complete regeneration, because we have pre-computed data around, but in the worst-case scenario a complete regeneration might be needed. The length of the feature list itself, or the complexity of pre-existing geometry, should not cause a slowdown in adding new features, but often it does. We are working to improve this.
5. If you are using the derived part feature, think of it as regenerating the Part Studio you derive from (all of it, even if all you use is a sketch at the beginning of its history) whenever the deriving Part Studio is regenerated. It is best to keep the Part Studios you derive from simple and to change them infrequently. You don't want long chains of derivation (B derived from A, C derived from B), while having multiple derives is fine (B derived from A and C derived from A). Having two derived features deriving from the same Part Studio is also a performance burden; it is better to select all the objects you need in one feature.
6. It is possible to hurt your Part Studio performance by adding small details to a large part (e.g. 10 tiny holes in a block). Here it is not so much geometry regeneration as tessellation and rendering that suffer. It is a good idea to leave holes, text logos, etc. to the very end of modeling. The advice we often give customers is to suppress such features while modeling and un-suppress them at the very end.
7. A face pattern works faster than a body pattern + Boolean, and either of them is faster than a feature pattern. A feature pattern means that all the features being patterned are regenerated (number of instances) times.
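Point 5 can be made concrete with a toy cost model. This is only a sketch of the idea, not Onshape's actual regeneration engine, and the studio names and costs are made up: it assumes regenerating a Part Studio pays its own feature cost plus the full cost of everything it derives from, so chained derives compound while parallel derives from one source do not.

```python
def regen_cost(studio, costs, derives):
    """Total regeneration cost of `studio` under the toy model:
    its own feature cost plus the full cost of every studio it
    derives from (recursively)."""
    return costs[studio] + sum(regen_cost(src, costs, derives)
                               for src in derives.get(studio, []))

# Hypothetical feature costs for three Part Studios.
costs = {"A": 10, "B": 5, "C": 5}

# Chained: B derives from A, and C derives from B.
chained = {"B": ["A"], "C": ["B"]}
# Parallel: B and C each derive directly from A.
parallel = {"B": ["A"], "C": ["A"]}

print(regen_cost("C", costs, chained))   # 20: rebuilding C drags in B and A
print(regen_cost("C", costs, parallel))  # 15: rebuilding C drags in only A
```

Under this model, flattening the chain (deriving C from A instead of from B) removes B's cost from every rebuild of C, which matches the advice above to avoid long derivation chains.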
1. Break Up Part Studios
If there is no reason to model the parts together, don't. Ask yourself: how much interaction between parts do you really have? If it's only one or two features, many times these can easily be added after the model has been derived in. Spending some time before starting to lay out which parts should be in which Part Studios is worth it for more complex designs. In-context assembly design can also help with this. In general, the fewer features in a Part Studio, the faster it is. However, this depends heavily on the complexity of each feature, which leads me to the next point.
2. Avoid Complex Aesthetic Features
This is somewhat relative, I realize, but oftentimes features are added purely for aesthetic purposes. Some examples would be creating threads with a helix and sweep. Another common culprit is extruded text. But probably the most common is huge patterns. All of these are particularly bad because they can have a big impact on both rebuild time and graphics performance, and they are very easy to create. You can literally go from a very simple part to a very complicated part in one feature (in a few seconds). Ask yourself: how important is it that I see that while working every day? Is it critical to the creation of other features?
3. Use Assemblies
Do NOT model multiple instances of a part in a Part Studio. This is an easy way to create needless complexity. Instances of the parts should be inserted at the assembly level, where they are handled much better. So to be clear, only model one of each part in your Part Studio. Then insert all of the instances of that part in the assembly.
Another thing I wanted to address was the questions about connection speed. Onshape does not need an extremely fast connection to work. However, you do need a consistent connection. If your internet drops off and on frequently this can lead to disconnects to the Onshape servers and obviously cause performance problems. Filing a ticket with support using the feedback tool can help troubleshoot these issues.
I realize there are many out there who will read this and say "I don't care, make it faster." And we are. As @john_rousseau mentioned we are actively working on (and will forever work on) making Onshape faster.
1. As far as "small details" go, explicitly-modeled threads are particularly bad, as they increase both the tessellation and the B-rep complexity significantly. Consider putting them in last and modeling with them suppressed. Cosmetic thread support is coming.
2. If RMB-rotating a small Part Studio is slow or choppy, the problem is graphics on the client; it is not the network and not our servers. Go to https://cad.onshape.com/check and make sure you see what you expect to see (e.g. your discrete GPU as the GL renderer if you're on a laptop).
Most of my significant performance issues relate to imported models where I can't suppress any features. And yes, they have a lot of unnecessary details.
Any tips for making them faster?
I'm sure someday computing power will exceed real-world demands, but while waiting for that, it would be a really nice feature to 'lighten' models automatically.
For example: I model a detailed drill-box with all the bells and whistles, then I mount it into a CNC machine, which goes into a production line, which goes into a factory layout... you get the point. In the factory layout the drill-box can be just a dummy box, but I would like to have only one source of truth for each piece instead of maintaining light copies for better performance.
You can try the Delete face feature (smart selection might be very helpful) to simplify imported geometry.