🎯 𝗛𝗼𝘄 𝗗𝗼 𝗬𝗼𝘂 𝗖𝗵𝗼𝗼𝘀𝗲 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗙𝗘𝗔 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲?

𝘛𝘰𝘰 𝘮𝘢𝘯𝘺 𝘰𝘱𝘵𝘪𝘰𝘯𝘴. 𝘛𝘰𝘰 𝘮𝘶𝘤𝘩 𝘮𝘢𝘳𝘬𝘦𝘵𝘪𝘯𝘨. 𝘛𝘰𝘰 𝘧𝘦𝘸 𝘴𝘪𝘥𝘦-𝘣𝘺-𝘴𝘪𝘥𝘦 𝘤𝘰𝘮𝘱𝘢𝘳𝘪𝘴𝘰𝘯𝘴.

In this post, I walk through a structured decision-making framework for evaluating 𝗙𝗶𝗻𝗶𝘁𝗲 𝗘𝗹𝗲𝗺𝗲𝗻𝘁 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 (𝗙𝗘𝗔) 𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲, especially when 𝗻𝗼𝗻𝗹𝗶𝗻𝗲𝗮𝗿 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 are a top priority.

𝗛𝗲𝗿𝗲 𝗶𝘀 𝗮 𝗽𝗲𝗲𝗸 𝗶𝗻𝘁𝗼 𝘄𝗵𝗮𝘁 𝗶𝘀 𝗶𝗻𝘀𝗶𝗱𝗲
• Key technical capabilities: multi-physics, nonlinear solvers, contact handling, geometric nonlinearity
• Software interoperability, automation, and HPC scaling
• Community adoption, validation benchmarks, and licensing models

This post will help engineers, analysts, and researchers who want a clearer picture before investing in FEA software. I also cover advanced capabilities such as:
• MPC-based mesh tying
• Large-strain formulations
• Adaptive meshing and scripting interfaces
• User-defined materials and solver tuning

One way to make the comparison concrete is a weighted scoring matrix; see the sketch after this post.

💬 𝗪𝗵𝗮𝘁 𝗼𝘁𝗵𝗲𝗿 𝗰𝗼𝗻𝘀𝗶𝗱𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝘄𝗲 𝗶𝗻𝗰𝗹𝘂𝗱𝗲 𝘄𝗵𝗲𝗻 𝗱𝗲𝗰𝗶𝗱𝗶𝗻𝗴 𝗼𝗻 𝘁𝗵𝗲 𝗯𝗲𝘀𝘁 𝗰𝗵𝗼𝗶𝗰𝗲 𝗼𝗳 𝗙𝗘𝗔 𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗳𝗼𝗿 𝗼𝘂𝗿 𝗴𝗼𝗮𝗹𝘀? 𝗪𝗵𝗮𝘁’𝘀 𝘆𝗼𝘂𝗿 𝗴𝗼-𝘁𝗼 𝗙𝗘𝗔 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗳𝗼𝗿 𝗻𝗼𝗻𝗹𝗶𝗻𝗲𝗮𝗿 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀—𝗮𝗻𝗱 𝘄𝗵𝘆?

Let’s share practical experience from the field: aerospace, automotive, civil, energy, biomechanics, academia, research, and more. Comment below to continue the discussion.
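A minimal sketch of such a weighted scoring matrix in Python. The criteria weights, tool names, and scores below are illustrative placeholders to replace with your own benchmarks, not recommendations for any real product:

```python
# Minimal weighted-scoring sketch for comparing FEA tools.
# All criteria, weights, and scores are illustrative placeholders.

criteria = {                 # criterion -> weight (weights sum to 1.0)
    "nonlinear_solvers": 0.30,
    "contact_handling":  0.20,
    "multiphysics":      0.15,
    "scripting_api":     0.15,
    "hpc_scaling":       0.10,
    "licensing_cost":    0.10,
}

# 1-5 scores per candidate, filled in from your own evaluation benchmarks.
candidates = {
    "Tool A": {"nonlinear_solvers": 5, "contact_handling": 4, "multiphysics": 3,
               "scripting_api": 4, "hpc_scaling": 5, "licensing_cost": 2},
    "Tool B": {"nonlinear_solvers": 3, "contact_handling": 3, "multiphysics": 5,
               "scripting_api": 5, "hpc_scaling": 3, "licensing_cost": 4},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of criterion scores."""
    return sum(criteria[c] * scores[c] for c in criteria)

# Rank candidates from best to worst overall score.
for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The value of the exercise is less the final number than the forced conversation about weights: a team that must agree nonlinear solvers are worth three times licensing cost has already clarified its priorities.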
Engineering Simulation Tools Overview
-
Advancing Seismic Analysis with Nonlinear Structural Simulation in ELS 🌍🏗️🔬

In earthquake engineering, linear analysis often falls short when predicting how structures behave under extreme seismic events. This is where nonlinear structural seismic analysis powered by Extreme Loading for Structures (ELS) makes the difference.

🔹 Why Use Nonlinear Seismic Analysis in ELS?

✅ Captures Progressive Damage & Collapse – Unlike traditional methods, ELS simulates cracking, crushing, and large deformations to show how structures truly behave during earthquakes.

✅ Beyond Elastic Limits – Buildings don’t simply "bounce back" after a quake. Because ELS uses solid-element modeling, it automatically captures plastic hinges, material degradation, and structural failure mechanisms for a more realistic response.

✅ Better Retrofitting & Resilience Planning – Engineers can test retrofit strategies in a virtual environment to improve performance before investing in real-world solutions.

✅ Full-Structure Collapse Simulations – Instead of relying on assumptions, ELS enables true nonlinear dynamic analysis, providing insight into collapse mechanisms and structural vulnerabilities.

As seismic risks continue to rise, advanced simulation tools like ELS are essential for designing structures that are not only code-compliant but truly earthquake-resilient.

#SeismicEngineering #ExtremeLoading #StructuralAnalysis #NonlinearAnalysis #ResilientDesign #EarthquakeSimulation #StructuralResilience
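The core idea that a yielding structure accumulates permanent deformation, which a linear-elastic model can never show, can be illustrated with a toy single-degree-of-freedom model. This is a generic sketch only, not the proprietary ELS formulation, and all numbers are made up:

```python
import numpy as np

# Toy elastic-perfectly-plastic SDOF oscillator under a synthetic ground
# motion. Generic illustration only -- NOT the ELS element formulation.
m, k, c = 1.0e3, 4.0e5, 2.0e3          # mass [kg], stiffness [N/m], damping [N·s/m]
fy = 8.0e3                             # yield force [N] -> yields at 20 mm drift
dt, n = 1.0e-3, 5000                   # 5 s of response at 1 ms steps
ag = 3.0 * np.sin(2 * np.pi * 2.0 * np.arange(n) * dt)  # 2 Hz, 3 m/s² shaking

u = v = up = u_peak = 0.0              # displacement, velocity, plastic offset
for i in range(n):
    f = k * (u - up)                   # trial elastic restoring force
    if abs(f) > fy:                    # yielding: cap the force and shift
        f = np.sign(f) * fy            # the plastic offset (permanent set)
        up = u - f / k
    a = (-m * ag[i] - c * v - f) / m   # equation of motion
    v += a * dt                        # symplectic Euler time stepping
    u += v * dt
    u_peak = max(u_peak, abs(u))

print(f"peak drift: {u_peak*1e3:.1f} mm, residual set: {up*1e3:.1f} mm")
```

A linear model of the same system would always report zero residual set; the nonzero permanent offset is exactly what nonlinear analysis adds.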
-
Ultrasonic flowmeters are used to determine the velocity of a fluid flowing through a pipe. The principle is to send an ultrasonic signal across the flow at a skew angle. With no flow, the transit time between transmitter and receiver is the same for signals sent upstream and downstream. With flow, the downstream-traveling wave moves faster than the one traveling upstream. In many cases, piezoelectric transducers are used to send and receive the ultrasonic wave.

This #COMSOL example simulates an ultrasonic flowmeter with piezoelectric transducers in the presence of a background flow. The #simulation approach is based on the discontinuous Galerkin (dG) method, which is well suited to acoustically large transient problems. The #multiphysics problem involves acoustic-structure interaction in moving fluids and piezoelectric effects. The acoustic-structure interaction is modeled with the Elastic Waves, Time Explicit and Convected Wave Equation, Time Explicit physics interfaces. The #piezoelectric effect is modeled with the Piezoelectric Effect, Time Explicit interface. The model takes advantage of a geometry assembly and a nonconforming mesh.

The flowmeter consists of a main pipe and a signal pipe of smaller diameter. The signal pipe is tilted relative to the main pipe at the angle α = 45°. The pipe walls are considered rigid. Two transducers are placed at either end of the signal pipe and operate as a transmitter and a receiver. Both transducers are identical and consist of a piezoelectric unit, a matching layer, and a damping block.

An input voltage signal applied to the transmitter causes mechanical deformation of the piezoelectric transducer through the inverse piezoelectric effect. The mechanical deformation generates an acoustic wave in the fluid. When the acoustic wave reaches the receiver, the reverse process takes place: the mechanical load is converted into an electric signal through the direct piezoelectric effect.

The signal emitted by the transmitter propagates into the fluid at t = 4 μs (upper-left corner), reaches the upper wall of the main pipe at t = 6 μs (upper-right corner), propagates further along the signal pipe toward the receiver at t = 8 μs (lower-left corner), and reaches the end of the signal pipe, generating an elastic wave in the receiver, at t = 10 μs (lower-right corner). The elastic wave in the receiver's piezoelectric element is converted into an electric signal.

#flow #flowmeter #sensors #physics #oilgas
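As a complement to the full simulation, the measurement principle itself reduces to a closed-form relation: solving the upstream and downstream transit-time equations for the flow velocity eliminates the sound speed entirely. A small Python sketch, with geometry and numbers that are illustrative rather than taken from the COMSOL model:

```python
import math

def flow_velocity(t_up, t_down, path_length, angle_deg):
    """Axial flow velocity from upstream/downstream transit times.

    With t_down = L/(c + v*cos(a)) and t_up = L/(c - v*cos(a)),
    solving the pair for v cancels the temperature-dependent sound
    speed c:  v = L*(t_up - t_down) / (2*cos(a)*t_up*t_down).
    """
    a = math.radians(angle_deg)
    return path_length * (t_up - t_down) / (2.0 * math.cos(a) * t_up * t_down)

# Round-trip check with water-like values: c ≈ 1480 m/s, L = 0.1 m, α = 45°.
L, alpha, c, v_true = 0.1, 45.0, 1480.0, 2.0
ca = math.cos(math.radians(alpha))
t_down = L / (c + v_true * ca)   # downstream wave arrives sooner
t_up = L / (c - v_true * ca)     # upstream wave arrives later
print(f"recovered v = {flow_velocity(t_up, t_down, L, alpha):.3f} m/s")
```

The cancellation of c is what makes transit-time metering robust: fluid temperature shifts the sound speed but not the recovered velocity. The simulation's job is everything the formula hides, such as transducer dynamics, wall interactions, and the distortion of the wavefront by the flow profile.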
-
⚡ "What's my biggest challenge? I'd say finding the right balance of people, hardware, and software for our analysis work." - Manager of a simulation team at a $1B+ engine manufacturer When it comes to the engineering simulation space, things change fast. Analysts, more so than most other roles in product development and engineering, really take advantage of cutting-edge technology changes. Being technically adept, they can find applications of new tech on their own, independent of any kind of IT department. So, we like to keep up to date with this role through interviews with our Industry Expert Program (IEP), our panel of folks with decades of experience in the industry. So, here I was a week ago on Friday, April 12th, talking with this simulation team manager. And his answer caught me off guard. I've been involved with the analysis space since around 2000, but I'd never heard a challenge phrased in quite this way. The following is a paraphrased summary of the conversation that followed that response. 👩💼 PEOPLE—"This is the most expensive and finite component," stated the simulation manager. Some analysts specialize in one domain. Matching that specialty to the job's needs can be difficult. It doesn't always line up. You need a sufficient foundation for the type of work you are going to do. Management and engineers expect the results to be accurate. 💻 HARDWARE—This is the easiest component to manage. The good and the bad bit is that it changes very quickly. It's really dynamic. The cost of compute hardware (HPC) is almost nothing, so it is very affordable. And it is a great way to accelerate run completion. Figuring this out is the least of the simulation manager's worries. But he still keeps up to date on HPC advancements, as much as or more so than software. Hardware is becoming infinite. 📂 SOFTWARE—This is the most challenging component. Actually, it is the licensing of software that is the most difficult. Depending on the job, a different pre-processor, solver, or post-processor is needed. Getting licenses and paying for them out of a limited budget is the hardest part. Balancing all that out is very difficult. 🌏 Now, this person also shared that this equation is different depending on which region of the world you are talking about. In the United States, people are the most expensive. Software is next. The cost of compute power is almost nothing. In India, the cost of people and software is about the same. I know we have some tenured folks from the simulation space out here. How does this line up with your experience? Are there other factors that rank high regarding biggest challenges for analysis? As always, I'm interested in your thoughts. Follow Lifecycle Insights for #research and guidance on #digitaltransformation for #engineering and #manufacturing. We'll be sharing more insights like this one from our interviews, studies, and publications.
-
Analysis of Production Lines by Simple, Fast and Scientific Simulation

Apparently, there is no powerful tool in Lean, TOC, Six Sigma, ERP/MRP, Industry 4.0, generative AI, etc. for simple, easy, quick, and sensible analysis of the dynamic nature of production lines, which are influenced by numerous factors: average cycle time, variation in cycle time, number of resources available, resource calendars, resource speeds, failures and repairs of resources, rework/rejections, and so on. Factory Physics / Operations Science is helpful to some extent in this regard, but it is not adequately flexible.

In my opinion, discrete event simulation (DES) is a powerful, unique method for thorough analysis of the dynamic nature of production lines and the effects of those factors. DES is, however, largely ignored in production systems, even by engineers and managers who took a course on DES in college. DES is usually done in industry by simulation experts using sophisticated simulation packages, and it is still considered fancy or alien by many factory people and consultants.

I would emphatically say that DES of production lines can be run easily, quickly, effortlessly, and sensibly using simple, scientific software tools like FlowshopSim, which are created exclusively for simulating production lines at high speed. This kind of DES does not require formal simulation knowledge at all. However, I would not recommend watching time-consuming animation in simulation; for analysis purposes, I would look into the output summary and the trace of the simulation, available in graphical and tabular forms. (For a sense of how compact a bare-bones production-line DES can be, see the sketch after this post.)

If any engineer/manager or Lean consultant wants to witness such production-line simulation, I would be happy to run FlowshopSim over the web for any specified scenarios of a production line. DES in FlowshopSim does not take more data, time, or effort than VSM. It quickly provides a great deal of knowledge about the production line being simulated and is far more effective than #vsm for finding bottlenecks and improvement opportunities on the line. Moreover, it facilitates fast, extensive, and reliable what-if analysis of the system. What-if analysis of a stochastic production system is absent from all other methodologies for manufacturing systems.

The simple and powerful FlowshopSim leverages the knowledge and experience I gained in simulation and scheduling over more than 40 years (after my PhD) as a researcher, academician, and manufacturing consultant. Two days ago, I demonstrated over the web the simulation of various scenarios of a production line to a senior manufacturing consultant, Jean-Pierre Goulet, P. Eng., M. Sc. A., in detail for more than an hour. I believe he noticed its power, speed, versatility, and simplicity for simulating production lines.

Intelligent analysis of a system can make a continuous improvement drive more efficient. Let us look for improvement in tools and methodologies as well.

#factorysimulation #productionline #lean #flow #continuousimprovement
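To show how little code a bare-bones production-line DES requires, here is a minimal sketch in Python using the open-source SimPy library. It is not FlowshopSim (whose internals are not public); the arrival rates, cycle times, and station counts are illustrative, with station 2 deliberately slower so it becomes the bottleneck:

```python
import random
import simpy  # pip install simpy

SIM_TIME = 8 * 60.0   # one 8-hour shift, in minutes

def job(env, station1, station2, finished):
    with station1.request() as req:                      # queue at station 1
        yield req
        yield env.timeout(random.expovariate(1 / 4.0))   # mean 4 min cycle
    with station2.request() as req:                      # queue at station 2
        yield req
        yield env.timeout(random.expovariate(1 / 5.0))   # mean 5 min cycle
    finished.append(env.now)

def source(env, station1, station2, finished):
    while True:
        yield env.timeout(random.expovariate(1 / 4.5))   # mean 4.5 min arrivals
        env.process(job(env, station1, station2, finished))

random.seed(42)
env = simpy.Environment()
s1 = simpy.Resource(env, capacity=1)
s2 = simpy.Resource(env, capacity=1)
finished = []
env.process(source(env, s1, s2, finished))
env.run(until=SIM_TIME)
print(f"throughput: {len(finished)} jobs in {SIM_TIME:.0f} min")
```

Because station 2's mean cycle time (5 min) exceeds the mean arrival interval (4.5 min), its queue grows over the shift and throughput is capped by the bottleneck rather than by demand, exactly the dynamic behavior that static methods miss and that an output summary or trace exposes immediately.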
-
I’ve seen bottlenecks destroy production lines—here’s how I would eliminate them before they hit the bottom line.

Elevating operational efficiency is more than a goal; it’s a strategic imperative for industry leaders. For executives focused on maximizing profitability, Discrete Event Simulation (DES) is a game-changer. Here’s how DES can transform your production line from a complex operation into a streamlined, profit-generating machine.

𝗧𝘂𝗿𝗻𝗶𝗻𝗴 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗶𝗻𝘁𝗼 𝗔𝗰𝘁𝗶𝗼𝗻𝘀
DES models your production line, accurately representing every process and bottleneck. This isn’t just a digital replica—it’s a decision-making platform. By analyzing scenarios, you can predict outcomes and implement strategies for real-world improvements.

𝗘𝗹𝗶𝗺𝗶𝗻𝗮𝘁𝗶𝗻𝗴 𝗕𝗼𝘁𝘁𝗹𝗲𝗻𝗲𝗰𝗸𝘀
DES pinpoints exactly where your production line slows down. By targeting these areas, you can speed up operations and reduce costs, ensuring resources are fully utilized.

𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗔𝗹𝗹𝗼𝗰𝗮𝘁𝗶𝗼𝗻
In manufacturing, resources are often stretched thin. DES tests different resource allocation strategies without disrupting operations, leading to more efficient use and direct cost savings.

𝗕𝗼𝗼𝘀𝘁𝗶𝗻𝗴 𝗧𝗵𝗿𝗼𝘂𝗴𝗵𝗽𝘂𝘁 𝗪𝗶𝘁𝗵𝗼𝘂𝘁 𝗔𝗱𝗱𝗲𝗱 𝗖𝗼𝘀𝘁𝘀
Imagine increasing output without new equipment or an expanded workforce. DES makes this possible by simulating changes in line configuration or scheduling, ensuring maximum efficiency.

𝗧𝗲𝘀𝘁𝗶𝗻𝗴 “𝗪𝗵𝗮𝘁-𝗜𝗳” 𝗦𝗰𝗲𝗻𝗮𝗿𝗶𝗼𝘀
In a constantly evolving landscape, agility is key. DES offers a risk-free environment to test scenarios like introducing new equipment or altering schedules, helping you make informed strategic decisions.

𝗔𝗰𝗵𝗶𝗲𝘃𝗶𝗻𝗴 𝗢𝗽𝘁𝗶𝗺𝗮𝗹 𝗟𝗶𝗻𝗲 𝗕𝗮𝗹𝗮𝗻𝗰𝗶𝗻𝗴
A balanced production line is essential for maintaining efficiency. DES simulates different workload distributions, ensuring smooth operation and reducing costly disruptions.

𝗗𝗮𝘁𝗮-𝗗𝗿𝗶𝘃𝗲𝗻 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁
DES turns complex data into actionable insights. Regularly updating your simulation model keeps your production line optimized in real time, boosting efficiency and positioning your organization as a leader in manufacturing innovation.

𝗧𝗵𝗲 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗰 𝗔𝗱𝘃𝗮𝗻𝘁𝗮𝗴𝗲 𝗼𝗳 𝗗𝗘𝗦 😊
For operational leaders and C-level executives, DES isn’t just about optimizing operations—it’s about driving tangible results. By leveraging DES, you can turn data into dollars, making smarter decisions that directly impact your bottom line. In a world where efficiency is key, DES offers the strategic advantage needed to lead with confidence and achieve sustained success.

-------------------------------------------------------
Looking to stay ahead in your game?
♻️ Repost and follow Krish Sengottaiyan for valuable insights!
-
A conversation I overheard recently (names and situation altered).

Two engineers:
* Anya: an experienced simulation expert.
* Mark: a more traditional, less simulation-focused engineer.

Scene: a local Starbucks. Anya and Mark are on a coffee break, looking frustrated.

Mark: Another massive powertrain recall, Anya. This is getting ridiculous. Fuel pump issues again. It just screams manufacturing quality problems to me. The assembly line must be dropping the ball.

Anya: While manufacturing execution is crucial, I think we're still underutilizing a critical tool that could catch many of these "manufacturing" issues long before they hit the production line... simulation.

Mark: Simulation? Come on, Anya... that's for the design guys. Great for figuring out if a gear ratio works or if a new piston design can handle the combustion pressures. But preventing a faulty part from leaving the line? That's a hands-on manufacturing problem, not a simulation one.

Anya: That's a common misconception, Mark. The power of simulation extends far beyond initial design. Manufacturing variations are real, and how components interact under operating conditions after assembly with these variations is where issues often hide.

Mark: But we have testing for that! We run the powertrains on dynos and put vehicles through rigorous road tests.

Anya: And those are essential, but simulation makes testing more effective and predictive. We can use simulation-driven testing to explore a much wider range of conditions and variations than physical tests alone. We can simulate the stresses on components with realistic manufacturing tolerances included, finding potential failure points much earlier. It's about understanding how the design behaves with real-world imperfections.

Mark: So you're saying simulation isn't just about the initial design, but about predicting problems caused by how things are actually made?

Anya: Exactly. It's about a digital thread from concept through manufacturing. By integrating simulation deeper into testing and manufacturing, even using digital twins of our production lines fed with real-time data, we can predict potential defects influenced by manufacturing variables before they cause mass recalls. Relying only on late-stage testing is just too late.

Mark: Hmm. I... I hadn't really thought about it that way. It's a much more integrated approach than I imagined.

Anya: It is. And it's key to moving from reactive to predictive quality, saving us significant costs and protecting our reputation.
-
Flashback reflection.... When I was at Cruise, the biggest challenge was the sim-to-real bridge. We could train reinforcement learning policies in simulation, but the hard part was evaluating whether the outputs were close enough and physically plausible to trust in the real world. That’s the core issue with RL for autonomy: simulators are only proxies of reality. Pure sim training rarely transfers to crucial applications—policies often exploit quirks that don’t exist outside the lab. The good news? With domain randomization, sim-to-real transfer, and targeted real-world fine-tuning, policies can generalize enough for constrained tasks. But for safety-critical autonomy, simulation alone is never enough.
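To give domain randomization a concrete flavor: the idea is to resample, at every episode reset, the physics parameters the real world refuses to pin down, so the policy cannot overfit one simulator configuration. Below is a self-contained toy sketch; the environment, parameter ranges, and fixed "policy" are stand-ins for illustration, not anything from Cruise:

```python
import random

class ToyDrivingEnv:
    """1-D longitudinal dynamics with per-episode randomized physics."""
    def __init__(self, friction, mass_scale):
        self.friction = friction        # drag coefficient scale
        self.mass_scale = mass_scale    # vehicle mass multiplier
        self.speed = 0.0

    def reset(self):
        self.speed = 0.0
        return self.speed

    def step(self, throttle):
        # Drive force minus speed-proportional drag, scaled by mass.
        accel = (0.1 * throttle - 0.05 * self.friction * self.speed) / self.mass_scale
        self.speed += accel
        reward = -abs(self.speed - 10.0)  # track a 10 m/s target
        return self.speed, reward

def sample_env():
    """Domain randomization: fresh physics parameters for each episode."""
    return ToyDrivingEnv(friction=random.uniform(0.5, 1.5),
                         mass_scale=random.uniform(0.9, 1.1))

random.seed(0)
for episode in range(3):
    env = sample_env()
    obs = env.reset()
    total = 0.0
    for _ in range(100):
        obs, reward = env.step(throttle=1.0)  # placeholder constant policy
        total += reward
    print(f"episode {episode}: friction={env.friction:.2f}, return={total:.1f}")
```

A policy trained across the randomized family has to succeed on all of them, which tends to transfer better to the one configuration you cannot sample: reality. As the post notes, for safety-critical autonomy this widens the sim-to-real bridge but never replaces real-world validation.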
-
Modeling Extreme Flooding Scenarios Virtually

Engineering vehicles to safely traverse flooded roads requires accounting for immensely complex hydrodynamic forces and multiphase interactions as water rises chaotically into engine bays. Advanced simulation produces high-resolution digital twins of vehicles battling rapid flows, unpredictable spray, and partial submersion across thousands of topology-varying flood scenarios based on real-world datasets.

By exploring aquatic extremes digitally first, automakers reduce their reliance on risky physical prototypes when designing beyond boundaries considered implausible just years ago. Simulation reshapes the frontiers of what is possible.

#simulation #multiphysics #cfd #fea #dynamics #fluiddynamics #automotive #aerodynamics #hydrodynamics #Durability #testing #CAE #PLM #systemsmodeling #electromagnetics #heattransfer #AI #optimization #design #NVH #manufacturing #innovation #engineering
-
Big Models, Big Challenges, and Bigger Lessons

Lately, I’ve been deep in the trenches with a large FEA model in RISA 3D—modeling reinforced concrete walls and slabs for a major structure.

Stats from the model:
· 31,342 nodes and 31,498 plate elements
· 188,052 degrees of freedom
· 145 load combinations
· Plate internal force results: 4,567,210 lines 😅

Given the scale, we needed to process all these results to extract envelope values for design—and our concrete design tools are based in Excel.

Challenge: Excel has a maximum of 1,048,576 rows per sheet.

Solution: I had to think like a data engineer and bring in a big-data mindset. I leveraged Power Query, a genuinely capable data-processing tool within Excel, to:
· Split the massive result set across multiple sheets
· Automate the data processing

Yes, it worked. And yes, the Excel files became painfully slow—with multiple lookups across sheets. It was a grind, but I got the design data I needed 💪

Now, I’m seriously considering more robust solutions—maybe Python or something else for post-processing tasks like this. Any suggestions? (One possible direction is sketched after this post.)

🔍 Here’s where I’d love your input:
How do you handle massive FEA result sets?
Any favorite tools, libraries, or workflows to replace heavy Excel-based post-processing?
Tips to speed up complex lookups across large datasets?

Appreciate any insight from the community. Always learning and open to better ways!

#StructuralEngineering #FEA #Excel #PowerQuery #Python #DataProcessing #RISA3D #ConcreteDesign #EngineeringTools #LearningByDoing #EngineersOfLinkedIn
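On the Python question: a common pattern is to stream the result file through pandas in chunks and reduce each chunk to partial envelopes, so neither Excel's row limit nor RAM becomes a constraint. A sketch under stated assumptions: it presumes a CSV export with one row per plate per load combination, and the file and column names ("plate_forces.csv", "plate_id", and the force columns) are hypothetical placeholders to match to the actual RISA-3D export:

```python
import pandas as pd

# Streaming envelope extraction: reduce ~4.6M result rows to one
# min/max envelope row per plate, without ever holding the full
# file in memory (or in a spreadsheet).
FORCE_COLS = ["Mx", "My", "Mxy", "Fx", "Fy"]   # adjust to the real export

mins, maxs = [], []
for chunk in pd.read_csv("plate_forces.csv", chunksize=500_000):
    g = chunk.groupby("plate_id")[FORCE_COLS]
    mins.append(g.min())                        # partial envelope per chunk
    maxs.append(g.max())

# Reduce the per-chunk partial envelopes to one global envelope per plate.
env_min = pd.concat(mins).groupby(level=0).min().add_suffix("_min")
env_max = pd.concat(maxs).groupby(level=0).max().add_suffix("_max")
envelope = env_min.join(env_max)

envelope.to_csv("plate_envelopes.csv")          # one row per plate, design-ready
```

The resulting file has one row per plate element, small enough to feed straight into the existing Excel design sheets, and indexed lookups on the envelope DataFrame replace the slow cross-sheet lookups entirely.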