Computational Bio-FSI Laboratory, Department of Mechanical Engineering, School of Engineering & Computing Sciences, New York Institute of Technology, United States
Corresponding author details:
Milan Toma
Computational Bio-FSI Laboratory, Department of Mechanical Engineering
School of Engineering & Computing Sciences, New York Institute of Technology
United States
Copyright: © 2018 Toma M. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Validation is the assessment of the accuracy of computational simulations by comparison with experimental data. A well-validated computational fluid dynamics model can be of great value when assessing the safety of medical devices. However, its validation and verification must be conducted before the results can be considered credible. The U.S. Food and Drug Administration has completed a computational inter-laboratory study that revealed the relatively poor current state of the numerical methods used for simulating fluid flow in an idealized medical device, even among self-ascribed experts. Yet the same numerical methods are commonly used to simulate fluid flow in much more complex geometries, especially when patient-specific geometries are required. The study presented here re-created these results with a larger number of participants and confirmed the need for proper validation of the numerical methods used. Moreover, the results were analyzed with respect to whether the participants performed a grid refinement study.
Computational Fluid Dynamics; Medical Device; Fluid Flow; Validation; Simulation
To assess the current state of methods used for simulating fluid flow in an idealized medical device, the U.S. Food and Drug Administration (FDA) has completed a Computational Fluid Dynamics (CFD) inter-laboratory study [1,2]. The FDA's study used a generic medical device consisting of a 0.012 m diameter cylindrical nozzle with a sudden contraction and a 20° conical diffuser on either side of a 0.04 m long, 0.004 m diameter throat (Figure 1). Planar Particle Image Velocimetry (PIV) measurements performed at three laboratories were used to validate the 28 computational results submitted from around the world. In the FDA study, the model dimensions, volumetric flow rates, and fluid properties were specified, while the flow solver, mesh density, element shape, inlet/outlet length, boundary condition details, and laminar or turbulence models were left up to the participants. Participants were asked to perform a grid refinement study to confirm the convergence of their results. Consequently, the CFD results were compared to the PIV data obtained in three laboratories. To show the results of the above-mentioned study, two of the graphs were re-created (traced) based on the data from [3] (Figure 2). In the FDA study, the predicted centerline axial velocities in the entry region and conical contraction were in good agreement with the experimental results, but considerable scatter was observed in the throat region and downstream of the sudden expansion. Interestingly, the self-ascribed level of expertise of the project participants did not correlate with the success of the validation, i.e. with the comparison of the axial centerline velocity predicted by CFD to that measured by PIV.
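For orientation, the throat Reynolds number that defines the two flow conditions can be related to the prescribed volumetric flow rate. The short sketch below illustrates this relation; the blood-analog fluid properties used (density 1056 kg/m³, dynamic viscosity 3.5 mPa·s) are assumptions for illustration and should be replaced with the values specified in the FDA benchmark documentation.

```python
import math

# Assumed blood-analog properties (illustrative values; verify against the
# FDA benchmark specification before using them in an actual simulation).
RHO = 1056.0      # fluid density [kg/m^3]
MU = 3.5e-3       # dynamic viscosity [Pa.s]
D_THROAT = 0.004  # throat diameter [m]

def flow_rate_for_reynolds(re_throat: float) -> float:
    """Volumetric flow rate [m^3/s] giving the requested throat Reynolds number.

    Re = rho * V * d / mu with V = 4*Q / (pi * d^2), hence
    Q = Re * pi * d * mu / (4 * rho).
    """
    return re_throat * math.pi * D_THROAT * MU / (4.0 * RHO)

for re in (500, 3500):
    q = flow_rate_for_reynolds(re)
    v = 4.0 * q / (math.pi * D_THROAT**2)  # mean throat velocity [m/s]
    print(f"Re = {re}: Q = {q:.3e} m^3/s, mean throat velocity = {v:.3f} m/s")
```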
Some self-ascribed CFD "experts" produced results with large disagreement when compared to the experimental data, while some self-ascribed "beginners" produced results with good agreement when compared to the PIV measurements (Figure 2). In fact, for the two Reynolds numbers, 500 (Figure 2a) and 3500 (Figure 2b), more self-rated "beginners" produced well-validated results than "experts". Hence, in the current study an even higher number of participants was used to re-create the FDA study. Here, most of the participants self-identified as "beginners", with a few who rated themselves as "intermediate" and even fewer as "experts". In accordance with the ISO recommendations [4], the participants were asked to perform a mesh sensitivity analysis (i.e. to repeat their calculations with finer or coarser grids) to confirm that their results had converged. However, it was subsequently identified that many neglected to do so. Therefore, unlike in the FDA study, the results of the study presented here were further analyzed separately based on whether a mesh sensitivity study was conducted or not.
In the current study, 33 additional users were given the dimensions of the above-described geometry. They were asked to choose any software package to create the geometry and run the simulations for the two Reynolds numbers. No instruction regarding the software was given. The two most commonly used software packages were ANSYS® Fluent (Canonsburg, PA) and Autodesk® CFD (San Rafael, CA). Information on the particular simulation methods used was not collected during the study. A relatively reliable simulation method is one that matches the available experimental data.
Just like in the FDA study, the participants were asked to perform a grid refinement study to confirm the convergence of their results. Similarly, the flow solver, mesh density, element shape, inlet/outlet length, boundary condition details, and laminar or turbulence models were left up to the participants. Again, only the model dimensions, volumetric flow rates, and fluid properties were given. The participants' prior experience with creating geometries and running simulations differed. Their level of expertise was self-ascribed, i.e. they were asked to self-rate their experience.
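A grid refinement study of the kind requested from the participants can be summarized with the Grid Convergence Index (GCI) of Roache. The sketch below is a minimal, illustrative implementation assuming three systematically refined meshes and a single representative scalar (e.g. the peak centerline axial velocity); it is not the procedure prescribed to the participants, who were free to choose their own approach.

```python
import math

def grid_convergence_index(f_fine, f_med, f_coarse, r=2.0, safety_factor=1.25):
    """Estimate the observed order of accuracy and the GCI on the fine grid.

    f_fine, f_med, f_coarse -- a representative scalar (e.g. peak centerline
                               velocity) computed on fine, medium, and coarse
                               meshes obtained with a constant refinement ratio r.
    """
    # Observed order of accuracy from Richardson extrapolation.
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # Relative difference between the two finest solutions.
    eps = abs((f_fine - f_med) / f_fine)
    # Grid Convergence Index on the fine mesh; a safety factor of 1.25 is
    # commonly used for three-grid studies.
    gci_fine = safety_factor * eps / (r**p - 1.0)
    return p, gci_fine

# Hypothetical peak centerline velocities [m/s] from three meshes.
p, gci = grid_convergence_index(f_fine=4.12, f_med=4.18, f_coarse=4.35)
print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci:.2f} %")
```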
Figure 2: In the FDA study [1], the participants were requested to provide simulation data along the model centerline, among others. The results of the study for the two Reynolds numbers, (a) 500 and (b) 3500, are shown. The graphs were re-created based on the data from [3].
Although all 33 participants were instructed to perform a mesh sensitivity analysis to confirm that their results had converged, it was identified after they delivered their results, by questioning them and requiring to see proof, that only 13 of them had actually conducted the study. Out of these 13 participants, 6 ascribed themselves as "beginners", as shown in Figure 3.
It has been generally accepted that simulation models can be used to imitate and approximate real-world systems. However, to produce accurate and credible simulation models, their verification and validation must be conducted.
Just as in the FDA study, the centerline axial velocities in the entry region and conical contraction in the study presented here were in relatively good agreement with the experimental results compared to the throat region and downstream of the sudden expansion. The participants of the current study were mostly beginners. Hence, the scatter observed in the throat region and downstream of the sudden expansion appears to be larger here than in the FDA study. Without a thorough analysis that keeps track of all the steps taken by the participants, it is impossible to determine why many of the returned results did not match the experimental data. The purpose of these studies is to show the reliability of CFD results produced by participants without much supervision, just as is usually the case in real-world situations.
Unlike in the FDA study, here it was confirmed whether the participants performed the mesh sensitivity analysis as instructed. The results of those who did conduct the mesh sensitivity analysis (Figure 4) are therefore shown separately from the results of those who did not (Figure 3). Similar scatter can be observed in both groups regardless of their choice to perform the mesh sensitivity analysis. However, only in the first group, where the mesh sensitivity analysis was performed, did two of the presented computational results match the experimental PIV measurements. Interestingly, both of these were produced by participants who rated themselves as "beginners". None of the participants self-ascribed as "intermediate" or "expert" matched their results with the experiments.
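One way to make the statement that a computational result "matches" the PIV measurements more quantitative is to interpolate the simulated centerline axial velocity onto the measurement locations and report a normalized error. The sketch below is illustrative only; the arrays are hypothetical placeholders, and this metric is not the acceptance criterion used in this study or in the FDA study.

```python
import numpy as np

def centerline_rms_error(z_cfd, u_cfd, z_piv, u_piv):
    """Normalized RMS deviation between CFD and PIV centerline axial velocity.

    z_cfd, u_cfd -- axial positions [m] and axial velocities [m/s] from CFD
    z_piv, u_piv -- axial positions [m] and axial velocities [m/s] from PIV
    """
    # Interpolate the CFD prediction onto the PIV measurement locations.
    u_interp = np.interp(z_piv, z_cfd, u_cfd)
    # Normalize by the peak measured velocity so the metric is dimensionless.
    return np.sqrt(np.mean((u_interp - u_piv) ** 2)) / np.max(np.abs(u_piv))

# Hypothetical data for illustration only.
z_cfd = np.linspace(-0.06, 0.06, 200)
u_cfd = np.exp(-(z_cfd / 0.02) ** 2)          # placeholder CFD profile
z_piv = np.linspace(-0.05, 0.05, 40)
u_piv = np.exp(-(z_piv / 0.021) ** 2) + 0.01  # placeholder PIV profile

err = centerline_rms_error(z_cfd, u_cfd, z_piv, u_piv)
print(f"normalized RMS deviation: {err:.3f}")
```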
As a first step in the validation process, it is recommended to
use the FDA’s idealized medical device to validate the CFD model
before using it to obtain and analyze results with more complex,
e.g. patient-specific, geometries. Furthermore, all details regarding
how the assumptions, simplifications, sensitivity and uncertainty
analyses, might affect the output of the computational model, and
subsequently the interpretation of the results, must be provided
[5]. There is a need for higher standards on the control of numerical
accuracy in CFD as stated in the editorial policy statement on the
control of numerical accuracy from 1986 [6]. Even over 30 years
later, it needs to be reiterated that straightforward repeat calculations
with finer or coarser grids (and other methods) are necessary for CFD
accuracy estimation.
Figure 3: Results of the current study for two Reynolds numbers,
(a) 500 and (b) 3500. All participants were asked to perform mesh
sensitivity analysis to confirm the convergence of their results, but
not all of them did. In these graphs, only the results of those who
did not perform the mesh sensitivity analysis are shown.
Funding: This study was not funded by any grant. No benefits in any form have been or will be received from a commercial party related directly or indirectly to the subject of this manuscript.
Conflict of Interest: The author declares that he has no conflict of interest.
Ethical approval: This article does not contain any studies with
human participants or animals performed by the author.
Figure 4: Results of the current study for two Reynolds numbers,
(a) 500 and (b) 3500. All participants were asked to perform mesh
sensitivity analysis to confirm the convergence of their results, but
not all of them did. In these graphs, only the results of those who
performed the mesh sensitivity analysis are shown.