Sample records for computing sensitivity coefficients

  1. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1992-01-01

    Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three-dimensional wings in transonic and subsonic flow. In addition, for comparison purposes, the method computes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.
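
    To make the quasi-analytical idea concrete: it amounts to implicit differentiation of the solved state (residual) equation. A minimal sketch in Python, where a scalar algebraic residual stands in for the discretized flow equations and all names and values are illustrative rather than taken from the report:

        def solve_state(a, q0=1.0):
            """Newton solve of the toy 'flow residual' R(q, a) = q**3 + q - a = 0."""
            q = q0
            for _ in range(50):
                q -= (q**3 + q - a) / (3 * q**2 + 1)
            return q

        def response(q):
            return q**2                     # stand-in for an aerodynamic coefficient

        a = 2.0                             # toy design variable
        q = solve_state(a)

        # Quasi-analytical: differentiate R(q(a), a) = 0 implicitly, so that
        # dq/da = -(dR/da)/(dR/dq) = 1/(3*q**2 + 1), then apply the chain rule.
        sens_qa = 2 * q * (1.0 / (3 * q**2 + 1))

        # Finite difference: perturb the design variable and re-solve the state.
        h = 1e-6
        sens_fd = (response(solve_state(a + h)) - response(solve_state(a - h))) / (2 * h)

        print(sens_qa, sens_fd)             # the two estimates agree to ~1e-9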

  2. Determination of aerodynamic sensitivity coefficients for wings in transonic flow

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.; El-Banna, Hesham M.

    1992-01-01

    The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.

  3. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.

  4. Determination of aerodynamic sensitivity coefficients based on the three-dimensional full potential equation

    NASA Technical Reports Server (NTRS)

    Elbanna, Hesham M.; Carlson, Leland A.

    1992-01-01

    The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.

  5. On Learning Cluster Coefficient of Private Networks

    PubMed Central

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2013-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide-and-conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach using the clustering coefficient, a popular statistic in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide-and-conquer approach outperforms the direct approach. PMID:24429843
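
    A minimal sketch of the mechanism described above (Python; the statistic is a toy global clustering coefficient, and the unit-computation sensitivities and budget split are hypothetical placeholders, not values from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        def laplace_perturb(value, sensitivity, epsilon):
            """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
            return value + rng.laplace(scale=sensitivity / epsilon)

        # Divide and conquer: decompose the statistic into unit computations
        # (here the numerator and denominator of a ratio), perturb each under its
        # own share of the privacy budget, then recombine the noisy pieces.
        triangles, triples = 120.0, 900.0      # toy graph counts
        eps1, eps2 = 0.5, 0.5                  # distributed privacy thresholds
        noisy_tri = laplace_perturb(triangles, sensitivity=3.0, epsilon=eps1)  # hypothetical sensitivity
        noisy_trp = laplace_perturb(triples, sensitivity=5.0, epsilon=eps2)    # hypothetical sensitivity

        print(3 * noisy_tri / noisy_trp)       # perturbed clustering coefficient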

  6. Determination of aerodynamic sensitivity coefficients in the transonic and supersonic regimes

    NASA Technical Reports Server (NTRS)

    Elbanna, Hesham M.; Carlson, Leland A.

    1989-01-01

    The quasi-analytical approach is developed to compute airfoil aerodynamic sensitivity coefficients in the transonic and supersonic flight regimes. Initial investigation verifies the feasibility of this approach as applied to the transonic small perturbation residual expression. Results are compared to those obtained by the direct (finite difference) approach and both methods are evaluated to determine their computational accuracies and efficiencies. The quasi-analytical approach is shown to be superior and worth further investigation.

  7. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.
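
    The advantage of AD over divided differences can be seen in a minimal forward-mode sketch (Python dual numbers; the model function is an arbitrary stand-in, not the Navier-Stokes solver): AD propagates exact derivatives through every operation, while DD carries truncation and cancellation error:

        import math

        class Dual:
            """Minimal forward-mode AD: a value paired with its derivative."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.dot + other.dot)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val * other.val,
                            self.dot * other.val + self.val * other.dot)
            __rmul__ = __mul__

        def sin(x):
            """sin extended to dual numbers (chain rule built in)."""
            return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

        def model(c):
            return c * c + sin(c)   # stand-in for a force coefficient vs. a parameter

        c0 = 0.7
        ad = model(Dual(c0, 1.0)).dot                                 # exact: 2*c0 + cos(c0)
        h = 1e-6
        dd = (model(Dual(c0 + h)).val - model(Dual(c0 - h)).val) / (2 * h)
        print(ad, dd)               # AD is exact to machine precision; DD is not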

  8. Transonic Blunt Body Aerodynamic Coefficients Computation

    NASA Astrophysics Data System (ADS)

    Sancho, Jorge; Vargas, M.; Gonzalez, Ezequiel; Rodriguez, Manuel

    2011-05-01

    In the framework of EXPERT (European Experimental Re-entry Test-bed), accurate transonic aerodynamic coefficients are of paramount importance for correct trajectory assessment and parachute deployment. A combined CFD (Computational Fluid Dynamics) modelling and experimental campaign strategy was selected to obtain accurate coefficients. A preliminary set of coefficients was obtained by inviscid Euler CFD computation. An experimental campaign was then performed at the DNW facilities at NLR. A thorough review of the CFD modelling, informed by the wind tunnel test (WTT) results, was carried out with the aim of obtaining reliable values of the coefficients in the future (especially the pitching moment). The study includes different turbulence models and a mesh sensitivity analysis. Comparison with the WTT results is explored, and lessons learnt are collected.

  9. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and of its implementation, explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods, and compares several sensitivity methods in terms of computational efficiency and memory requirements.
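
    The opening definition can be illustrated with a toy one-group problem (Python; the cross sections are hypothetical and an infinite-medium k-infinity formula stands in for a real transport calculation):

        # Hypothetical one-group data: nu, fission and capture cross sections.
        nu, sig_f, sig_c = 2.4, 0.05, 0.07
        k = lambda sf: nu * sf / (sf + sig_c)     # k-infinity = nu*Sigma_f / Sigma_a

        # Sensitivity coefficient: fractional change in k per fractional change in
        # Sigma_f, S = (Sigma_f / k) * dk/dSigma_f, here via a central difference.
        h = 1e-8
        dk = (k(sig_f + h) - k(sig_f - h)) / (2 * h)
        S_num = sig_f / k(sig_f) * dk

        S_exact = sig_c / (sig_f + sig_c)         # analytic result for this toy model
        print(S_num, S_exact)                     # both ~0.5833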

  10. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
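
    A minimal sketch of reanalysis without refactoring (Python with NumPy/SciPy; the random matrices stand in for BEA matrices): the factorization of the unperturbed matrix is reused inside a stationary iteration that converges when the perturbation is small:

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(1)
        n = 50
        A = 10 * np.eye(n) + 0.5 * rng.normal(size=(n, n))   # baseline system matrix
        b = rng.normal(size=n)
        dA = 0.01 * rng.normal(size=(n, n))                  # small design perturbation

        lu = lu_factor(A)        # factor the UNPERTURBED matrix once and reuse it

        # Iterative reanalysis: solve (A + dA) x = b without factoring A + dA,
        # via the stationary iteration x <- A^{-1} (b - dA x).
        x = lu_solve(lu, b)
        for _ in range(20):
            x = lu_solve(lu, b - dA @ x)

        print(np.linalg.norm((A + dA) @ x - b))   # residual at machine-precision level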

  11. Detecting prostate cancer and prostatic calcifications using advanced magnetic resonance imaging

    PubMed Central

    Dou, Shewei; Bai, Yan; Shandil, Ankit; Ding, Degang; Shi, Dapeng; Haacke, E Mark; Wang, Meiyun

    2017-01-01

    Prostate cancer and prostatic calcifications have a high incidence in elderly men. We aimed to investigate the diagnostic capabilities of susceptibility-weighted imaging in detecting prostate cancer and prostatic calcifications. A total of 156 men, 34 with prostate cancer and 122 with benign prostates, were enrolled in this study. Computed tomography, conventional magnetic resonance imaging, diffusion-weighted imaging, and susceptibility-weighted imaging were performed on all the patients. One hundred and twelve prostatic calcifications were detected in 87 patients. The sensitivities and specificities of the conventional magnetic resonance imaging, apparent diffusion coefficient, and susceptibility-filtered phase images in detecting prostate cancer and prostatic calcifications were calculated. McNemar's Chi-square test was used to compare the differences in sensitivities and specificities between the techniques. The results showed that the sensitivity and specificity of susceptibility-filtered phase images in detecting prostate cancer were greater than those of conventional magnetic resonance imaging and the apparent diffusion coefficient (P < 0.05). In addition, the sensitivity and specificity of susceptibility-filtered phase images in detecting prostatic calcifications were comparable to those of computed tomography and greater than those of conventional magnetic resonance imaging and the apparent diffusion coefficient (P < 0.05). Given the high incidence of susceptibility-weighted imaging (SWI) abnormality in prostate cancer, we conclude that susceptibility-weighted imaging is more sensitive and specific than conventional magnetic resonance imaging, diffusion-weighted imaging, and computed tomography in detecting prostate cancer. Furthermore, susceptibility-weighted imaging can identify prostatic calcifications about as well as computed tomography and much better than conventional magnetic resonance imaging and diffusion-weighted imaging. PMID:27004542

  12. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. Using LU factorization to calculate the Lagrange multipliers improves both convergence behaviour and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
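
    The augmentation idea, recasting the time average as an extra state whose tangent equation is integrated along with the system, can be sketched on a simple non-chaotic ODE (Python; the system and all numbers are illustrative only):

        # Toy system dy/dt = -p*y + 1 with time-averaged response J = (1/T) * int y^2 dt.
        # Augment the state with the running integral I and its tangent Is; the
        # sensitivity s = dy/dp obeys the tangent equation ds/dt = -p*s - y.
        p, T, dt = 2.0, 50.0, 1.0e-3
        y, s, I, Is = 0.0, 0.0, 0.0, 0.0
        for _ in range(int(T / dt)):                 # forward Euler for brevity
            I, Is = I + y * y * dt, Is + 2.0 * y * s * dt
            y, s = y + (-p * y + 1.0) * dt, s + (-p * s - y) * dt

        print(Is / T)   # direct-differentiation estimate of dJ/dp, ~ -0.244 here
        # Long-time limit: y -> 1/p, J -> 1/p**2, so dJ/dp -> -2/p**3 = -0.25.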

  13. Sensitivity of the Lidar ratio to changes in size distribution and index of refraction

    NASA Technical Reports Server (NTRS)

    Evans, B. T. N.

    1986-01-01

    In order to invert lidar signals to obtain reliable extinction coefficients, sigma, a relationship between sigma and the backscatter coefficient, beta, must be given. These two coefficients are linearly related if the complex index of refraction, m, the particle shape, and the size distribution, N, do not change along the path illuminated by the laser beam. This, however, is generally not the case. An extensive Mie computation of the lidar ratio R = beta/sigma was performed, and the sensitivity of R to changes in a parametric space defined by N and m was examined.

  14. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  15. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include a static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
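
    For the simplest possible case, such sensitivity coefficients satisfy an ODE integrated alongside the kinetics. A sketch (Python; one first-order reaction with forward Euler, rather than the implicit stiff solver a code like LSENS actually uses):

        import math

        # For A -> B with dy/dt = -k*y, the sensitivity s = dy/dk satisfies the
        # companion equation ds/dt = -y - k*s with s(0) = 0.
        k, y0, dt, T = 1.5, 1.0, 1.0e-4, 2.0      # illustrative rate and conditions
        y, s = y0, 0.0
        for _ in range(int(T / dt)):
            y, s = y + (-k * y) * dt, s + (-y - k * s) * dt

        print(s, -T * y0 * math.exp(-k * T))      # numerical vs. exact, both ~ -0.0996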

  16. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    The initial effort was concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only the two-dimensional flow was considered and the problem was modeled using the transonic small perturbation equation.

  17. Effect of mesh distortion on the accuracy of transverse shear stresses and their sensitivity coefficients in multilayered composites

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Yong H.

    1995-01-01

    A study is made of the effect of mesh distortion on the accuracy of transverse shear stresses and their first-order and second-order sensitivity coefficients in multilayered composite panels subjected to mechanical and thermal loads. The panels are discretized by using a two-field degenerate solid element, with the fundamental unknowns consisting of both displacement and strain components, and the displacement components having a linear variation throughout the thickness of the laminate. A two-step computational procedure is used for evaluating the transverse shear stresses. In the first step, the in-plane stresses in the different layers are calculated at the numerical quadrature points for each element. In the second step, the transverse shear stresses are evaluated by using piecewise integration, in the thickness direction, of the three-dimensional equilibrium equations. The same procedure is used for evaluating the sensitivity coefficients of transverse shear stresses. Numerical results are presented showing no noticeable degradation in the accuracy of the in-plane stresses and their sensitivity coefficients with mesh distortion. However, such degradation is observed for the transverse shear stresses and their sensitivity coefficients. The standard of comparison is taken to be the exact solution of the three-dimensional thermoelasticity equations of the panel.

  18. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  1. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  2. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  3. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
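
    A reduced sketch of the regression step (Python with NumPy; a two-variable polynomial stands in for the expensive deterministic model, and the basis is fixed rather than adaptively built): expansion coefficients are obtained by least squares on random samples, and variance-based indices follow directly from the coefficients:

        import numpy as np

        rng = np.random.default_rng(2)
        N = 2000
        x1, x2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
        y = x1 + 0.5 * x2**2 + 0.3 * x1 * x2          # stand-in deterministic model

        # Legendre basis (orthogonal for uniform inputs); coefficients by least
        # squares on samples rather than by quadrature, as in regression-based PDD.
        P2 = lambda x: 0.5 * (3 * x**2 - 1)
        Psi = np.column_stack([np.ones(N), x1, x2, P2(x1), P2(x2), x1 * x2])
        c, *_ = np.linalg.lstsq(Psi, y, rcond=None)

        norms = np.array([1, 1/3, 1/3, 1/5, 1/5, 1/9])   # E[psi_i^2] per basis term
        var_terms = c[1:]**2 * norms[1:]                 # ANOVA variance decomposition
        total = var_terms.sum()
        S1 = (var_terms[0] + var_terms[2]) / total       # first-order index of x1, ~0.91
        print(c.round(3), S1)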

  4. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
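
    The essence of importance sampling for reliability can be sketched as follows (Python; a fixed sampling density centred at the most probable failure point, rather than the paper's adaptive scheme, and an illustrative linear limit state):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        beta = 3.0
        g = lambda u: beta - (u[:, 0] + u[:, 1]) / np.sqrt(2)   # failure where g < 0

        # Sample around the most probable failure point instead of the origin,
        # so failures are no longer rare events in the sample; reweight by the
        # ratio of the true density to the shifted sampling density.
        shift = np.array([beta, beta]) / np.sqrt(2)
        u = rng.normal(size=(20000, 2)) + shift
        w = np.exp(-0.5 * (u**2).sum(1)) / np.exp(-0.5 * ((u - shift)**2).sum(1))
        pf = np.mean(w * (g(u) < 0))

        print(pf, norm.cdf(-beta))    # IS estimate vs. exact P_f = Phi(-3) ~ 1.35e-3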

  5. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
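
    A minimal sketch of the metamodeling step (Python with NumPy; the three inputs and the net-benefit model are synthetic stand-ins for PSA output): regressing outcomes on standardized inputs recovers the base-case outcome as the intercept and relative parameter importance as the coefficients:

        import numpy as np

        rng = np.random.default_rng(4)
        N = 10000
        X = rng.normal(size=(N, 3))   # standardized PSA draws, three toy parameters
        # Stand-in for the decision model's net-benefit output on each PSA cohort:
        nb = 5.0 + 1.2 * X[:, 0] - 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, N)

        # Linear regression metamodel: intercept ~ base-case outcome; the other
        # coefficients rank the parameters' influence on the outcome.
        A = np.column_stack([np.ones(N), X])
        beta, *_ = np.linalg.lstsq(A, nb, rcond=None)
        print(beta.round(2))          # ~ [5.0, 1.2, -0.4, 0.1]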

  6. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.

  7. Environmental and Hydroclimatic Sensitivities of Greenhouse Gas (GHG) Fluxes from Coastal Wetlands

    NASA Astrophysics Data System (ADS)

    Abdul-Aziz, O. I.; Ishtiaq, K. S.

    2016-12-01

    We computed the reference environmental and hydroclimatic sensitivities of the greenhouse gas (GHG) fluxes (CO2 and CH4) from coastal salt marshes. Non-linear partial least squares regression models of CO2 (net uptake) and CH4 (net emissions) fluxes were developed with a bootstrap resampling approach using the photosynthetically active radiation (PAR), air and soil temperatures, water height, soil moisture, porewater salinity, and pH as predictors. Sensitivity coefficients of the different predictors were then analytically derived from the estimated models. The numerical sensitivities of the dominant drivers were determined by perturbing the variables individually and simultaneously to compute their individual and combined (respectively) effects on the GHG fluxes. Four tidal wetlands of Waquoit Bay, MA, incorporating a gradient in land use, salinity and hydrology, were considered as the case study sites. The wetlands were dominated by native Spartina alterniflora, and characterized by high salinity and frequent flooding. Results indicated a high sensitivity of CO2 fluxes to temperature and PAR, a moderate sensitivity to soil salinity and water height, and a weak sensitivity to pH and soil moisture. In contrast, the CH4 fluxes were more sensitive to temperature and salinity than to PAR, pH, and the hydrologic variables. The estimated sensitivities and mechanistic insights can aid the management of coastal carbon under a changing climate and environment. The sensitivity coefficients also indicated the most dominant drivers of GHG fluxes for the development of a parsimonious predictive model.

  8. Quantifying Uncertainties in the Thermo-Mechanical Properties of Particulate Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Murthy, Pappu L. N.

    1999-01-01

    The present paper reports results from a computational simulation of probabilistic particulate reinforced composite behavior. The approach combines simplified micromechanics of particulate reinforced composites with a Fast Probability Integration (FPI) technique. Sample results are presented for an Al/SiC(sub p) (silicon carbide particles in an aluminum matrix) composite. The probability density functions for the composite moduli, thermal expansion coefficient and thermal conductivities, along with their sensitivity factors, are computed. The effects of different assumed distributions and of reducing the scatter in constituent properties on the thermal expansion coefficient are also evaluated. The variations in the constituent properties that directly affect these composite properties are accounted for by assumed probabilistic distributions. The results show that the present technique provides valuable information about the scatter in composite properties and the sensitivity factors, which are useful to test or design engineers.

  9. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  10. Accelerating activity coefficient calculations using multicore platforms, and profiling the energy use resulting from such calculations.

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Bane, Michael

    2017-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this is often used as justification for neglecting computationally expensive process descriptions. Indeed, it has so far been impossible to quantify the true sensitivity to uncertainties in molecular properties, since even at the level of a single aerosol particle it has not been feasible to embed fully coupled representations of process-level knowledge for all possible compounds, and heavily parameterised descriptions are typically relied upon instead. Relying on emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected under the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.

  11. A comparison of experimental and theoretical results for leakage, pressure distribution, and rotordynamic coefficients for annular gas seals

    NASA Technical Reports Server (NTRS)

    Nicks, C. O.; Childs, D. W.

    1984-01-01

    The importance of seal behavior in rotordynamics is discussed and current annular seal theory is reviewed. Nelson's analytical-computational method for determining rotordynamic coefficients for this type of compressible-flow seal is outlined. Various means for the experimental identification of the dynamic coefficients are given, and the method employed at the Texas A and M University (TAMU) test facility is explained. The TAMU test apparatus is described, and the test procedures are discussed. Experimental results, including leakage, entrance-loss coefficients, pressure distributions, and rotordynamic coefficients for a smooth and a honeycomb constant-clearance seal, are presented and compared to theoretical results from Nelson's analysis. The results for both seals show little sensitivity to the running speed over the test range. Agreement between test results and theory for leakage through the seal is satisfactory. Test results for direct stiffness show a greater sensitivity to fluid pre-rotation than predicted. Results also indicate that the deliberately roughened surface of the honeycomb seal provides improved stability versus the smooth seal.

  12. An investigation of angular stiffness and damping coefficients of an axial spline coupling in high-speed rotating machinery

    NASA Technical Reports Server (NTRS)

    Ku, C.-P. Roger; Walton, James F., Jr.; Lund, Jorgen W.

    1994-01-01

    This paper provides an opportunity to quantify the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. A unique test methodology and data reduction procedures were developed. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotor dynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. In addition, sensitivity analyses were performed to show that the accuracy of the dynamic coefficients does not rely on the accuracy of the data reduction procedures.

  13. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  14. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of the image interpretations. Using data from 100 finger joints, the findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to the single-parameter classifications employed in previous studies. Maximum performance is reached when combining the minimum/maximum ratio of the absorption coefficient and the image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
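
    For reference, the first two performance measures quoted above, and the Youden index built from them, are simple functions of the confusion table; a minimal example (Python; the counts are illustrative, not from the study):

        # Sensitivity, specificity, and Youden index from a 2x2 confusion table.
        tp, fn, tn, fp = 45, 5, 40, 10        # illustrative counts only
        sensitivity = tp / (tp + fn)          # true positive rate
        specificity = tn / (tn + fp)          # true negative rate
        youden = sensitivity + specificity - 1
        print(sensitivity, specificity, youden)   # 0.9, 0.8, 0.7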

  15. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  16. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  17. Flow analysis and design optimization methods for nozzle-afterbody of a hypersonic vehicle

    NASA Technical Reports Server (NTRS)

    Baysal, O.

    1992-01-01

    This report summarizes the methods developed for the aerodynamic analysis and the shape optimization of the nozzle-afterbody section of a hypersonic vehicle. Initially, exhaust gases were assumed to be air. Internal-external flows around a single scramjet module were analyzed by solving the 3D Navier-Stokes equations. Then, exhaust gases were simulated by a cold mixture of Freon and Ar. Two different models were used to compute these multispecies flows as they mixed with the hypersonic airflow. Surface and off-surface properties were successfully compared with the experimental data. The Aerodynamic Design Optimization with Sensitivity analysis was then developed. Pre- and postoptimization sensitivity coefficients were derived and used in this quasi-analytical method. These coefficients were also used to predict inexpensively the flow field around a changed shape when the flow field of an unchanged shape was given. Starting with totally arbitrary initial afterbody shapes, independent computations were converged to the same optimum shape, which rendered the maximum axial thrust.

  18. Flow analysis and design optimization methods for nozzle afterbody of a hypersonic vehicle

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1991-01-01

    This report summarizes the methods developed for the aerodynamic analysis and the shape optimization of the nozzle-afterbody section of a hypersonic vehicle. Initially, exhaust gases were assumed to be air. Internal-external flows around a single scramjet module were analyzed by solving the three dimensional Navier-Stokes equations. Then, exhaust gases were simulated by a cold mixture of Freon and Argon. Two different models were used to compute these multispecies flows as they mixed with the hypersonic airflow. Surface and off-surface properties were successfully compared with the experimental data. In the second phase of this project, the Aerodynamic Design Optimization with Sensitivity analysis (ADOS) was developed. Pre- and post-optimization sensitivity coefficients were derived and used in this quasi-analytical method. These coefficients were also used to predict inexpensively the flow field around a changed shape when the flow field of an unchanged shape was given. Starting with totally arbitrary initial afterbody shapes, independent computations were converged to the same optimum shape, which rendered the maximum axial thrust.

  19. Anomalous phosphine sensitivity coefficients as probes for a possible variation of the proton-to-electron mass ratio

    NASA Astrophysics Data System (ADS)

    Owens, A.; Yurchenko, S. N.; Špirko, V.

    2018-02-01

    A robust variational approach is used to investigate the sensitivity of the rotation-vibration spectrum of phosphine (PH3) to a possible cosmological variation of the proton-to-electron mass ratio, μ. Whilst the majority of computed sensitivity coefficients, T, involving the low-lying vibrational states acquire the expected values of T ≈ -1 and T ≈ -1/2 for rotational and ro-vibrational transitions, respectively, anomalous sensitivities are uncovered for the A1 - A2 splittings in the ν2/ν4, ν1/ν3 and 2ν4(ℓ=0)/2ν4(ℓ=2) manifolds of PH3. A pronounced Coriolis interaction between these states in conjunction with accidentally degenerate A1 and A2 energy levels produces a series of enhanced sensitivity coefficients. Phosphine is expected to occur in a number of different astrophysical environments and has potential for investigating a drifting constant. Furthermore, the displayed behaviour hints at a wider trend in molecules of C3v(M) symmetry, thus demonstrating that the splittings induced by higher-order ro-vibrational interactions are well suited for probing μ in other symmetric top molecules in space, since these low-frequency transitions can be straightforwardly detected by radio telescopes.
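
    For reference, the sensitivity coefficient used here is the standard logarithmic derivative, and the two benchmark values quoted above follow directly from how rotational and vibrational transition frequencies scale with μ (a textbook scaling argument, not specific to this paper):

        T \equiv \frac{\mu}{\nu}\,\frac{d\nu}{d\mu}, \qquad
        \nu_{\mathrm{rot}} \propto \mu^{-1} \;\Rightarrow\; T = -1, \qquad
        \nu_{\mathrm{vib}} \propto \mu^{-1/2} \;\Rightarrow\; T = -\tfrac{1}{2}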

  20. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.

  1. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  2. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  3. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  4. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  5. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition.
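
    A minimal sketch of how a Gaussian Process surrogate can support this kind of sensitivity analysis, using scikit-learn. The training data here come from a mock rate-coefficient function rather than coupled-channel scattering calculations, and all names and values are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(0)

        # Hypothetical training set: each row scales three adiabatic potentials
        # by up to +/-20%; y is the rate coefficient from a (mock) scattering code.
        X = rng.uniform(0.8, 1.2, size=(60, 3))
        mock_rate = lambda s: 1e-10 * s[0]**1.5 * np.exp(0.4 * (s[1] - 1.0)) * s[2]**0.2
        y = np.array([mock_rate(s) for s in X])

        kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2, 0.2])
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        # Sensitivity analysis: finite-difference the cheap GP surrogate around
        # the nominal potentials to rank the importance of each curve.
        s0, h = np.ones(3), 1e-3
        for i in range(3):
            sp, sm = s0.copy(), s0.copy()
            sp[i] += h; sm[i] -= h
            grad = (gp.predict([sp])[0] - gp.predict([sm])[0]) / (2 * h)
            print(f"d(rate)/d(scale of potential {i}): {grad:.3e}")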

  6. Development of a model and computer code to describe solar grade silicon production processes. [phase changes in chemical reactors

    NASA Technical Reports Server (NTRS)

    Gould, R. K.

    1978-01-01

    Mechanisms for the SiCl4/Na and SiF4/Na reaction systems were examined. Reaction schemes which include 25 elementary reactions were formulated for each system and run to test the sensitivity of the computed concentration and temperature profiles to the values assigned to the estimated rate coefficients. It was found that, for SiCl4/Na, the rate of production of free Si is largely mixing-limited for reasonable rate coefficient estimates. For the SiF4/Na system the results indicate that the endothermicities of many of the reactions involved in producing Si from SiF4/Na cause this system to be chemistry-limited rather than mixing-limited.

  7. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
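
    The "predicted flow" idea admits a compact illustration: once the sensitivity dq/dx is available from the quasi-analytical method, the state along the one-dimensional search can be approximated by a first-order Taylor expansion instead of a full re-analysis. All numbers and names below are invented for illustration.

        import numpy as np

        # q is the flow state, dq_dx its sensitivity matrix (from the
        # quasi-analytical method), s the search direction in design space.
        def predicted_flow(q, dq_dx, s, alpha):
            """First-order Taylor estimate of the state at design x + alpha*s."""
            return q + alpha * (dq_dx @ s)

        q = np.array([1.00, 0.35, 2.10])          # converged state at design x
        dq_dx = np.array([[0.5, -0.1],
                          [0.2,  0.3],
                          [-0.4, 0.8]])           # dq/dx from sensitivity analysis
        s = np.array([1.0, -0.5])                 # search direction

        for alpha in (0.1, 0.2, 0.4):
            print(alpha, predicted_flow(q, dq_dx, s, alpha))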

  8. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  9. The theory and measurement of noncoherent microwave scattering parameters. [for remote sensing of scenes via radar scatterometry

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.; Fung, A. K.

    1977-01-01

    The radar equation for incoherent scenes is derived and scattering coefficients are introduced in a systematic way to account for the complete interaction between the incident wave and the random scene. Intensity (power) and correlation techniques similar to that for coherent targets are proposed to measure all the scattering parameters. The sensitivity of the intensity technique to various practical realizations of the antenna polarization requirements is evaluated by means of computer simulated measurements, conducted with a scattering characteristic similar to that of the sea. It was shown that for scenes satisfying reciprocity one must admit three new cross-correlation scattering coefficients in addition to the commonly measured autocorrelation coefficients.

  10. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k-eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.

  11. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)
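
    The adjoint weighting that both methods estimate by Monte Carlo tallies reduces, in matrix form, to first-order perturbation theory: for A x = k x, dk = <ψ, δA x> / <ψ, x>, with ψ the adjoint (left) eigenvector. The sketch below is a small dense-matrix illustration of that formula, not TSUNAMI's implementation; the 2x2 "system matrix" and the perturbation are invented.

        import numpy as np

        A = np.array([[1.10, 0.30],
                      [0.25, 0.90]])    # mock stand-in for the system operator

        kvals, V = np.linalg.eig(A)
        i = np.argmax(kvals.real)
        k, x = kvals[i].real, V[:, i].real      # dominant eigenvalue and mode

        kl, W = np.linalg.eig(A.T)              # adjoint problem
        psi = W[:, np.argmax(kl.real)].real     # adjoint eigenvector

        dA = np.zeros_like(A); dA[0, 0] = 1.0   # perturb one "cross section"
        dk_adjoint = (psi @ dA @ x) / (psi @ x)

        h = 1e-6                                 # direct verification
        dk_fd = (np.linalg.eigvals(A + h * dA).real.max() - k) / h
        print(dk_adjoint, dk_fd)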

  12. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  13. Reduction technique for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1995-01-01

    A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.

  14. Sensitivity of light interaction computer model to the absorption properties of skin

    NASA Astrophysics Data System (ADS)

    Karsten, A. E.; Singh, A.

    2011-06-01

    Light based treatments offer major benefits to patients. Many of the light based treatments or diagnostic techniques need to penetrate the skin to reach the site of interest. Human skin is a highly scattering medium and the melanin in the epidermal layer of the skin is a major absorber of light in the visible and near infrared wavelength bands. The effect of increasing absorption in the epidermis is tested on skin-simulating phantoms as well as on a computer model. Changing the absorption coefficient between 0.1 mm⁻¹ and 1.0 mm⁻¹ resulted in a decrease of light reaching 1 mm into the sample. Transmission through a 1 mm thick sample decreased from 48% to 13% and from 31% to 2% for the different scattering coefficients.

  15. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
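
    The direct method for sensitivities mentioned above can be sketched compactly: for dy/dt = f(y, p), the sensitivity S = dy/dp satisfies dS/dt = (df/dy) S + df/dp. LSENS integrates the sensitivity system decoupled from the state equations; for brevity, the sketch below integrates them together for a single first-order reaction, an assumption made purely for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Reaction A -> B with rate constant kf; track y = [A] and S = dy/dkf.
        def rhs(t, z, kf):
            y, S = z[0], z[1]
            f = -kf * y            # state equation
            dfdy = -kf             # Jacobian df/dy
            dfdk = -y              # parameter derivative df/dkf
            return [f, dfdy * S + dfdk]

        kf, y0 = 2.0, 1.0
        sol = solve_ivp(rhs, (0.0, 1.0), [y0, 0.0], args=(kf,),
                        rtol=1e-10, atol=1e-12)

        # Analytic check: y = y0*exp(-kf*t), so dy/dkf = -t*y0*exp(-kf*t).
        t_end = sol.t[-1]
        print(sol.y[1, -1], -t_end * y0 * np.exp(-kf * t_end))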

  16. Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.

    PubMed

    Ingalls, Brian; Mincheva, Maya; Roussel, Marc R

    2017-07-01

    A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.

  17. Ab Initio Theoretical Studies on the Kinetics of Hydrogen Abstraction Type Reactions of Hydroxyl Radicals with CH3CCl2F and CH3CClF2

    NASA Astrophysics Data System (ADS)

    Saheb, Vahid; Maleki, Samira

    2018-03-01

    The hydrogen abstraction reactions from CH3Cl2F (R-141b) and CH3CClF2 (R-142b) by OH radicals are studied theoretically by semi-classical transition state theory. The stationary points for the reactions are located by using KMLYP density functional method along with 6-311++G(2 d,2 p) basis set and MP2 method along with 6-311+G( d, p) basis set. Single-point energy calculations are performed by the CBS-Q and G4 combination methods on the geometries optimized at the KMLYP/6-311++G(2 d,2 p) level of theory. Vibrational anharmonicity coefficients, x ij , which are needed for semi-classical transition state theory calculations, are computed at the KMLYP/6-311++G(2 d,2 p) and MP2/6-311+G( d, p) levels of theory. The computed barrier heights are slightly sensitive to the quantum-chemical method. Thermal rate coefficients are computed over the temperature range from 200 to 2000 K and they are shown to be in accordance with available experimental data. On the basis of the computed rate coefficients, the tropospheric lifetime of the CH3CCl2F and CH3CClF2 are estimated to be about 6.5 and 12.0 years, respectively.

  18. A computer model for one-dimensional mass and energy transport in and around chemically reacting particles, including complex gas-phase chemistry, multicomponent molecular diffusion, surface evaporation, and heterogeneous reaction

    NASA Technical Reports Server (NTRS)

    Cho, S. Y.; Yetter, R. A.; Dryer, F. L.

    1992-01-01

    Various chemically reacting flow problems highlighting chemical and physical fundamentals rather than flow geometry are presently investigated by means of a comprehensive mathematical model that incorporates multicomponent molecular diffusion, complex chemistry, and heterogeneous processes, in the interest of obtaining sensitivity-related information. The sensitivity equations were decoupled from those of the model, and then integrated one time-step behind the integration of the model equations, and analytical Jacobian matrices were applied to improve the accuracy of sensitivity coefficients that are calculated together with model solutions.

  19. Measurement properties of gingival biotype evaluation methods.

    PubMed

    Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga

    2018-06-01

    There are numerous methods to measure the dimensions of the gingival tissue, but few have compared the effectiveness of one method over another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating the gingival thickness were used: transparency of the periodontal probe, transgingival probing, photography, and a new CTS method. Intrarater and interrater reliability considering the categorical classification of the gingival biotype were estimated with Cohen's kappa coefficient, intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of the CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients were subjected to assessment of their gingival thickness. The highest agreement was found between transgingival and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for definition of a thin biotype. The new method of CTS assessment to classify gingival tissue thickness can be considered reliable and clinically useful to diagnose thick biotype. © 2018 Wiley Periodicals, Inc.

  20. New type side weir discharge coefficient simulation using three novel hybrid adaptive neuro-fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Zaji, Amir Hossein

    2018-03-01

    In many hydraulic structures, side weirs have a critical role. Accurately predicting the discharge coefficient is one of the most important stages in the side weir design process. In the present paper, a new, highly efficient side weir is investigated. To simulate the discharge coefficient of these side weirs, three novel soft computing methods are used. The process includes modeling the discharge coefficient with the hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) and three optimization algorithms, namely Differential Evolution (ANFIS-DE), Genetic Algorithm (ANFIS-GA) and Particle Swarm Optimization (ANFIS-PSO). In addition, sensitivity analysis is done to find the most efficient input variables for modeling the discharge coefficient of these types of side weirs. According to the results, the ANFIS method has higher performance when using simpler input variables. In addition, the ANFIS-DE with RMSE of 0.077 has higher performance than the ANFIS-GA and ANFIS-PSO methods with RMSE of 0.079 and 0.096, respectively.
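
    A full ANFIS is beyond a short sketch, but the role Differential Evolution plays in ANFIS-DE, searching parameter space to minimize RMSE, can be illustrated with SciPy on a simple empirical discharge-coefficient model. The model form and the data below are hypothetical.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical model Cd = a + b*Fr + c*(h/L) fitted to synthetic data.
        rng = np.random.default_rng(1)
        Fr = rng.uniform(0.2, 0.8, 40)            # Froude number
        hL = rng.uniform(0.1, 0.5, 40)            # head-to-length ratio
        Cd_obs = 0.55 + 0.12 * Fr - 0.20 * hL + rng.normal(0, 0.005, 40)

        def rmse(params):
            a, b, c = params
            return np.sqrt(np.mean((a + b * Fr + c * hL - Cd_obs) ** 2))

        result = differential_evolution(rmse, bounds=[(0, 1), (-1, 1), (-1, 1)],
                                        seed=0)
        print(result.x, result.fun)   # fitted coefficients and achieved RMSE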

  1. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-07-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
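
    The bulk Richardson number definition recommended above can be sketched concisely: the PBL top is taken as the lowest level at which the bulk Richardson number exceeds a critical value. The threshold (0.25 here) and the virtual-temperature details vary between implementations, so the code below is schematic, run on an idealized sounding.

        import numpy as np

        def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
            """z: heights (m); theta_v: virtual potential temperature (K);
            u, v: wind components (m/s); index 0 is the surface level."""
            g = 9.81
            for k in range(1, len(z)):
                du2 = (u[k] - u[0])**2 + (v[k] - v[0])**2
                rib = g * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (
                    theta_v[0] * max(du2, 1e-6))   # guard against zero shear
                if rib > ri_crit:
                    return z[k]
            return z[-1]

        # Idealized sounding: mixed layer capped by an inversion near 1 km.
        z = np.arange(0, 3000, 50.0)
        theta_v = 300.0 + np.where(z < 1000, 0.0, 0.006 * (z - 1000))
        u = 5.0 + 0.002 * z
        v = np.zeros_like(z)
        print(pbl_depth_bulk_ri(z, theta_v, u, v))  # ~just above 1000 m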

  2. Comparison of GEOS-5 AGCM Planetary Boundary Layer Depths Computed with Various Definitions

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, E. L.; Molod, A.

    2014-01-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Koppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.

  3. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-03-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.

  4. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.

  5. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
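
    The efficient information criteria named above can be computed directly from a least-squares fit. A minimal sketch assuming Gaussian errors, with additive constants omitted since only differences between models matter; the two "models" compared are hypothetical.

        import numpy as np

        def aicc_bic(sse, n, k):
            """n observations, k parameters, residual sum of squares sse."""
            aic = n * np.log(sse / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
            bic = n * np.log(sse / n) + k * np.log(n)
            return aicc, bic

        # Hypothetical comparison of two ground water models of the same valley:
        # model B adds 3 hydraulic-conductivity zones for a modest SSE reduction.
        print(aicc_bic(sse=12.4, n=60, k=4))   # model A
        print(aicc_bic(sse=11.9, n=60, k=7))   # model B: better fit, more params

    The model with the lower criterion value is preferred; the penalty terms guard against rewarding the extra parameters for a fit improvement they do not earn.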

  6. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  7. Mechanical design and analysis of a low beta squeezed half-wave resonator

    NASA Astrophysics Data System (ADS)

    He, Shou-Bo; Zhang, Cong; Yue, Wei-Ming; Wang, Ruo-Xu; Xu, Meng-Xin; Wang, Zhi-Jun; Huang, Shi-Chun; Huang, Yu-Lu; Jiang, Tian-Cai; Wang, Feng-Feng; Zhang, Sheng-Xue; He, Yuan; Zhang, Sheng-Hu; Zhao, Hong-Wei

    2014-08-01

    A superconducting squeezed-type half-wave resonator (HWR) of β=0.09 has been developed at the Institute of Modern Physics, Lanzhou. In this paper, a basic design of the stiffening structure is presented to counter the detuning effect caused by helium pressure and Lorentz force. The mechanical modal analysis has been performed with the finite element method (FEM). Based on these considerations, a new stiffening structure is proposed for the HWR cavity. The computation results concerning the frequency shift show that the low beta HWR cavity with the new stiffening structure has a low frequency sensitivity coefficient df/dp, a low Lorentz force detuning coefficient K_L, and stable mechanical properties.

  8. On the inverse Magnus effect for flow past a rotating cylinder

    NASA Astrophysics Data System (ADS)

    John, Benzi; Gu, Xiao-Jun; Barber, Robert W.; Emerson, David R.

    2016-11-01

    Flow past a rotating cylinder has been investigated using the direct simulation Monte Carlo method. The study focuses on the occurrence of the inverse Magnus effect under subsonic flow conditions. In particular, the variations in the coefficients of lift and drag have been investigated as a function of the Knudsen and Reynolds numbers. Additionally, a temperature sensitivity study has been carried out to assess the influence of the wall temperature on the computed aerodynamic coefficients. It has been found that both the Reynolds number and the cylinder wall temperature significantly affect the drag as well as the onset of lift inversion in the transition flow regime.

  9. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  10. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by global fitting of the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  11. A combined three-dimensional in vitro–in silico approach to modelling bubble dynamics in decompression sickness

    PubMed Central

    Stride, E.; Cheema, U.

    2017-01-01

    The growth of bubbles within the body is widely believed to be the cause of decompression sickness (DCS). Dive computer algorithms that aim to prevent DCS by mathematically modelling bubble dynamics and tissue gas kinetics are challenging to validate. This is due to lack of understanding regarding the mechanism(s) leading from bubble formation to DCS. In this work, a biomimetic in vitro tissue phantom and a three-dimensional computational model, comprising a hyperelastic strain-energy density function to model tissue elasticity, were combined to investigate key areas of bubble dynamics. A sensitivity analysis indicated that the diffusion coefficient was the most influential material parameter. Comparison of computational and experimental data revealed the bubble surface's diffusion coefficient to be 30 times smaller than that in the bulk tissue and dependent on the bubble's surface area. The initial size, size distribution and proximity of bubbles within the tissue phantom were also shown to influence their subsequent dynamics highlighting the importance of modelling bubble nucleation and bubble–bubble interactions in order to develop more accurate dive algorithms. PMID:29263127

  12. Efficient computation of kinship and identity coefficients on large pedigrees.

    PubMed

    Cheng, En; Elliott, Brendan; Ozsoyoglu, Z Meral

    2009-06-01

    With the rapidly expanding field of medical genetics and genetic counseling, genealogy information is becoming increasingly abundant. An important computation on pedigree data is the calculation of identity coefficients, which provide a complete description of the degree of relatedness of a pair of individuals. The areas of application of identity coefficients are numerous and diverse, from genetic counseling to disease tracking, and thus, the computation of identity coefficients merits special attention. However, the computation of identity coefficients is not done directly, but rather as the final step after computing a set of generalized kinship coefficients. In this paper, we first propose a novel Path-Counting Formula for calculating generalized kinship coefficients, which is motivated by Wright's path-counting method for computing inbreeding coefficient. We then present an efficient and scalable scheme for calculating generalized kinship coefficients on large pedigrees using NodeCodes, a special encoding scheme for expediting the evaluation of queries on pedigree graph structures. Furthermore, we propose an improved scheme using Family NodeCodes for the computation of generalized kinship coefficients, which is motivated by the significant improvement of using Family NodeCodes for inbreeding coefficient over the use of NodeCodes. We also perform experiments for evaluating the efficiency of our method, and compare it with the performance of the traditional recursive algorithm for three individuals. Experimental results demonstrate that the resulting scheme is more scalable and efficient than the traditional recursive methods for computing generalized kinship coefficients.
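
    For orientation, the classical recursion that such schemes accelerate is Wright's kinship recursion: φ(a, a) = ½(1 + φ(father(a), mother(a))), and φ(a, b) = ½(φ(father(a), b) + φ(mother(a), b)) when a is the more recent individual. A small memoized sketch on an invented pedigree, not the paper's NodeCodes scheme:

        from functools import lru_cache

        # pedigree: individual -> (father, mother); None marks a founder.
        pedigree = {
            "A": (None, None), "B": (None, None),
            "C": ("A", "B"), "D": ("A", "B"),   # full sibs
            "E": ("C", "D"),                     # inbred offspring
        }

        @lru_cache(maxsize=None)
        def kinship(a, b):
            if a is None or b is None:
                return 0.0
            if a == b:
                f, m = pedigree[a]
                return 0.5 * (1.0 + kinship(f, m))   # 0.5*(1 + inbreeding of a)
            # Recurse on the parents of the more recent individual; in this toy
            # pedigree, dictionary insertion order serves as a generation order.
            order = list(pedigree)
            if order.index(a) < order.index(b):
                a, b = b, a
            f, m = pedigree[a]
            return 0.5 * (kinship(f, b) + kinship(m, b))

        print(kinship("C", "D"))   # 0.25 for full sibs
        print(kinship("E", "E"))   # 0.5*(1 + 0.25) = 0.625

    The recursion's cost grows quickly with pedigree depth, which is what motivates path-counting and encoding schemes like the paper's on large pedigrees.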

  13. Calibration test of the temperature and strain sensitivity coefficient in regional reference grating method

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo

    2014-12-01

    In order to verify the validity of the regional reference grating method in solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements for the sensitivity coefficients of the regional reference grating method, national standard measurement equipment is used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor. The thermal expansion sensitivity coefficient of the steel used in ships is calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10~30°C with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/με within -2900~2900 με with a linearity of nearly 1; and the thermal expansion sensitivity coefficient of the ship steel is 23.438 pm/°C within 30~90°C with a linearity greater than 0.998. Finally, the calibration parameters are used in the actual ship structural health monitoring system for temperature compensation. The results show that the temperature compensation is effective and that the calibration parameters meet the engineering requirements, providing an important reference for the wide use of fiber Bragg grating sensors in engineering.
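
    Using the calibrated coefficients reported above, one common form of the temperature compensation works out as follows. Whether the substrate thermal-expansion term simply adds to the intrinsic thermal response depends on the mounting, and the wavelength readings below are invented, so treat this as an illustrative sketch rather than the paper's exact compensation formula.

        # The strain-free reference FBG senses temperature only; its shift is
        # used to remove the thermal part of the bonded strain FBG's shift.
        K_T = 28.16           # pm/degC, temperature sensitivity (calibrated)
        K_EPS = 1.32          # pm/microstrain, strain sensitivity (calibrated)
        ALPHA_STEEL = 23.438  # pm/degC, substrate thermal-expansion term

        dlambda_ref = 140.8     # pm, shift of the strain-free reference grating
        dlambda_strain = 450.0  # pm, shift of the strain grating nearby

        dT = dlambda_ref / K_T                              # 5.0 degC
        # Assumed model: subtract both the intrinsic thermal response and the
        # substrate-expansion contribution before converting to strain.
        eps = (dlambda_strain - (K_T + ALPHA_STEEL) * dT) / K_EPS
        print(f"dT = {dT:.2f} degC, strain = {eps:.1f} microstrain")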

  14. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
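
    The semi-analytical idea, analytical differentiation of the governing equations with finite-difference approximation of the coefficient matrices, is shown below for the static analog K(p)u = f; the transient case differentiates M, C, and K in the equations of motion the same way. The matrices are invented for illustration.

        import numpy as np

        def K_of_p(p):
            """Hypothetical stiffness matrix depending on a design variable p."""
            return np.array([[2.0 + p, -1.0],
                             [-1.0, 2.0 * p]])

        p, f = 1.5, np.array([1.0, 0.0])
        K = K_of_p(p)
        u = np.linalg.solve(K, f)

        h = 1e-6
        dK_dp = (K_of_p(p + h) - K_of_p(p - h)) / (2 * h)  # FD on the matrix only
        du_dp = np.linalg.solve(K, -(dK_dp @ u))           # analytic differentiation

        # Overall finite difference for comparison (re-solves the perturbed model):
        du_dp_fd = (np.linalg.solve(K_of_p(p + h), f) - u) / h
        print(du_dp, du_dp_fd)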

  15. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.

  16. Molecular dynamics simulation of real-fluid mutual diffusion coefficients with the Lennard-Jones potential model

    NASA Astrophysics Data System (ADS)

    Stoker, J. M.; Rowley, R. L.

    1989-09-01

    Mutual diffusion coefficients for selected alkanes in carbon tetrachloride were calculated using molecular dynamics and Lennard-Jones (LJ) potentials. Use of effective spherical LJ parameters is desirable when possible for two reasons: (i) computer time is saved due to the simplicity of the model and (ii) the number of parameters in the model is kept to a minimum. Results of this study indicate that mutual diffusivity is particularly sensitive to the molecular size cross parameter, σ12, and that the commonly used Lorentz-Berthelot rules are inadequate for mixtures in which the component structures differ significantly. Good agreement between simulated and experimental mutual diffusivities is obtained with a combining rule for σ12 which better represents these asymmetric mixtures using pure component LJ parameters obtained from self-diffusion coefficient data. The effect of alkane chain length on the mutual diffusion coefficient is correctly predicted. While the effects of alkane branching upon the diffusion coefficient are comparable in size to the uncertainty of these calculations, the qualitative trend due to branching is also correctly predicted by the MD results.
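
    For reference, the Lorentz-Berthelot rules the study found inadequate combine pure-component parameters as σ12 = (σ1 + σ2)/2 and ε12 = (ε1 ε2)^(1/2). A tiny sketch with invented parameter values, contrasting the arithmetic mean with a geometric-mean alternative for σ12; the paper's improved combining rule is not reproduced here.

        import numpy as np

        def lorentz_berthelot(sigma1, sigma2, eps1, eps2):
            """Arithmetic mean for size, geometric mean for well depth."""
            return 0.5 * (sigma1 + sigma2), np.sqrt(eps1 * eps2)

        sigma1, sigma2 = 3.8, 5.9      # angstroms, illustrative values
        eps1, eps2 = 120.0, 330.0      # eps/kB in K, illustrative values

        print(lorentz_berthelot(sigma1, sigma2, eps1, eps2))
        print(np.sqrt(sigma1 * sigma2))  # geometric-mean alternative for sigma12

    Even the small difference between the two σ12 estimates matters here, since the mutual diffusivity was found to be particularly sensitive to that cross parameter.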

  17. A comparison of experimental and theoretical results for leakage, pressure gradients, and rotordynamic coefficients for tapered annular gas seal

    NASA Technical Reports Server (NTRS)

    Elrod, D. A.; Childs, D. W.

    1986-01-01

    A brief review of current annular seal theory and a discussion of the predicted effect on stiffness of tapering the seal stator are presented. An outline of Nelson's analytical-computational method for determining rotordynamic coefficients for annular compressible-flow seals is included. Modifications to increase the maximum rotor speed of an existing air-seal test apparatus at Texas A&M University are described. Experimental results, including leakage, entrance-loss coefficients, pressure distributions, and normalized rotordynamic coefficients, are presented for four convergent-tapered, smooth-rotor, smooth-stator seals. A comparison of the test results shows that an inlet-to-exit clearance ratio of 1.5 to 2.0 provides the maximum direct stiffness, a clearance ratio of 2.5 provides the greatest stability, and a clearance ratio of 1.0 provides the least stability. The experimental results are compared to theoretical results from Nelson's analysis with good agreement. Test results for cross-coupled stiffness show less sensitivity to fluid prerotation than predicted.

  18. High Reynolds number analysis of flat plate and separated afterbody flow using non-linear turbulence models

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1996-01-01

    The ability of the three-dimensional Navier-Stokes method, PAB3D, to simulate the effect of Reynolds number variation using non-linear explicit algebraic Reynolds stress turbulence modeling was assessed. Subsonic flat plate boundary-layer flow parameters such as normalized velocity distributions, local and average skin friction, and shape factor were compared with DNS calculations and classical theory at various local Reynolds numbers up to 180 million. Additionally, surface pressure coefficient distributions and integrated drag predictions on an axisymmetric nozzle afterbody were compared with experimental data from 10 to 130 million Reynolds number. The high Reynolds number data were obtained from the NASA Langley 0.3-m Transonic Cryogenic Tunnel. There was generally good agreement of surface static pressure coefficients between the CFD and measurement. The change in pressure coefficient distributions with varying Reynolds number was similar to the experimental data trends, though slightly over-predicting the effect. The computational sensitivities to viscous modeling and turbulence modeling are shown. Integrated afterbody pressure drag was typically slightly lower than the experimental data. The change in afterbody pressure drag with Reynolds number was small both experimentally and computationally, even though the shape of the distribution was somewhat modified with Reynolds number.

  19. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
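
    Of the statistics listed, the dimensionless and composite scaled sensitivities have a particularly compact form: DSS_ij = (∂y_i/∂b_j) b_j ω_i^(1/2) and CSS_j = (mean_i DSS_ij²)^(1/2). A small sketch with an invented Jacobian, parameter values, and weights:

        import numpy as np

        def scaled_sensitivities(J, b, w):
            """J[i, j] = dy_i/db_j; b: parameter values; w: observation weights
            (inverse error variances). Returns DSS matrix and CSS per parameter."""
            dss = J * b[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]
            css = np.sqrt(np.mean(dss**2, axis=0))
            return dss, css

        # Hypothetical 4 observations x 2 parameters:
        J = np.array([[0.8, 0.1], [0.5, 0.3], [0.2, 0.7], [0.1, 0.9]])
        b = np.array([10.0, 0.5])
        w = np.array([1.0, 1.0, 4.0, 4.0])  # more precise observations weigh more

        dss, css = scaled_sensitivities(J, b, w)
        print(css)   # ranks which parameters the data inform best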

  20. The Feasibility of Classifying Breast Masses Using a Computer-Assisted Diagnosis (CAD) System Based on Ultrasound Elastography and BI-RADS Lexicon.

    PubMed

    Fleury, Eduardo F C; Gianini, Ana Claudia; Marcomini, Karem; Oliveira, Vilmar

    2018-01-01

    To determine the applicability of a computer-aided diagnostic (CAD) system for strain elastography in the classification of breast masses diagnosed by ultrasound and scored using the criteria of the Breast Imaging Reporting and Data System (BI-RADS) ultrasound lexicon, and to determine the diagnostic accuracy and interobserver variability. This prospective study was conducted between March 1, 2016, and May 30, 2016. A total of 83 breast masses subjected to percutaneous biopsy were included. Ultrasound elastography images obtained before biopsy were interpreted by 3 radiologists with and without the aid of the CAD system for strain elastography. For each radiologist, sensitivity, specificity, and diagnostic accuracy were evaluated with and without the CAD system. Interobserver variability was assessed using a weighted κ test and an intraclass correlation coefficient. The areas under the receiver operating characteristic curves were also calculated. The areas under the curve were 0.835, 0.801, and 0.765 for readers 1, 2, and 3, respectively, without the CAD system, and 0.900, 0.926, and 0.868, respectively, with it. The intraclass correlation coefficient between the 3 readers was 0.6713 without the CAD system and 0.811 with it. The proposed CAD system for strain elastography has the potential to improve the diagnostic performance of radiologists in breast examination using ultrasound associated with elastography.

  1. Computations of Viking Lander Capsule Hypersonic Aerodynamics with Comparisons to Ground and Flight Data

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.

    2006-01-01

    Comparisons are made between the LAURA Navier-Stokes code and Viking Lander Capsule hypersonic aerodynamics data from ground and flight measurements. Wind tunnel data are available for a 3.48 percent scale model at Mach 6 and a 2.75 percent scale model at Mach 10.35, both under perfect gas air conditions. Viking Lander 1 aerodynamics flight data also exist from on-board instrumentation for velocities between 2900 and 4400 m/sec (Mach 14 to 23.3). LAURA flowfield solutions are obtained for the geometry as tested or flown, including sting effects at tunnel conditions and finite-rate chemistry effects in flight. Using the flight vehicle center-of-gravity location (trim angle approx. equals -11.1 deg), the computed trim angle at tunnel conditions is within 0.31 degrees of the angle derived from Mach 6 data and 0.13 degrees from the Mach 10.35 trim angle. LAURA Mach 6 trim lift and drag force coefficients are within 2 percent of measured data, and computed trim lift-to-drag ratio is within 4 percent of the data. Computed trim lift and drag force coefficients at Mach 10.35 are within 5 percent and 3 percent, respectively, of wind tunnel data. Computed trim lift-to-drag ratio is within 2 percent of the Mach 10.35 data. Using the nominal density profile and center-of-gravity location, LAURA trim angle at flight conditions is within 0.5 degrees of the total angle measured from on-board instrumentation. LAURA trim lift and drag force coefficients at flight conditions are within 7 and 5 percent, respectively, of the flight data. Computed trim lift-to-drag ratio is within 4 percent of the data. Computed aerodynamic sensitivities to center-of-gravity location, atmospheric density, and grid refinement are generally small. The results will enable a better estimate of aerodynamic uncertainties for future Mars entry vehicles where non-zero angle-of-attack is required.

  2. Performance of e-ASPECTS software in comparison to that of stroke physicians on assessing CT scans of acute ischemic stroke patients.

    PubMed

    Herweh, Christian; Ringleb, Peter A; Rauch, Geraldine; Gerry, Steven; Behrens, Lars; Möhlenbruch, Markus; Gottorf, Rebecca; Richter, Daniel; Schieber, Simon; Nagel, Simon

    2016-06-01

    The Alberta Stroke Program Early CT score (ASPECTS) is an established 10-point quantitative topographic computed tomography scan score to assess early ischemic changes. We compared the performance of the e-ASPECTS software with those of stroke physicians at different professional levels. The baseline computed tomography scans of acute stroke patients, in whom computed tomography and diffusion-weighted imaging scans were obtained less than two hours apart, were retrospectively scored by e-ASPECTS as well as by three stroke experts and three neurology trainees blinded to any clinical information. The ground truth was defined as the ASPECTS on diffusion-weighted imaging scored by another two non-blinded independent experts on a consensus basis. Sensitivity and specificity in an ASPECTS region-based and an ASPECTS score-based analysis, as well as receiver-operating characteristic curves, Bland-Altman plots with mean score error, and Matthews correlation coefficients, were calculated. Comparisons were made between the human scorers and e-ASPECTS with diffusion-weighted imaging as the ground truth. Two methods for clustered data were used to estimate sensitivity and specificity in the region-based analysis. In total, 34 patients were included and 680 (34 × 20) ASPECTS regions were scored. Mean time from onset to computed tomography was 172 ± 135 min, and the mean time difference between computed tomography and magnetic resonance imaging was 41 ± 31 min. The region-based sensitivity (46.46% [CI: 30.8;62.1]) of e-ASPECTS was better than that of three trainees and one expert (p ≤ 0.01) and not statistically different from that of the other two experts. Specificity (94.15% [CI: 91.7;96.6]) was lower than that of one expert and one trainee (p < 0.01) and not statistically different from that of the other four physicians. e-ASPECTS had the best Matthews correlation coefficient, 0.44 (experts: 0.38 ± 0.08; trainees: 0.19 ± 0.05), and the lowest mean score error, 0.56 (experts: 1.44 ± 1.79; trainees: 1.97 ± 2.12). e-ASPECTS showed performance similar to that of stroke experts in the assessment of brain computed tomography scans of acute ischemic stroke patients with the Alberta Stroke Program Early CT score method. © 2016 World Stroke Organization.
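
    The Matthews correlation coefficient used above is a single confusion-matrix statistic that stays informative on imbalanced data such as region-based ASPECTS scoring. A minimal sketch; the counts below are hypothetical, back-solved from the reported rates only so the 680-region total matches.

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when a marginal sum is zero and the ratio is undefined."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical region-based counts summing to the 680 scored regions
print(matthews_corrcoef(tp=46, tn=547, fp=34, fn=53))  # ~0.44
```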

  3. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the metamodel coefficients efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
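
    Once a cheap metamodel is available, first-order Sobol' indices can be estimated by plain Monte Carlo. A minimal sketch using the pick-and-freeze (Saltelli-type) estimator on the classic Ishigami function, which stands in here for any inexpensive surrogate:

```python
import numpy as np

def first_order_sobol(f, d, n=50_000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model f defined on the unit hypercube [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # resample only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # Saltelli-type estimator
    return S

def ishigami(x):
    """Ishigami test function, inputs mapped from [0, 1] to [-pi, pi]."""
    x = -np.pi + 2.0 * np.pi * x
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

print(first_order_sobol(ishigami, d=3))  # roughly [0.31, 0.44, 0.00]
```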

  4. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed coefficient regularization algorithm has frequently been used to improve the ill-conditioning of the Newton-Raphson algorithm. However, a lot of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it has to be chosen manually from a number of candidate coefficients and is held constant over all iterations. Thus, the fixed coefficient regularization algorithm sometimes distorts the reconstructed information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalue of the ill-conditioned matrix, so effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth the computation time of the fixed coefficient regularization algorithm. The images are obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
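
    A generic sketch of the idea, not the authors' exact algorithm: tie the Tikhonov smoothing coefficient of each damped Newton-Raphson update to the spectrum of the ill-conditioned normal matrix, so no manual tuning is needed. The scaling factor alpha is an assumed heuristic.

```python
import numpy as np

def regularized_newton_step(J, residual, alpha=1e-2):
    """One regularized Newton-Raphson update for an ill-conditioned
    reconstruction problem.  The smoothing coefficient is derived from
    the largest eigenvalue of J^T J rather than hand-tuned."""
    JtJ = J.T @ J
    lam = alpha * np.linalg.eigvalsh(JtJ).max()  # adaptive smoothing coefficient
    step = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ residual)
    return step, lam

J = np.array([[1.0, 0.0], [1.0, 1e-4]])          # nearly rank-deficient Jacobian
print(regularized_newton_step(J, np.array([1.0, 1.0])))
```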

  5. Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review

    NASA Technical Reports Server (NTRS)

    Tripp, John S.

    1999-01-01

    This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
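
    A compressed sketch of the first stage described above: estimate calibration coefficients by least squares and recover their standard errors from the coefficient covariance, which can then be propagated into confidence intervals on computed loads. The matrix shapes and the full-rank assumption are simplifications, not the paper's exact procedure.

```python
import numpy as np

def calibration_fit(X, y):
    """Least-squares calibration fit with coefficient standard errors.
    X : (n, p) design matrix of applied-load terms (sensitivities plus
        interaction products); y : (n,) measured balance outputs.
    Assumes n > p and full column rank."""
    beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    sigma2 = res.item() / (n - p)            # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)    # covariance of the coefficients
    return beta, np.sqrt(np.diag(cov))       # coefficients and std errors
```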

  6. Lecture Notes on Criticality Safety Validation Using MCNP & Whisper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – Ck's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usage are discussed.

  7. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from these equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this is not always satisfied in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
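
    The paper derives its coefficients by perturbation theory; purely as an illustration, the same normalized quantities S[j, k] = (σ_k / I_j)(∂I_j/∂σ_k) can be approximated by central finite differences around a nominal set of one-group constants. The toy importance function is invented.

```python
import numpy as np

def importance_sensitivity(importance_fn, sigma, rel_step=1e-3):
    """Central-difference estimate of the normalized sensitivity matrix
    S[j, k]: relative change of importance I_j per relative change of
    one-group constant sigma_k."""
    sigma = np.asarray(sigma, dtype=float)
    I0 = np.asarray(importance_fn(sigma))
    S = np.zeros((I0.size, sigma.size))
    for k in range(sigma.size):
        h = rel_step * sigma[k]
        up, dn = sigma.copy(), sigma.copy()
        up[k] += h
        dn[k] -= h
        dI = (np.asarray(importance_fn(up)) - np.asarray(importance_fn(dn))) / (2 * h)
        S[:, k] = dI * sigma[k] / I0
    return S

# Toy importance function of two "cross sections" (illustrative only)
toy = lambda s: np.array([s[0] / (s[0] + s[1]), s[1] ** 0.5])
print(importance_sensitivity(toy, sigma=[2.0, 0.5]))
```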

  8. Description of CASCOMP Comprehensive Airship Sizing and Performance Computer Program, Volume 2

    NASA Technical Reports Server (NTRS)

    Davis, J.

    1975-01-01

    The computer program CASCOMP, which may be used in comparative design studies of lighter than air vehicles by rapidly providing airship size and mission performance data, was prepared and documented. The program can be used to define design requirements such as weight breakdown, required propulsive power, and physical dimensions of airships which are designed to meet specified mission requirements. The program is also useful in sensitivity studies involving both design trade-offs and performance trade-offs. The input to the program primarily consists of a series of single point values such as hull overall fineness ratio, number of engines, airship hull and empennage drag coefficients, description of the mission profile, and weights of fixed equipment, fixed useful load and payload. In order to minimize computation time, the program makes ample use of optional computation paths.

  9. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
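
    A small sketch of two SaSAT-style analyses, Latin hypercube sampling and correlation coefficients, using SciPy rather than the Matlab toolbox itself; the parameter names and the toy response are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Latin hypercube sample of a 3-parameter epidemic-style model, then rank
# correlation between each parameter and the output.
sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=500)
lo, hi = np.array([0.1, 0.01, 0.05]), np.array([0.9, 0.2, 0.5])
params = qmc.scale(u, lo, hi)                 # beta, gamma, mu (hypothetical)

output = params[:, 0] / (params[:, 1] + params[:, 2])   # toy R0-like response

for name, col in zip(["beta", "gamma", "mu"], params.T):
    rho, p = spearmanr(col, output)
    print(f"{name}: rank correlation {rho:+.2f} (p={p:.1e})")
```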

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e., surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
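
    A crude stand-in for the third adaptivity level (stepwise retention of influential polynomials), under the simplifying assumption that influence can be ranked by standardized least-squares coefficients; the basis construction and toy data are invented.

```python
import numpy as np

def sparse_poly_regression(Phi, y, keep=3):
    """Fit all candidate basis columns by least squares, keep only the
    `keep` terms with the largest standardized coefficients, and refit.
    Phi: (n, m) candidate basis evaluations; y: (n,) model runs."""
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    score = np.abs(beta) * Phi.std(axis=0)        # influence of each term
    idx = np.argsort(score)[::-1][:keep]          # most influential terms
    beta_sparse, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    return idx, beta_sparse

# Toy: 200 runs of a model that really depends on only two monomials
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(200, 4))
Phi = np.column_stack([x[:, 0]**i * x[:, 1]**j for i in range(3) for j in range(3)])
y = 2.0 * x[:, 0] + 0.5 * x[:, 0] * x[:, 1] + 0.01 * rng.normal(size=200)
print(sparse_poly_regression(Phi, y, keep=3))
```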

  11. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
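
    Radiometric PSP reduction of the kind Legato performs typically rests on a Stern-Volmer-type calibration; a minimal in-situ-style sketch with made-up intensity ratios and tap pressures (the linear form below is the standard single-luminophore relation, not necessarily Legato's exact model):

```python
import numpy as np

# Classic Stern-Volmer form:  I_ref / I = A + B * (p / p_ref)
ratio = np.array([1.00, 1.08, 1.17, 1.26, 1.35])      # I_ref / I at taps (fake)
p_over_pref = np.array([1.00, 1.10, 1.20, 1.30, 1.40])

B, A = np.polyfit(p_over_pref, ratio, 1)              # slope B, intercept A
print(f"A = {A:.3f}, B = {B:.3f}")

# Convert a paint intensity ratio to pressure with the fitted coefficients
p_map = lambda r, p_ref: p_ref * (r - A) / B
print(p_map(1.20, p_ref=101325.0))
```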

  12. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
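
    To illustrate plain (non-adaptive) NISP in one dimension: for a standard-normal input, the PC coefficients are projections onto probabilists' Hermite polynomials, computable by Gauss-Hermite quadrature. A minimal sketch, not the FANISP algorithm itself:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def nisp_coefficients(f, order, nquad=32):
    """Non-Intrusive Spectral Projection for one standard-normal input:
    c_k = E[f(xi) He_k(xi)] / k!, evaluated with Gauss-Hermite quadrature
    (probabilists' convention, weight exp(-x^2/2))."""
    x, w = He.hermegauss(nquad)
    w = w / sqrt(2 * pi)               # normalize to a probability measure
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, np.eye(order + 1)[k]))
                     / factorial(k) for k in range(order + 1)])

# Example: f(xi) = exp(xi); exact coefficients are exp(0.5)/k!
print(nisp_coefficients(np.exp, order=5))   # ~ [1.649, 1.649, 0.824, ...]
```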

  13. An Investigation on the Spatial Variability of Manning Roughness Coefficients in Continental-scale River Routing Simulations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Hong, Y.; Lei, X.; Leung, L. R.; Li, H. Y.; Getirana, A.

    2017-12-01

    As an essential component of Earth system modeling, continental-scale river routing computation plays an important role in applications of Earth system models, such as evaluating the impacts of global change on water resources and flood hazards. Streamflow timing, which depends on the modeled flow velocities, can be an important aspect of the model results. River flow velocities have been estimated using Manning's equation, in which the Manning roughness coefficient is a key and sensitive parameter. In some early continental-scale studies, the Manning coefficient was determined with simplified methods, such as using a constant value for the entire basin. However, large spatial variability is expected in the Manning coefficients for the numerous channels composing the river network in distributed continental-scale hydrologic modeling. In an application of a continental-scale river routing model to the Amazon Basin, we use spatially varying Manning coefficients dependent on channel sizes and attempt to represent the dominant spatial variability of the Manning coefficients. Based on comparisons of simulation results with in situ streamflow records and remotely sensed river stages, we investigate the comparatively optimal Manning coefficients and explicitly demonstrate the advantages of using spatially varying Manning coefficients. The understanding obtained in this study could be helpful in the modeling of surface hydrology at regional to continental scales.
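
    For reference, Manning's equation gives the mean velocity as v = (1/n) R^(2/3) S^(1/2) in SI units. A minimal sketch with a hypothetical channel-size-to-roughness mapping; the thresholds and n values below are illustrative, not the study's calibrated values.

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity from Manning's equation (SI units)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

def manning_n_from_width(width_m):
    """Hypothetical size-dependent roughness: wider, deeper channels
    tend toward smaller n than narrow tributaries."""
    return 0.050 if width_m < 50 else 0.035 if width_m < 500 else 0.025

for w in (10, 100, 1000):
    v = manning_velocity(manning_n_from_width(w),
                         hydraulic_radius_m=0.1 * w ** 0.5, slope=1e-4)
    print(f"width {w:>5} m: n={manning_n_from_width(w)}, v={v:.2f} m/s")
```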

  14. Development of a Microsoft Excel tool for applying a factor retention criterion of a dimension coefficient to a survey on patient safety culture.

    PubMed

    Chien, Tsair-Wei; Shao, Yang; Jen, Dong-Hui

    2017-10-27

    Many quality-of-life studies have been conducted in healthcare settings, but few have used Microsoft Excel to incorporate Cronbach's α with the dimension coefficient (DC) for describing a scale's characteristics. To present a computer module that can report a scale's validity, we manipulated datasets to verify a DC that can be used as a factor retention criterion, and demonstrated its usefulness with a patient safety culture survey (PSC). Microsoft Excel Visual Basic for Applications was used to design a computer module for simulating 2000 datasets fitting the Rasch rating scale model. The datasets consisted of (i) five dual correlation coefficients (correl. = 0.3, 0.5, 0.7, 0.9, and 1.0) on two latent traits (i.e., true scores) following a normal distribution and responses to their respective 1/3 and 2/3 items in length; (ii) 20 scenarios of item lengths from 5 to 100; and (iii) 20 sample sizes from 50 to 1000. Each item, containing 5-point polytomous responses, was uniformly distributed in difficulty across a ± 2 logit range. Three methods (i.e., dimension interrelation ≥0.7, Horn's parallel analysis (PA) 95% confidence interval, and individual random eigenvalues) were used for determining one factor to retain. DC refers to the binary classification (1 as one factor and 0 as many factors) used for examining accuracy with the indicators sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The scale's reliability and DC were simultaneously calculated for each simulated dataset. Real PSC data were then used with the DC to interpret reports of unit-based construct validity using the author-made MS Excel module. The DC method presented accurate sensitivity (=0.96), specificity (=0.92) with a DC criterion (≥0.70), and AUC (=0.98) that were higher than those of the two PA methods. PA combined with DC yielded good sensitivity (=0.96), specificity (=1.0) with a DC criterion (≥0.70), and AUC (=0.99). Advances in computer technology may enable healthcare users familiar with MS Excel to apply DC as a factor retention criterion for determining a scale's unidimensionality and evaluating a scale's quality.

  15. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The system performs 3 major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, diffusion parameters (the apparent diffusion coefficients of the delineated prostate volumes at different b values) are estimated and normalized, and then refined using a generalized Gaussian Markov random field model. Third, the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promising performance of the presented computer-aided diagnostic system. PMID:29804518

  16. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance, in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes, as the drag prediction characteristics of balances of similar size and capacities can be objectively compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
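
    A hedged sketch of the load-measurement part of such an estimate: propagate the assumed 1.0 microV/V output variation through balance partial derivatives into a drag-coefficient bound. All numerical values and the simple wind-axis rotation are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def drag_precision_upper_bound(dAF_dV, dNF_dV, alpha_deg, q_Pa, S_m2, dV=1.0e-6):
    """Worst-case delta C_D from an assumed bridge-output variation dV.
    dAF_dV, dNF_dV : axial/normal force per unit bridge output (N per V/V)."""
    a = np.radians(alpha_deg)
    # drag in wind axes from body-axis axial (AF) and normal (NF) forces
    dD_dV = dAF_dV * np.cos(a) + dNF_dV * np.sin(a)
    return abs(dD_dV) * dV / (q_Pa * S_m2)

print(drag_precision_upper_bound(dAF_dV=2.0e5, dNF_dV=1.0e6,
                                 alpha_deg=2.0, q_Pa=5000.0, S_m2=0.1))
```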

  17. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision, and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. 
This method has shown greater efficiency and stability with equal or better accuracy than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other platforms with FORTRAN compilers which support NAMELIST input. LSENS required 4Mb of RAM under SunOS 4.1.1 and 3.4Mb of RAM under VMS 5.5.1. The standard distribution medium for LSENS is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 1600 BPI 9-track magnetic tape or a TK50 tape cartridge in DEC VAX BACKUP format. Alternate distribution media and formats are available upon request. LSENS was developed in 1992.
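
    As a minimal illustration of direct-method sensitivity analysis for kinetics (not the LSENS/LSODE implementation itself): for dy/dt = -k y, the sensitivity s = dy/dk satisfies the companion ODE ds/dt = -k s - y, integrated alongside the state.

```python
from scipy.integrate import solve_ivp

k = 2.0

def rhs(t, z):
    y, s = z                      # state and its sensitivity dy/dk
    return [-k * y, -k * s - y]

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
y_end, s_end = sol.y[:, -1]
print(y_end, s_end)               # exact: e^(-kt) and -t*e^(-kt) at t = 2
```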

  18. Correlations between symptoms, nasal endoscopy, and in-office computed tomography in post-surgical chronic rhinosinusitis patients.

    PubMed

    Ryan, William R; Ramachandra, Tara; Hwang, Peter H

    2011-03-01

    To determine correlations between symptoms, nasal endoscopy findings, and computed tomography (CT) scan findings in post-surgical chronic rhinosinusitis (CRS) patients. Cross-sectional. A total of 51 CRS patients who had undergone endoscopic sinus surgery (ESS) completed symptom questionnaires, underwent endoscopy, and received an in-office sinus CT scan during one clinic visit. For metrics, we used the Sinonasal Outcomes Test-20 (SNOT-20) questionnaire, visual analog symptom scale (VAS), Lund-Kennedy endoscopy scoring scale, and Lund-MacKay (LM) CT scoring scale. We determined Pearson correlation coefficients, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) between scores for symptoms, endoscopy, and CT. The SNOT-20 score and most VAS symptoms had poor correlation coefficients with both endoscopy and CT scores (0.03-0.24). Nasal drainage of pus, nasal congestion, and impaired sense of smell had moderate correlation coefficients with endoscopy and CT (0.24-0.42). Endoscopy had a strong correlation coefficient with CT (0.76). Drainage, edema, and polyps had strong correlation coefficients with CT (0.80, 0.69, and 0.49, respectively). Endoscopy had a PPV of 92.5% and NPV of 45.5% for detecting an abnormal sinus CT (LM score ≥1). In post-ESS CRS patients, most symptoms do not correlate well with either endoscopy or CT findings. Endoscopy and CT scores correlate well. Abnormal endoscopy findings have the ability to confidently rule in the presence of CT opacification, thus validating the importance of endoscopy in clinical decision making. However, a normal endoscopy cannot assure a normal CT. Thus, symptoms, endoscopy, and CT are complementary in the evaluation of the post-ESS CRS patient. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  19. Sensitivity analysis for the coupling of a subglacial hydrology model with a 3D ice-sheet model.

    NASA Astrophysics Data System (ADS)

    Bertagna, L.; Perego, M.; Gunzburger, M.; Hoffman, M. J.; Price, S. F.

    2017-12-01

    When studying the movement of ice sheets, one of the most important factors influencing the velocity of the ice is the amount of friction against the bedrock. Usually, this is modeled by a friction coefficient that may depend on the bed geometry and other quantities, such as the temperature and/or water pressure at the ice-bedrock interface. These quantities are often assumed to be known (either by indirect measurements or by means of parameter estimation) and constant in time. Here, we present a 3D computational model for the simulation of ice dynamics that incorporates a 2D model proposed by Hewitt (2011) for the subglacial water pressure. The hydrology model is fully coupled with the Blatter-Pattyn model for the ice sheet flow, as the subglacial water pressure appears in the expression for the ice friction coefficient, and the ice velocity appears as a source term in the hydrology model. We present results on real geometries and perform a sensitivity analysis with respect to the hydrology model parameters.

  20. A novel HMM distributed classifier for the detection of gait phases by means of a wearable inertial sensor network.

    PubMed

    Taborri, Juri; Rossi, Stefano; Palermo, Eduardo; Patanè, Fabrizio; Cappa, Paolo

    2014-09-02

    In this work, we apply a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on hierarchical weighted decisions from the outputs of scalar Hidden Markov Models (HMM) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier, as well as the scalar and vectorial classifiers already proposed in the literature, with cross-validation, the classifiers were compared in terms of sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. The distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. The distributed and scalar classifiers showed computational loads about 100 times lower than that of the vectorial classifier. In addition, the distributed classifiers showed excellent reliability for the evaluation of mean time and good-to-excellent reliability for the coefficient of variation. In conclusion, due to its better performance and small computational load, the proposed novel distributed classifier can be implemented in real-time applications of gait phase recognition, such as evaluating gait variability in patients or controlling active orthoses for the recovery of mobility of lower limb joints.

  1. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. A NACA 63(sub 2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7- by 10-foot wind tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent, relative to the highest lift coefficient achieved by any baseline configuration, at an angle of attack of zero degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.

  2. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
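
    The two-point probability model referenced above is of the Rosenblueth point-estimate family; assuming that family, a minimal sketch for three uncertain variables evaluates the deterministic model at every mean ± sigma corner (2^3 = 8 runs) to approximate the output mean and variance. The toy response and statistics are hypothetical.

```python
import numpy as np
from itertools import product

def two_point_estimate(g, means, sds):
    """Rosenblueth-style two-point estimate: evaluate g at every
    mean +/- sigma corner and average (symmetric, uncorrelated case)."""
    corners = np.array(list(product(*[(m - s, m + s) for m, s in zip(means, sds)])))
    vals = np.array([g(c) for c in corners])
    return vals.mean(), vals.var()

# Toy "water-table" response of three lumped uncertain variables (invented)
g = lambda x: 0.5 * x[0] + np.sqrt(x[1]) + 0.1 * x[2]
print(two_point_estimate(g, means=[10.0, 0.2, 5.0], sds=[2.0, 0.05, 1.0]))
```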

  4. A Theoretical Reassessment of Microbial Maintenance and Implications for Microbial Ecology Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Gangsheng; Post, Wilfred M

    We attempted to reconcile three microbial maintenance models (Herbert, Pirt, and Compromise) through a critical reassessment. We provided a rigorous proof that the true growth yield coefficient (YG) is the ratio of the specific maintenance rate (a in Herbert) to the maintenance coefficient (m in Pirt). Other findings from this study include: (1) the Compromise model is identical to the Herbert for computing microbial growth and substrate consumption, but it expresses the dependence of maintenance on both microbial biomass and substrate; (2) the maximum specific growth rate in the Herbert (μmax,H) is higher than those in the other two models (μmax,P and μmax,C), and the difference is the physiological maintenance factor (mq = a); and (3) the overall maintenance coefficient (mT) is more sensitive to mq than to the specific growth rate (μG) and YG. Our critical reassessment of microbial maintenance provides a new approach for quantifying some important components in soil microbial ecology models.
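
    To make the contrast concrete, a simplified sketch of the two classical formulations under the reported relation YG = a/m: Herbert charges maintenance against biomass (endogenous metabolism), while Pirt charges it against substrate. The rate constants are illustrative, and the equations are textbook simplifications rather than the paper's exact models.

```python
from scipy.integrate import solve_ivp

YG, mu, a = 0.5, 0.3, 0.02   # true growth yield, specific growth rate, Herbert's a
m = a / YG                   # Pirt's maintenance coefficient from YG = a / m

def herbert(t, z):
    X, S = z                 # maintenance as endogenous loss of biomass
    return [(mu - a) * X, -(mu / YG) * X]

def pirt(t, z):
    X, S = z                 # maintenance as an extra substrate sink
    return [mu * X, -(mu / YG + m) * X]

for name, rhs in (("Herbert", herbert), ("Pirt", pirt)):
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 100.0], rtol=1e-8)
    print(name, "X(10) = %.3f  S(10) = %.3f" % tuple(sol.y[:, -1]))
```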

  5. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  6. An efficient algorithm using matrix methods to solve wind tunnel force-balance equations

    NASA Technical Reports Server (NTRS)

    Smith, D. L.

    1972-01-01

    An iterative procedure applying matrix methods has been developed to provide an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated with the new program.
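
    A minimal sketch of the kind of iteration described; the matrix shapes, the quadratic interaction form, and the convergence test are assumptions for illustration. Outputs V are modeled as V = S L + C2 q(L), where q(L) stacks the second-order load products, and loads are recovered by fixed-point iteration.

```python
import numpy as np

def balance_loads(outputs, S, C2, tol=1e-10, max_iter=50):
    """Iterative force-balance data reduction:
    L_{k+1} = S^{-1} (V - C2 @ q(L_k)), starting from the linear solution."""
    def q(L):
        return np.array([L[i] * L[j] for i in range(len(L))
                                      for j in range(i, len(L))])
    S_inv = np.linalg.inv(S)
    L = S_inv @ outputs                      # first estimate: ignore interactions
    for _ in range(max_iter):
        L_new = S_inv @ (outputs - C2 @ q(L))
        if np.max(np.abs(L_new - L)) < tol:  # converged to the unique solution
            return L_new
        L = L_new
    raise RuntimeError("balance iteration did not converge")

# Tiny 2-component example (illustrative numbers)
S = np.array([[2.0, 0.1], [0.05, 1.5]])
C2 = np.array([[0.01, 0.002, 0.0], [0.0, 0.001, 0.02]])  # L1^2, L1*L2, L2^2
V = S @ np.array([3.0, 4.0]) + C2 @ np.array([9.0, 12.0, 16.0])
print(balance_loads(V, S, C2))               # recovers ~[3, 4]
```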

  7. Computer-aided classification of optical images for diagnosis of osteoarthritis in the finger joints.

    PubMed

    Zhang, Jiang; Wang, James Z; Yuan, Zhen; Sobel, Eric S; Jiang, Huabei

    2011-01-01

    This study presents a computer-aided classification method to distinguish osteoarthritis finger joints from healthy ones based on the functional images captured by x-ray guided diffuse optical tomography. Three imaging features, joint space width, optical absorption, and scattering coefficients, are employed to train a Least Squares Support Vector Machine (LS-SVM) classifier for osteoarthritis classification. The 10-fold validation results show that all osteoarthritis joints are clearly identified and all healthy joints are ruled out by the LS-SVM classifier. The best sensitivity, specificity, and overall accuracy of the classification by experienced technicians based on manual calculation of optical properties and visual examination of optical images are only 85%, 93%, and 90%, respectively. Therefore, our LS-SVM based computer-aided classification is a considerably improved method for osteoarthritis diagnosis.
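
    A minimal least-squares SVM in the Suykens formulation, the classifier family named above; the kernel choice, hyperparameters, and three-feature synthetic data are assumptions, not the study's trained model.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier (RBF kernel) by solving the linear
    KKT system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
    with labels y in {-1, +1}."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        sq = np.sum((Z[:, None, :] - X[None, :, :]) ** 2, axis=2)
        return np.sign(np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b)

    return predict

# Toy 3-feature data (joint space width, absorption, scattering -- synthetic)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.where(X[:, 0] + X[:, 2] > 0, 1.0, -1.0)
predict = lssvm_train(X, y)
print((predict(X) == y).mean())   # training accuracy of the sketch
```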

  8. Molecular engineering of cyanine dyes to design a panchromatic response in Co-sensitized dye-sensitized solar cells

    DOE PAGES

    Pepe, Giulio; Cole, Jacqueline M.; Waddell, Paul G.; ...

    2016-04-05

    Cyanines are optically tunable dyes with high molar extinction coefficients, suitable for applications in co-sensitized dye-sensitized solar cells (DSCs); yet they have barely been applied in this way. This might be due to the lack of a rational molecular design strategy that efficiently exploits cyanine properties. This study computationally re-designs these dyes to broaden their optical absorption spectrum and create dye···TiO2 binding and co-sensitization functionality. This is achieved via a stepwise molecular engineering approach. Firstly, the structural and optical properties of four parent dyes are experimentally and computationally investigated: 3,3’-diethyloxacarbocyanine iodide, 3,3’-diethylthiacarbocyanine iodide, 3,3’-diethylthiadicarbocyanine iodide and 3,3’-diethylthiatricarbocyanine iodide. Secondly, the molecules are theoretically modified and their energetics are analyzed and compared to the parent dyes. A dye···TiO2 anchoring group (carboxylic or cyanoacrylic acid), absent from the parent dyes, is chemically substituted at different molecular positions to investigate changes in optical absorption. We find that cyanoacrylic acid substitution at the para-quinoidal position affects the absorption wavelength of all parent dyes, with an optimal bathochromic shift of ca. 40 nm. The theoretical lengthening of the polymethine chain is also shown to affect dye absorption. Two molecularly engineered dyes are proposed as promising co-sensitizers. Finally, corresponding dye···TiO2 adsorption energy calculations corroborate their applicability, demonstrating the potential of cyanine dyes in DSC research.

  9. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2011-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  10. Probabilistic Simulation for Combined Cycle Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute the probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of the matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  12. Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.

    1996-01-01

    A methodology to compute the probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute the probabilistic distribution of the response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) are computed, and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability of a (0/±45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes, has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, the fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness, whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, the fatigue life at 0.999 reliability is more sensitive to the matrix shear strength, longitudinal fiber modulus, matrix modulus, and ply thickness.

  13. Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups

    NASA Astrophysics Data System (ADS)

    Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella

    2017-04-01

    A mathematical model for industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to run with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process that includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkali metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients: the kinetic coefficient τ and the equilibrium partition coefficient L for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared with industrial measurement data and found to be able to predict the data qualitatively and, to some extent, quantitatively. The model is very well suited for the alkali and alkaline earth metals, which respond relatively fast to the refining process. It is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for predicting the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
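
    The structure of such a lumped refining model can be made concrete with a toy mass balance: during tapping the metal mass grows, and each element concentration relaxes toward the slag-equilibrium value L·c_slag with kinetic time constant τ. The sketch below is an assumed minimal form, not the published model; all masses, rates, and coefficients are illustrative:

```python
def ladle_refining(c0, c_tap, L, tau, m_slag_ratio, t_tap, t_total, dt=1.0):
    """Lumped (0-D) refining of one element (illustrative form):
    dc/dt = tapping dilution - (c - c_eq)/tau,  with c_eq = L * c_slag,
    and the element removed from the metal transferred to the slag."""
    m_metal, c, c_slag = 1.0, c0, 0.0        # start with 1 unit of tapped metal
    t, history = 0.0, []
    while t < t_total:
        if t < t_tap:                        # tapping: metal stream fills ladle
            dm = dt / t_tap * 99.0           # fill to ~100 units of metal
            c = (c * m_metal + c_tap * dm) / (m_metal + dm)
            m_metal += dm
        m_slag = m_slag_ratio * m_metal
        c_eq = L * c_slag                    # equilibrium with current slag
        dc = -(c - c_eq) / tau * dt          # first-order kinetic refining
        c_slag += -dc * m_metal / m_slag     # removed element goes to the slag
        c += dc
        history.append((t, c))
        t += dt
    return history

hist = ladle_refining(c0=0.02, c_tap=0.02, L=0.05, tau=300.0,
                      m_slag_ratio=0.03, t_tap=600.0, t_total=1800.0)
print("final concentration:", hist[-1][1])
```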

  14. Sensitivity of amplitude-phase characteristics of the surface air temperature annual cycle to variations in annual mean temperature

    NASA Astrophysics Data System (ADS)

    Eliseev, A. V.; Mokhov, I. I.; Guseva, M. S.

    2006-05-01

    The ERA40 and NCEP/NCAR data over 1958–1998 were used to estimate the sensitivity of amplitude-phase characteristics (APCs) of the annual cycle (AC) of the surface air temperature (SAT) T_s. The results were compared with outputs of the ECHAM4/OPYC3, HadCM3, and INM RAS general circulation models and the IAP RAS climate model of intermediate complexity, which were run with variations in greenhouse gases and sulfate aerosol specified over 1860–2100. The analysis was performed in terms of the linear regression coefficients b of SAT AC APCs on the local annual mean temperature and in terms of the sensitivity characteristic D = br^2, which takes into account not only the linear regression coefficient but also its statistical significance (via the correlation coefficient r). The reanalysis data were used to reveal the features of the tendencies of change in the SAT AC APCs in various regions, including areas near the snow-ice boundary, storm-track ocean regions, large desert areas, and the tropical Pacific. These results agree with earlier observations. The model computations are in fairly good agreement with the reanalysis data in regions of statistically significant variations in SAT AC APCs. The differences between individual models and the reanalysis data can be explained, in particular, in terms of the features of the sea-ice schemes used in the models. Over the land in the middle and high latitudes of the Northern Hemisphere, the absolute values of D for the fall phase time and the interval of exceeding exhibit a positive intermodel correlation with the absolute value of D for the annual-harmonic amplitude. Over the ocean, the models reproducing larger (in modulus) sensitivity parameters of the SAT annual-harmonic amplitude are generally characterized by larger (in modulus) negative sensitivity values of the semiannual-harmonic amplitude T_s,2, especially at latitudes characteristic of the sea-ice boundary. In contrast to the averaged fields of AC APCs and their interannual standard deviations, the sensitivity parameters of the SAT AC APCs on a regional scale vary noticeably for various types of anthropogenic forcing.
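
    The two sensitivity measures used in this abstract are easy to reproduce: b is the slope of the least-squares regression of an APC on the local annual mean temperature, and D = br^2 damps the slope by the squared correlation coefficient. A short sketch on synthetic data (all numbers illustrative):

```python
import numpy as np

def apc_sensitivity(t_mean, apc):
    """Sensitivity of an annual-cycle amplitude-phase characteristic (APC)
    to the local annual mean temperature: regression slope b, correlation r,
    and the combined sensitivity characteristic D = b * r**2."""
    b, _ = np.polyfit(t_mean, apc, 1)        # linear regression slope
    r = np.corrcoef(t_mean, apc)[0, 1]       # correlation coefficient
    return b, r, b * r**2

# Illustrative synthetic series: amplitude shrinking with warming plus noise.
rng = np.random.default_rng(1)
t_mean = rng.normal(10.0, 1.0, 41)           # 41 years of annual means, deg C
amplitude = 15.0 - 0.8 * t_mean + rng.normal(0.0, 0.5, 41)
b, r, D = apc_sensitivity(t_mean, amplitude)
print(f"b = {b:.2f} K/K, r = {r:.2f}, D = {D:.2f}")
```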

  15. Seismic Rheological Model and Reflection Coefficients of the Brittle-Ductile Transition

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Poletto, Flavio

    2013-12-01

    It is well established that the upper, cooler part of the crust is brittle, while deeper zones present ductile behaviour. In some cases, this brittle-ductile transition is a single seismic reflector with an associated reflection coefficient. We first develop a stress-strain relation including the effects of crust anisotropy, seismic attenuation, and ductility in which deformation takes place by shear plastic flow. Viscoelastic anisotropy is based on the eigenstrain model; the Zener mechanical model is used to represent seismic attenuation and velocity dispersion, and the Burgers model to represent steady-state creep flow. The stiffness components of the brittle and ductile media depend on stress and temperature through the shear viscosity, which is obtained from the Arrhenius equation and the octahedral stress criterion. The P- and S-wave velocities decrease as depth and temperature increase due to the geothermal gradient, an effect that is more pronounced for shear waves. We then obtain the reflection and transmission coefficients of a single brittle-ductile interface and of a ductile thin layer. The PP scattering coefficient has a Brewster angle (a sign change) in both cases, and there is substantial PS conversion at intermediate angles. The PP coefficient is sensitive to the layer thickness, unlike the SS coefficient. Thick layers have a well-defined Brewster angle and show higher reflection amplitudes. Finally, we compute synthetic seismograms in a homogeneous medium as a function of temperature.
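
    The temperature dependence enters through an Arrhenius shear viscosity, and attenuation through a Zener (standard linear solid) modulus; a Burgers-type element adds the steady creep compliance in series. The sketch below assembles these pieces for a few temperatures; the constants A, E, Q0, and M_relaxed are assumptions chosen only to show the qualitative velocity drop across the brittle-ductile transition, not the paper's calibrated values:

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)

def arrhenius_viscosity(T, A=4e-5, E=200e3):
    """Shear viscosity eta = A * exp(E / (R T)); A and E are illustrative."""
    return A * np.exp(E / (R * T))

def zener_modulus(omega, M_relaxed, Q0):
    """Complex shear modulus of a Zener solid with relaxation times set so
    the quality factor is about Q0 at angular frequency omega."""
    tau_eps = (np.sqrt(Q0**2 + 1) + 1) / (omega * Q0)
    tau_sig = (np.sqrt(Q0**2 + 1) - 1) / (omega * Q0)
    return M_relaxed * (1 + 1j * omega * tau_eps) / (1 + 1j * omega * tau_sig)

rho, omega = 2700.0, 2 * np.pi * 10.0       # density, angular frequency (10 Hz)
for T in (600.0, 800.0, 1000.0):            # temperatures along a geotherm, K
    M = zener_modulus(omega, M_relaxed=30e9, Q0=100.0)
    # Burgers-type steady creep adds 1/(i*omega*eta) compliance in series:
    J = 1.0 / M + 1.0 / (1j * omega * arrhenius_viscosity(T))
    v_s = 1.0 / np.real(np.sqrt(rho * J))   # phase velocity from complex slowness
    print(f"T = {T:6.0f} K   Vs = {v_s / 1000:.2f} km/s")
```

    As in the paper, the shear velocity is barely affected while the viscosity is high (brittle regime) and collapses once ωη becomes comparable to the modulus (ductile regime).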

  16. Thermal sensitivity of elastic coefficients of langasite and langatate.

    PubMed

    Bourquin, Roger; Dulmet, Bernard

    2009-10-01

    Thermal coefficients of elastic constants of langasite and langatate crystals have been determined from frequency-temperature curves of contoured resonators operating in thickness modes. The effect of the trapping of the vibration has been taken into account to improve the accuracy. In a first step, the thermal sensitivities of stiffness coefficients in Lagrangian description are obtained. Thermal sensitivities of the usual elastic constants are further deduced. Predictions of thermally compensated cuts are given.

  17. MIE Theory Sensitivity Studies. The Effects of Aerosol Complex Refractive Index and Size Distribution Variations on Extinction and Absorption Coefficients. Part 1. Tabulated Computational Results

    DTIC Science & Technology

    1977-11-01

  18. Can we calibrate simultaneously groundwater recharge and aquifer hydrodynamic parameters ?

    NASA Astrophysics Data System (ADS)

    Hassane Maina, Fadji; Ackerer, Philippe; Bildstein, Olivier

    2017-04-01

    By groundwater model calibration, we mean fitting the measured piezometric heads by estimating the hydrodynamic parameters (storage term and hydraulic conductivity) and the recharge. Simultaneous calibration of groundwater recharge and flow parameters is traditionally discouraged because of the correlation between recharge and the flow parameters: from a physical point of view, little recharge associated with low hydraulic conductivity can produce piezometric changes very similar to those of higher recharge with higher hydraulic conductivity. While this correlation holds under steady-state conditions, we assume that it is much weaker under transient conditions, because recharge varies in time and the parameters do not. Moreover, for many climatic conditions the recharge is negligible during summer due to reduced precipitation and increased evaporation and transpiration by the vegetation cover. We test our hypothesis through global sensitivity analysis (GSA) in conjunction with the polynomial chaos expansion (PCE) methodology. We perform GSA by calculating the Sobol indices, which provide a variance-based measure of the effects of the uncertain parameters (storage and hydraulic conductivity) and recharge on the piezometric heads computed by the flow model. The choice of PCE has two benefits: (i) it provides the global sensitivity indices in a straightforward manner, and (ii) the PCE can serve as a surrogate model for the calibration of parameters. The coefficients of the PCE are computed by probabilistic collocation. We perform the GSA on simplified real conditions taken from an existing groundwater model of a subdomain of the Upper-Rhine aquifer (geometry, boundary conditions, climatic data). GSA shows that the simultaneous calibration of recharge and flow parameters is possible if the calibration is performed over at least one year. It also provides valuable information on the sensitivity versus time, which depends on the aquifer inertia and climatic conditions: the groundwater level variations during recharge (rising limb) are sensitive to the storage coefficient, whereas the variations after recharge (falling limb) are sensitive to the hydraulic conductivity. Model calibration on synthetic data sets shows that the parameters and recharge are estimated quite accurately.
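
    The variance-based GSA described here rests on a convenient identity: once the model is projected onto an orthonormal polynomial chaos basis, the Sobol indices fall out of the squared expansion coefficients. The sketch below demonstrates this on a toy three-input head response, using least-squares regression in place of the authors' probabilistic collocation; the model, input ranges, and polynomial degree are all assumptions:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

def legendre_orthonormal(x, n):
    """P_n(x) scaled to unit variance under the uniform density on [-1, 1]."""
    return np.sqrt(2 * n + 1) * legval(x, [0.0] * n + [1.0])

def pce_sobol(model, dim, degree=2, n_samp=2000, seed=0):
    """Fit a total-degree PCE by least squares and return first-order Sobol
    indices from the squared coefficients (uniform inputs on [-1, 1])."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (n_samp, dim))
    y = model(X)
    alphas = [a for a in product(range(degree + 1), repeat=dim)
              if sum(a) <= degree]
    Phi = np.column_stack([np.prod([legendre_orthonormal(X[:, i], a[i])
                                    for i in range(dim)], axis=0)
                           for a in alphas])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    var = sum(ci**2 for a, ci in zip(alphas, c) if any(a))
    return [sum(ci**2 for a, ci in zip(alphas, c)
                if a[i] > 0 and all(a[j] == 0 for j in range(dim) if j != i))
            / var for i in range(dim)]

# Toy head response h = f(storage, conductivity, recharge); illustrative only.
def head(X):
    s, k, r = X[:, 0], X[:, 1], X[:, 2]
    return 2.0 * r - 1.5 * k + 0.3 * s * r + 0.1 * k**2

print("first-order Sobol indices [S, K, R]:", pce_sobol(head, dim=3))
```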

  19. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence the related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for the equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. In contrast, local velocity-related performance measures are more sensitive to the hydraulic conductivities.
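
    The core of adjoint sensitivity for a steady, linear flow problem fits in a few lines: with A(k)h = b and a performance measure J = c'h, one adjoint solve A'λ = c yields every derivative via dJ/dk_j = -λ'(∂A/∂k_j)h. A minimal 1-D finite-difference sketch follows; the grid, conductivities, and head-based measure are illustrative, not the Leadville Formation model:

```python
import numpy as np

def assemble(K, dx, q):
    """Assemble the FD system for -d/dx(K dh/dx) = q on a 1-D confined
    aquifer with zero-head Dirichlet ends; K holds interface conductivities."""
    n = len(K) - 1                        # interior nodes
    A = np.zeros((n, n))
    b = np.full(n, q * dx**2)
    for i in range(n):
        A[i, i] = K[i] + K[i + 1]
        if i > 0: A[i, i - 1] = -K[i]
        if i < n - 1: A[i, i + 1] = -K[i + 1]
    return A, b

# Forward problem: heads from interface conductivities (illustrative numbers).
K = np.array([2.0, 1.5, 1.0, 1.8, 2.2]); dx, q = 10.0, 1e-3
A, b = assemble(K, dx, q)
h = np.linalg.solve(A, b)

# Performance measure J = head at the middle node, so dJ/dh = c.
c = np.zeros(len(h)); c[len(h) // 2] = 1.0

# One adjoint solve replaces one forward solve per parameter.
lam = np.linalg.solve(A.T, c)
grads = []
for j in range(len(K)):                   # dA/dKj by re-assembly (A linear in K)
    Kj = np.zeros_like(K); Kj[j] = 1.0
    dA, _ = assemble(Kj, dx, 0.0)
    grads.append(-lam @ dA @ h)            # dJ/dKj = -lam' (dA/dKj) h
print("adjoint sensitivities dJ/dK:", np.round(grads, 4))
```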

  20. Effects of whistler mode hiss waves on the radiation belts structure during quiet times

    NASA Astrophysics Data System (ADS)

    Ripoll, J. F.; Santolik, O.; Reeves, G. D.; Kurth, W. S.; Denton, M.; Loridan, V.; Thaller, S. A.; Cunningham, G.; Kletzing, C.; Turner, D. L.; Henderson, M. G.; Ukhorskiy, S.; Drozdov, A.; Cervantes Villa, J. S.; Shprits, Y.

    2017-12-01

    We present dynamic Fokker-Planck simulations of the electron radiation belts and of slot formation during the quiet days that can follow a storm. Simulations are made for all energies and L-shells between 2 and 6 with the aim of reproducing the observations of two particular events. Pitch angle diffusion is essential to the energy structure of the belts and slot region. Pitch angle diffusion is computed from data-driven, spatially and temporally resolved whistler mode hiss wave and ambient plasma observations from the Van Allen Probes satellites. The simulations are performed either with a 3D formulation that uses pitch angle diffusion coefficients or with a simpler 1D Fokker-Planck equation based on losses computed from a lifetime. Validation is carried out globally against Magnetic Electron and Ion Spectrometer observations of the belts at all energies. Results are complemented with a sensitivity study involving different radial diffusion coefficients, electron lifetimes, and pitch angle diffusion coefficients. We discuss which models allow us to recover the observed "S-shaped" energy-dependent inner boundary of the outer zone that results from the competition between diffusive radial transport and losses. Periods when the plasmasphere extends beyond L ≈ 5 favor long-lasting hiss losses from the outer belt. Through these simulations, we explain the full structure in energy and L-shell of the belts and the slot formation by hiss scattering during quiet storm recovery.
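
    The simpler 1-D formulation mentioned above is a radial diffusion equation with a lifetime loss term, ∂f/∂t = L² ∂/∂L(D_LL L⁻² ∂f/∂L) − f/τ. The explicit finite-difference sketch below is illustrative only: the D_LL profile, hiss-driven lifetimes, initial condition, and boundary treatment are assumptions, not the data-driven coefficients used in the paper:

```python
import numpy as np

def radial_diffusion(f0, L, D_LL, tau, t_end, dt):
    """Explicit FD solve of df/dt = L^2 d/dL (D_LL / L^2 df/dL) - f/tau
    with fixed boundary values; dt must satisfy the explicit stability limit."""
    f = f0.copy(); dL = L[1] - L[0]
    Lh = 0.5 * (L[1:] + L[:-1])                     # interface L values
    Dh = 0.5 * (D_LL[1:] + D_LL[:-1]) / Lh**2       # D/L^2 at interfaces
    for _ in range(int(t_end / dt)):
        flux = Dh * np.diff(f) / dL                 # (D/L^2) df/dL at interfaces
        f[1:-1] += dt * (L[1:-1]**2 * np.diff(flux) / dL - f[1:-1] / tau[1:-1])
    return f

L = np.linspace(2.0, 6.0, 81)
D_LL = 1e-3 * (L / 4.0)**10                         # steep radial profile (illustrative)
tau = np.where((L > 2.5) & (L < 4.0), 5.0, 50.0)    # fast hiss losses in the slot, days
f0 = np.exp(-(L - 5.0)**2 / 0.5)                    # outer-belt source population
f = radial_diffusion(f0, L, D_LL, tau, t_end=30.0, dt=1e-3)   # 30 quiet days
print("slot minimum f:", f.min(), "at L =", L[f.argmin()])
```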

  1. Inviscid Flow Computations of the Shuttle Orbiter for Mach 10 and 15 and Angle of Attack 40 to 60 Degrees

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Sutton, Kenneth (Technical Monitor)

    2001-01-01

    This report documents the results of a computational study done to compute the inviscid longitudinal aerodynamic characteristics of the Space Shuttle Orbiter for Mach numbers 10 and 15 at angles of attack of 40, 50, 55, and 60 degrees. These computations were done to provide limited aerodynamic data in support of the Orbiter contingency abort task. The Orbiter had all control surfaces in the undeflected position. The unstructured grid software FELISA was used for these computations with the equilibrium air option. Normal and axial force coefficients and pitching moment coefficients were computed, as were the hinge moment coefficients of the body flap and the inboard and outboard elevons. These results were compared with Orbiter Air Data Book (OADB) data and with results computed using GASP. The comparison with the GASP results showed very good agreement in the pitching moment and normal force coefficients at all points, while the computed axial force coefficients were smaller than those from GASP. There were noticeable differences between the present results and those in the OADB at angles of attack greater than 50 degrees.

  2. Aerodynamics for the Mars Phoenix Entry Capsule

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Desai, Prasun N.; Schoenenberger, Mark

    2008-01-01

    Pre-flight aerodynamics data for the Mars Phoenix entry capsule are presented. The aerodynamic coefficients were generated as a function of total angle-of-attack and either Knudsen number, velocity, or Mach number, depending on the flight regime. The database was constructed using continuum flowfield computations and data from the Mars Exploration Rover and Viking programs. Hypersonic and supersonic static coefficients were derived from Navier-Stokes solutions on a pre-flight design trajectory. High-altitude data (free-molecular and transitional regimes) and dynamic pitch damping characteristics were taken from Mars Exploration Rover analysis and testing. Transonic static coefficients from Viking wind tunnel tests were used for capsule aerodynamics under the parachute. Static instabilities were predicted at two points along the reference trajectory and were verified by reconstructed flight data. During the hypersonic instability, the capsule was predicted to trim at angles as high as 2.5 deg with an on-axis center-of-gravity. Trim angles were predicted for off-nominal pitching moment (4.2 deg peak) and a 5 mm off-axis center-of-gravity (4.8 deg peak). Finally, hypersonic static coefficient sensitivities to atmospheric density were predicted to be within uncertainty bounds.

  3. A Chain of Modeling Tools For Gas and Aqueous Phase Chemistry

    NASA Astrophysics Data System (ADS)

    Audiffren, N.; Djouad, R.; Sportisse, B.

    Atmospheric chemistry is characterized by the use of large sets of chemical species and reactions. Handling the set of data required to define the model is a quite difficult task. We present in this short article a preprocessor for diphasic models (gas phase and aqueous phase in cloud droplets) named SPACK. The main interest of SPACK is the automatic generation of lumped species related to fast equilibria. We also developed a linear tangent model using the automatic differentiation tool named ODYSSEE in order to perform a sensitivity analysis of an atmospheric multiphase mechanism based on the RADM2 kinetic scheme. Local sensitivity coefficients are computed for two different scenarios. We focus in this study on the sensitivity of the ozone/NOx/HOx system with respect to some aqueous phase reactions, and we investigate the influence of the reduction in the photolysis rates in the area below the cloud region.

  4. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  5. Parameter regionalization of a monthly water balance model for the conterminous United States

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-01-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
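
    The skill score quoted above is the Nash-Sutcliffe efficiency, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², where 1 is a perfect fit and 0 means no better than the observed mean. A small self-contained check with made-up monthly runoff values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

# Illustrative monthly runoff (mm) at one streamgage:
obs = [12.0, 30.0, 55.0, 80.0, 60.0, 25.0, 10.0, 5.0, 4.0, 8.0, 15.0, 20.0]
sim = [10.0, 28.0, 60.0, 75.0, 58.0, 30.0, 12.0, 6.0, 5.0, 7.0, 13.0, 22.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```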

  6. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  7. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-07-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.

  8. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
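
    The report's flutter solver is a two-dimensional Regula Falsi; the classical one-dimensional false-position iteration underlying it is shown below on a toy damping-versus-dynamic-pressure function (the bracketing interval and the function are illustrative, not a flutter determinant):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """False-position root finder: keep a bracketing interval and replace
    one endpoint with the secant-line root each iteration."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant intersection with zero
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Toy "flutter equation": modal damping crossing zero as dynamic pressure grows.
g = lambda q: 0.15 - 0.02 * q - 1.0 / (q + 8.0)
print("flutter dynamic pressure:", round(regula_falsi(g, 0.1, 20.0), 4))
```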

  9. Update of aircraft profile data for the Integrated Noise Model computer program, vol. 3 : appendix B aircraft performance coefficients

    DOT National Transportation Integrated Search

    1992-03-01

    This report provides aircraft takeoff and landing profiles, : aircraft aerodynamic performance coefficients and engine : performance coefficients for the aircraft data base : (Database 9) in the Integrated Noise Model (INM) computer : program. Flight...

  10. The explicit computation of integration algorithms and first integrals for ordinary differential equations with polynomials coefficients using trees

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Grossman, Robert

    1992-01-01

    This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x_1, ..., x_N), where k = R or C, F denotes a differential operator with coefficients from R, and g ∈ R, we describe data structures and algorithms for efficiently computing F(g). The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees and whose nodes are labeled with the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into the cancellation of O(N^r) expressions involving the coefficient functions and their derivatives.

  11. Ab initio calculations of the lattice parameter and elastic stiffness coefficients of bcc Fe with solutes

    DOE PAGES

    Fellinger, Michael R.; Hector, Louis G.; Trinkle, Dallas R.

    2016-10-28

    Here, we present an efficient methodology for computing solute-induced changes in lattice parameters and elastic stiffness coefficients Cij of single crystals using density functional theory. We also introduce a solute strain misfit tensor that quantifies how solutes change lattice parameters due to the stress they induce in the host crystal. Solutes modify the elastic stiffness coefficients through volumetric changes and by altering chemical bonds. We compute each of these contributions to the elastic stiffness coefficients separately, and verify that their sum agrees with changes in the elastic stiffness coefficients computed directly using fully optimized supercells containing solutes. Computing the two elastic stiffness contributions separately is more computationally efficient and provides more information on solute effects than the direct calculations. We compute the solute dependence of polycrystalline averaged shear and Young's moduli from the solute dependence of the single-crystal Cij. We then apply this methodology to substitutional Al, B, Cu, Mn, Si solutes and octahedral interstitial C and N solutes in bcc Fe. Comparison with experimental data indicates that our approach accurately predicts solute-induced changes in the lattice parameter and elastic coefficients. The computed data can be used to quantify solute-induced changes in mechanical properties such as strength and ductility, and can be incorporated into mesoscale models to improve their predictive capabilities.

  12. Computation of Transverse Injection Into Supersonic Crossflow With Various Injector Orifice Geometries

    NASA Technical Reports Server (NTRS)

    Foster, Lancert; Engblom, William A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of various injector geometries employed in transverse injection into a non-reacting Mach 1.2 flow. 3-D Reynolds-Averaged Navier-Stokes (RANS) results are obtained for the various injector geometries using the Wind code with Menter's Shear Stress Transport turbulence model in both single and multi-species modes. Computed results for the injector mixing, penetration, and induced wall forces are presented. In the case of rectangular injectors, those longer in the direction of the freestream flow are predicted to generate the most mixing and penetration of the injector flow into the primary stream. These injectors are also predicted to provide the largest discharge coefficients and induced wall forces. Minor performance differences are indicated among diamond, circle, and square orifices. Grid sensitivity study results are presented which indicate consistent qualitative trends in the injector performance comparisons with increasing grid fineness.

  13. Heat Transfer Computations of Internal Duct Flows With Combined Hydraulic and Thermal Developing Length

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Towne, C. E.; Hippensteele, S. A.; Poinsatte, P. E.

    1997-01-01

    This study investigated Navier-Stokes computations of the surface heat transfer coefficients of a transition duct flow. A transition duct, which changes from an axisymmetric to a non-axisymmetric cross section, is usually used to connect the turbine exit to the nozzle. As the gas turbine inlet temperature increases, the transition duct is subjected to the high temperature at the gas turbine exit. The transition duct flow undergoes combined development of hydraulic and thermal entry length, and the design of the duct requires accurate surface heat transfer coefficients, which the Navier-Stokes computational method can be used to predict. The Proteus three-dimensional Navier-Stokes numerical computational code was used in this study. The code was first applied to compute the turbulent developing flow properties within a circular duct and a square duct, and then to compute the turbulent flow properties of a transition duct flow. The computational results for the surface pressure, the skin friction factor, and the surface heat transfer coefficient were described and compared with values obtained from theoretical analyses or experiments. The comparison showed that the Navier-Stokes computation could approximately predict the surface heat transfer coefficients of a transition duct flow.

  14. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for the patella femoral joint under standard gait conditions were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for the patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % changes in the kinematic, frictional, force, and stiffness coefficients, and insensitive to the time step.

  15. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2018-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for the patella femoral joint under standard gait conditions were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for the patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % changes in the kinematic, frictional, force, and stiffness coefficients, and insensitive to the time step.

  16. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE PAGES

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.; ...

    2017-08-24

    External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, laboratory, and warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. Finally, results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.

  17. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.

    External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, laboratory, and warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. Finally, results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.

  18. Ridge: a computer program for calculating ridge regression estimates

    Treesearch

    Donald E. Hilt; Donald W. Seegrist

    1977-01-01

    Least-squares coefficients for multiple-regression models may be unstable when the independent variables are highly correlated. Ridge regression is a biased estimation procedure that produces stable estimates of the coefficients. Ridge regression is discussed, and a computer program for calculating the ridge coefficients is presented.
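
    Ridge regression replaces the ordinary least-squares estimate with β = (X'X + λI)⁻¹X'y, shrinking the coefficients and stabilizing them under collinearity. A compact sketch on deliberately correlated predictors (data and λ values are illustrative):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimates beta = (X'X + lam*I)^-1 X'y on standardized predictors."""
    Xs = (X - X.mean(0)) / X.std(0)      # standardize, as ridge requires
    ys = y - y.mean()
    p = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ ys)

# Two highly correlated predictors: OLS (lam = 0) is unstable, ridge is not.
rng = np.random.default_rng(3)
x1 = rng.normal(size=100); x2 = x1 + rng.normal(scale=0.01, size=100)
X = np.column_stack([x1, x2]); y = x1 + rng.normal(scale=0.1, size=100)
for lam in (0.0, 1.0, 10.0):
    print(f"lambda = {lam:5.1f}  coefficients:", np.round(ridge(X, y, lam), 3))
```

    Modest λ pulls the two nearly collinear predictors toward a stable shared value, at the cost of a small bias.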

  19. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  20. Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Singhal, S. N.; Chamis, C. C.

    1996-01-01

    This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes, from damage initiation to unstable propagation and global structure collapse, were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.

  1. Sensitivity study on durability variables of marine concrete structures

    NASA Astrophysics Data System (ADS)

    Zhou, Xin'gang; Li, Kefei

    2013-06-01

    In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was carried out in this paper. Using Fick's second law of diffusion and the deterministic sensitivity analysis (DSA) method, sensitivity factors for the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The analysis shows that the design variables differ in their impact on concrete durability: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor are higher than the others, so a relatively small error in either quantity induces a larger error in concrete durability design and life prediction. Using probabilistic sensitivity analysis (PSA), the influence of the mean value and variance of the concrete durability design variables on the durability failure probability was studied. The results provide quantitative measures of the importance of the variables in concrete durability design and life prediction. It is concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have the greatest influence on the reliability of marine concrete structural durability, and that reducing the measurement and statistical errors of the durability design variables is therefore essential in durability design and life prediction of marine concrete structures.
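
    The underlying durability model is the familiar error-function solution of Fick's second law with a time-dependent apparent diffusion coefficient, C(x, t) = C_s(1 − erf(x / 2√(D_a t))) with D_a = D_28(t_0/t)^α, and the DSA sensitivity factors are normalized derivatives of the response with respect to each variable. A sketch with illustrative parameter values (not the paper's) follows:

```python
from math import erf, sqrt

def chloride(x, t, Cs, D28, alpha, t0=28 / 365):
    """Fick's-2nd-law profile C = Cs*(1 - erf(x / (2 sqrt(Da t)))) with a
    time-dependent apparent diffusion coefficient Da = D28*(t0/t)^alpha."""
    Da = D28 * (t0 / t)**alpha
    return Cs * (1 - erf(x / (2 * sqrt(Da * t))))

def sensitivity_factor(param, base, x=0.05, t=50.0, rel=0.01):
    """DSA-style normalized sensitivity (dC/C)/(dp/p) by central difference."""
    up, dn = dict(base), dict(base)
    up[param] *= 1 + rel; dn[param] *= 1 - rel
    Cu, Cd = chloride(x, t, **up), chloride(x, t, **dn)
    return (Cu - Cd) / chloride(x, t, **base) / (2 * rel)

# Illustrative inputs: cover depth 0.05 m, 50-year exposure, D in m^2/yr.
base = {"Cs": 0.6, "D28": 1e-11 * 3.15e7, "alpha": 0.5}
for p in base:
    print(f"sensitivity to {p}: {sensitivity_factor(p, base):+.2f}")
```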

  2. Computational Predictions of the Performance Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

    A computational analysis of two reproductions of the 1911 Wright brothers' 'Bent End' wooden propellers has been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the propeller performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift and drag coefficients at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.
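
    The quantities the Propeller Design Analysis Code returns are the standard nondimensional coefficients C_T = T/(ρn²D⁴), C_P = P/(ρn³D⁵), and propulsive efficiency η = J·C_T/C_P with advance ratio J = V/(nD). A small sketch with rough 1911-era magnitudes (illustrative numbers, not measured Wright data):

```python
def propeller_coefficients(thrust, power, rho, n, D, V):
    """Standard nondimensional propeller performance measures:
    CT = T/(rho n^2 D^4), CP = P/(rho n^3 D^5), J = V/(n D), eta = J*CT/CP."""
    CT = thrust / (rho * n**2 * D**4)
    CP = power / (rho * n**3 * D**5)
    J = V / (n * D)
    return CT, CP, J, J * CT / CP

# Illustrative inputs: one of two props, n in rev/s, D in m, V in m/s.
CT, CP, J, eta = propeller_coefficients(thrust=550.0, power=11000.0,
                                        rho=1.225, n=7.5, D=2.6, V=13.0)
print(f"CT = {CT:.3f}  CP = {CP:.3f}  J = {J:.2f}  eta = {eta:.2f}")
```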

  3. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  4. Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.

  5. High Aggregate Stability Coefficients Can Be Obtained for Unstable Traits.

    ERIC Educational Resources Information Center

    Day, H. D.; Marshall, Dave

    In the light of research by Epstein (1979) (which reported that error of measurement in the analysis of behavior stability may be reduced by examining the behavior of aggregate stability coefficients computed for measurements with known stability characteristics), this study examines stability coefficients for computer-generated data sets…

  6. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    PubMed

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena

    2015-06-01

    To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, the receiver operating characteristic (ROC) curve, and the cutoff point. Test-retest repeatability was assessed using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and for the CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to monitor the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A theoretical reassessment of microbial maintenance and implications for microbial ecology modeling.

    PubMed

    Wang, Gangsheng; Post, Wilfred M

    2012-09-01

    We attempted to reconcile three microbial maintenance models (Herbert, Pirt, and Compromise) through a theoretical reassessment. We provided a rigorous proof that the true growth yield coefficient (Y_G) is the ratio of the specific maintenance rate (a in Herbert) to the maintenance coefficient (m in Pirt). Other findings from this study include: (1) the Compromise model is identical to the Herbert model for computing microbial growth and substrate consumption, but it expresses the dependence of maintenance on both microbial biomass and substrate; (2) the maximum specific growth rate in the Herbert model (μ_max,H) is higher than those in the other two models (μ_max,P and μ_max,C), and the difference is the physiological maintenance factor (m_q = a); and (3) the overall maintenance coefficient (m_T) is more sensitive to m_q than to the specific growth rate (μ_G) and Y_G. Our critical reassessment of microbial maintenance provides a new approach for quantifying some important components in soil microbial ecology models. © This article is a US government work and is in the public domain in the USA.

  8. Efficacy of guided spiral drawing in the classification of Parkinson's Disease.

    PubMed

    Zham, Poonam; Arjunan, Sridhar; Raghav, Sanjay; Kumar, Dinesh Kant

    2017-10-11

    Change of handwriting can be an early marker for the severity of Parkinson's disease (PD) but suffers from poor sensitivity and specificity due to inter-subject variations. This study investigated the group difference in dynamic features during the sketching of a spiral between PD and control subjects, with the aim of developing an accurate method for diagnosing PD patients. Dynamic handwriting features were computed for 206 specimens collected from 62 subjects (31 Parkinson's and 31 controls). These were analyzed based on the severity of the disease to determine the group difference. The Spearman rank correlation coefficient was computed to evaluate the strength of association of the different features. The maximum area under the ROC curve (AUC) using the dynamic features during the different writing and spiral-sketching tasks was in the range of 67 to 79%. However, when the angular features and the count of direction inversions during sketching of the spiral were used, the AUC improved to 93.3%. The Spearman correlation coefficient was highest for the two angular features. The angular features and the count of direction inversions, which can be obtained in real time while sketching the Archimedean guided spiral on a digital tablet, can be used for differentiating between the Parkinson's and healthy cohorts.

  9. Impact of improved momentum transfer coefficients on the dynamics and thermodynamics of the north Indian Ocean

    NASA Astrophysics Data System (ADS)

    Parekh, Anant; Gnanaseelan, C.; Jayakumar, A.

    2011-01-01

    Long time series of in situ observations from the north Indian Ocean are used to compute momentum transfer coefficients over the basin. The transfer coefficients behave nonlinearly at low winds (<4 m/s), where most of the known empirical relations assume linearity. The impact of the momentum transfer coefficients on upper ocean parameters is studied using an ocean general circulation model. The model experiments revealed that the Arabian Sea and the equatorial Indian Ocean are more sensitive to the momentum transfer coefficients than the Bay of Bengal and the south Indian Ocean. The impact of the momentum transfer coefficients on sea surface temperature is up to 0.3°C-0.4°C, on mixed layer depth up to 10 m, and on thermocline depth up to 15 m. Furthermore, the impact on the zonal current is largest over the equatorial Indian Ocean (about 0.12 m/s in May and 0.15 m/s in October, the periods of the Wyrtki jets; the difference in current has a potential impact on the seasonal mass transport). The Sverdrup transport shows the largest impact in the Bay of Bengal (3 to 4 Sv in August), whereas the Ekman transport shows the largest impact in the Arabian Sea (4 Sv during May to July). These results highlight the potential impact of accurate momentum forcing on the results from current ocean models.
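
    Bulk aerodynamics of this kind reduces to the momentum flux τ = ρ_air·C_D·U₁₀², with the drag coefficient C_D depending nonlinearly on wind speed at low winds. The sketch below pairs a conventional Garratt-type linear form at moderate winds with an assumed low-wind enhancement; the coefficients are illustrative, not the ones fitted from the in situ observations:

```python
import numpy as np

def drag_coefficient(U10):
    """Illustrative 10-m neutral drag coefficient: nonlinear increase at low
    winds (< 4 m/s) and a Garratt-type linear rise at moderate winds."""
    U10 = np.asarray(U10, float)
    low = 1e-3 * (0.6 + 2.5 / np.maximum(U10, 0.5))   # grows as wind drops
    mod = 1e-3 * (0.75 + 0.067 * U10)                 # linear moderate-wind form
    return np.where(U10 < 4.0, low, mod)

def wind_stress(U10, rho_air=1.22):
    """Bulk momentum flux tau = rho_air * Cd * U10^2 (N/m^2)."""
    return rho_air * drag_coefficient(U10) * U10**2

for U in (1.0, 3.0, 6.0, 12.0):
    cd = float(drag_coefficient(U))
    print(f"U10 = {U:4.1f} m/s   Cd = {cd * 1e3:.2f}e-3   "
          f"tau = {float(wind_stress(U)):.4f} N/m^2")
```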

  10. Studies on the ionospheric-thermospheric coupling mechanisms using SLR

    NASA Astrophysics Data System (ADS)

    Panzetta, Francesca; Erdogan, Eren; Bloßfeld, Mathis; Schmidt, Michael

    2016-04-01

    Several Low Earth Orbiters (LEOs) have been used by different research groups to model the thermospheric neutral density distribution at various altitudes by performing Precise Orbit Determination (POD) in combination with satellite accelerometry. This approach is, in principle, based on satellite drag analysis, motivated by the fact that the drag force is one of the major perturbing forces acting on LEOs and is physically related to the thermospheric density. The present contribution investigates the possibility of computing the thermospheric density from Satellite Laser Ranging (SLR) observations. SLR is commonly used to compute very accurate satellite orbits; as a prerequisite, very precise modelling of gravitational and non-gravitational accelerations is necessary. For this investigation, a sensitivity study of SLR observations to thermospheric density variations is performed using the DGFI Orbit and Geodetic parameter estimation Software (DOGS). SLR data from satellites at altitudes lower than 500 km are processed adopting different thermospheric models. The drag coefficients, which describe the interaction of the satellite surfaces with the atmosphere, are analytically computed in order to obtain scaling factors purely related to the thermospheric density. The results are reported and discussed in terms of estimates of the scaling coefficients of the thermospheric density. In addition, further extensions and improvements in thermospheric density modelling, obtained by combining a physics-based approach with ionospheric observations, are investigated. For this purpose, the coupling mechanisms between the thermosphere and the ionosphere are studied.

  11. Comparison of Regression Methods to Compute Atmospheric Pressure and Earth Tidal Coefficients in Water Level Associated with Wenchuan Earthquake of 12 May 2008

    NASA Astrophysics Data System (ADS)

    He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang

    2016-07-01

    Earth tides, atmospheric pressure, precipitation, and earthquakes all influence water well levels; earthquakes in particular have a strong impact, and anomalous co-seismic changes in groundwater levels have been observed. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA), and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on water level. Furthermore, we have used the Akaike information criterion (AIC) to compare the performance of the models. Based on the lowest AIC and sum-of-squares-for-error values, the best estimate of the effects of atmospheric pressure and earth tide on water level is obtained with the MLR model. However, the MLR model does not resolve the multicollinearity between the inputs; as a result, its atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among the models that address the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients agree closely with the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state, based on the observed data for the period 1 April to 8 June 2008 from well Chuan 03#. The transient enhancement of the porosity of the rock mass around well Chuan 03# associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the co-seismic sharp rise of the water well level could be induced by static stress change rather than by the development of new fractures.
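
    The model ranking hinges on the Akaike information criterion, which for least-squares fits reduces (up to a constant) to AIC = n·ln(SSE/n) + 2k. The sketch below compares an SLR and an MLR fit on synthetic well-level data driven by barometric pressure and a tide-like signal; all coefficients and the M2-like period are illustrative:

```python
import numpy as np

def fit_aic(X, y):
    """Least-squares fit plus the Akaike information criterion:
    AIC = n*ln(SSE/n) + 2k, with k = number of estimated coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    sse = np.sum((y - X1 @ beta)**2)
    n, k = len(y), X1.shape[1]
    return beta, n * np.log(sse / n) + 2 * k

# Synthetic water level driven by barometric pressure and an earth-tide proxy.
rng = np.random.default_rng(7)
t = np.arange(1000.0)                                 # hourly samples
pressure = rng.normal(0.0, 1.0, 1000)
tide = np.sin(2 * np.pi * t / 12.42)                  # M2-like constituent
level = -0.35 * pressure + 0.08 * tide + rng.normal(0.0, 0.05, 1000)

_, aic_slr = fit_aic(pressure[:, None], level)        # pressure only (SLR)
_, aic_mlr = fit_aic(np.column_stack([pressure, tide]), level)   # MLR
print(f"AIC  SLR: {aic_slr:.1f}   MLR: {aic_mlr:.1f}  (lower is better)")
```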

  12. Interactive, computer-assisted tracking of speckle trajectories in fluorescence microscopy: application to actin polymerization and membrane fusion.

    PubMed

    Smith, Matthew B; Karatekin, Erdem; Gohlke, Andrea; Mizuno, Hiroaki; Watanabe, Naoki; Vavylonis, Dimitrios

    2011-10-05

    Analysis of particle trajectories in images obtained by fluorescence microscopy reveals biophysical properties such as the diffusion coefficient or rates of association and dissociation. Particle tracking and lifetime measurement is often limited by noise, large mobilities, image inhomogeneities, and path crossings. We present Speckle TrackerJ, a tool that addresses some of these challenges using computer-assisted techniques for finding positions and tracking particles in different situations. A dynamic user interface assists in the creation, editing, and refining of particle tracks. The following are results from application of this program: (1) Tracking single-molecule diffusion in simulated images. The shape of the diffusing marker on the image changes from speckle to cloud, depending on the relationship of the diffusion coefficient to the camera exposure time. We use these images to illustrate the range of diffusion coefficients that can be measured. (2) We used the program to measure the diffusion coefficient of capping proteins in the lamellipodium. We found values of ∼0.5 μm²/s, suggesting capping protein association with protein complexes or the membrane. (3) We demonstrate efficient measurement of the appearance and disappearance of EGFP-actin speckles within the lamellipodium of motile cells, which indicate actin monomer incorporation into the actin filament network. (4) We marked appearance and disappearance events of fluorescently labeled vesicles docking to supported lipid bilayers and tracked single lipids from the fused vesicle on the bilayer. This is the first time, to our knowledge, that vesicle fusion has been detected with single-molecule sensitivity, and the program allowed us to perform a quantitative analysis. (5) By discriminating between undocking and fusion events, dwell times for vesicle fusion after vesicle docking to membranes can be measured. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
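
    For readers who want to reproduce the kind of diffusion-coefficient estimate mentioned in point (2), the sketch below recovers D from the mean squared displacement of a synthetic 2-D random walk. All numbers are illustrative; Speckle TrackerJ itself is an ImageJ tool and is not reproduced here.

```python
# Synthetic illustration: recover a diffusion coefficient from the mean
# squared displacement (MSD) of a 2-D random walk, MSD(t) = 4 D t.
# D_true mimics the ~0.5 um^2/s value reported for capping proteins.
import numpy as np

rng = np.random.default_rng(42)
dt, d_true, n = 0.03, 0.5, 5000                    # s, um^2/s, frames
steps = rng.normal(scale=np.sqrt(2 * d_true * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)                    # tracked (x, y) positions

lags = np.arange(1, 50)
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                for l in lags])

slope = np.polyfit(lags * dt, msd, 1)[0]           # MSD slope = 4 D in 2-D
print(f"estimated D = {slope / 4:.3f} um^2/s")
```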

  13. Sensitivity of Support Vector Machine Predictions of Passive Microwave Brightness Temperature Over Snow-covered Terrain in High Mountain Asia

    NASA Astrophysics Data System (ADS)

    Ahmad, J. A.; Forman, B. A.

    2017-12-01

    High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to the run-off downstream. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information System (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVMs), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures as measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature, such as snow temperature, air temperature, and vegetation temperature, have positive NSCs, whereas input states that increase volume scattering, such as snow water equivalent or snow density, yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent, high-frequency variability. Adherence of this machine learning algorithm to the first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
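
    A normalized sensitivity coefficient of a trained SVM can be approximated with a central finite difference, NSC_i = (x_i / f(x)) * df/dx_i. The toy sketch below (not the study's code) trains an SVR on synthetic stand-ins for LIS states and evaluates the NSCs at a nominal point.

```python
# Toy sketch (not the study's code): normalized sensitivity coefficients
# NSC_i = (x_i / f(x)) * df/dx_i of a trained SVM, via central finite
# differences. The three inputs stand in for LIS-derived states.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(500, 3))           # pseudo "state variables"
y = 250.0 + 20.0 * X[:, 0] - 15.0 * X[:, 1] + rng.normal(0.0, 0.1, 500)

model = SVR(kernel="rbf", C=100.0).fit(X, y)       # pseudo "brightness temperature"

def nsc(predict, x0, i, h=1e-3):
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    dfdx = (predict(xp[None, :])[0] - predict(xm[None, :])[0]) / (2 * h)
    return x0[i] / predict(x0[None, :])[0] * dfdx

x0 = X.mean(axis=0)
print([round(nsc(model.predict, x0, i), 4) for i in range(3)])
```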

  14. Molecular dynamics studies of transport properties and equation of state of supercritical fluids

    NASA Astrophysics Data System (ADS)

    Nwobi, Obika C.

    Many chemical propulsion systems operate with one or more of the reactants above the critical point in order to enhance their performance. Most of the computational fluid dynamics (CFD) methods used to predict these flows require accurate information on the transport properties and equation of state at these supercritical conditions. This work involves the determination of transport coefficients and the equation of state of supercritical fluids by equilibrium molecular dynamics (MD) simulations on parallel computers, using the Green-Kubo formulae and the virial equation of state, respectively. MD involves the solution of the equations of motion of a system of molecules that interact with each other through an intermolecular potential. Provided that an accurate potential can be found for the system of interest, MD can be used regardless of the phase and thermodynamic conditions of the substances involved. The MD program uses the effective Lennard-Jones potential, with system sizes of 1000-1200 molecules and simulations of 2,000,000 time steps for computing transport coefficients and 200,000 time steps for pressures. The computer code also uses linked cell lists for efficient sorting of molecules, periodic boundary conditions, and a modified velocity Verlet algorithm for particle displacement. Particle decomposition is used for distributing the molecules to different processors of a parallel computer. Simulations have been carried out on pure argon, nitrogen, oxygen and ethylene at various supercritical conditions, with self-diffusion coefficients, shear viscosity coefficients, thermal conductivity coefficients and pressures computed for most of the conditions. Results compare well with experimental values and with National Institute of Standards and Technology (NIST) data. The results show that the number of molecules and the potential cut-off radius have no significant effect on the computed coefficients, while long-time integration is necessary for accurate determination of the coefficients.
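
    The Green-Kubo route to the self-diffusion coefficient described above can be sketched in a few lines, assuming velocities have already been extracted from an MD run: D = (1/3) times the time integral of the velocity autocorrelation function. The array shapes and time step below are illustrative assumptions, not details of the code used in the work.

```python
# Minimal Green-Kubo sketch, assuming precomputed MD velocities:
# D = (1/3) * integral of the velocity autocorrelation function (VACF).
import numpy as np

def self_diffusion(v, dt):
    """v: velocities, shape (n_steps, n_molecules, 3); dt: time step."""
    n_steps = v.shape[0]
    nlag = n_steps // 2
    vacf = np.array([
        np.mean(np.sum(v[: n_steps - lag] * v[lag:], axis=-1))
        for lag in range(nlag)
    ])
    return np.sum(vacf) * dt / 3.0   # rectangle-rule integration of the VACF
```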

  15. Analysis of a hydraulic scaled asymmetric labyrinth weir with Ansys-Fluent

    NASA Astrophysics Data System (ADS)

    Otálora Carmona, Andrés Humberto; Santos Granados, Germán Ricardo

    2017-04-01

    This document presents the three-dimensional computational modeling of a labyrinth weir, using version 17.0 of the Computational Fluid Dynamics (CFD) software ANSYS FLUENT. The computational characteristics of the model, such as the geometry, the mesh sensitivity, the numerical scheme, and the turbulence modeling parameters, are described, and the volume fraction of the water-air mixture, the velocity profile, the jet trajectory, the discharge coefficient, and the velocity field are analyzed. In order to evaluate the hydraulic behavior of the labyrinth weir of the Naveta hydroelectric plant, in Apulo, Cundinamarca, a 1:21 scale model of the original structure was developed and tested in the hydraulics laboratory of the Escuela Colombiana de Ingeniería Julio Garavito. The scale model was initially built to determine the variability of the discharge coefficient with respect to the flow rate and its influence on the water level, because the original weir (a labyrinth weir with a non-symmetrical rectangular section) could not pass the design flow of 31 m3/s: above 15 m3/s, overflows occurred in the adduction channel. This loss of efficiency was due to the thickening of the lateral walls required for structural reasons. During the physical modeling done by Rodríguez, H. and Matamoros, H. (2015) in the test channel, it was found that the increase in the width of the side walls reduced the discharge coefficient by an average of 34%, raising the water level by 0.26 m above the structure. This document aims to develop a methodology for linking the physical model of a labyrinth weir with numerical modeling, using concepts of computational fluid dynamics and finite volume theory. To this end, a detailed analysis was carried out of the variations, in the different directions, of the main hydraulic variables involved, such as the velocity components and the pressure distribution; the numerical work was performed with the ANSYS FLUENT software, version 17.0. Initially, a digital model of a conventional triangular weir with a vertex angle of 102° was developed in order to find the most appropriate numerical scheme and conditions, and the numerical results were compared with conventional theories by evaluating the jet trajectory and the discharge coefficient. Subsequently, one of the five cycles that compose the labyrinth weir was simulated, evaluating the behavior of the discharge coefficient, the water level, the streamlines, and the velocity field, in order to understand the hydraulic variables involved in these geometries. Based on these results, the numerical modeling of the full labyrinth weir was performed, comparing the results with the data from the physical scale model and analyzing the variation of the discharge coefficient, the streamlines, the velocity field, the pressure distribution, and the shear stress. Finally, based on the lessons learned from the physical and numerical modeling, a methodological guide was created to help any user with computational and hydraulic fluid mechanics knowledge develop good practice in computational and physical modeling.
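
    The discharge coefficient analyzed throughout this study can be back-calculated from measured quantities with the standard sharp-crested weir relation Q = (2/3) Cd L sqrt(2g) H^(3/2). The sketch below applies this textbook formula with invented numbers; it is not the study's geometry or data.

```python
# Hedged illustration with made-up numbers: back-calculate a discharge
# coefficient Cd from measured flow using the standard weir relation
# Q = (2/3) * Cd * L * sqrt(2 g) * H**1.5 (not the study's data).
import math

def discharge_coefficient(Q, L, H, g=9.81):
    """Q: flow rate (m^3/s); L: effective crest length (m); H: head (m)."""
    return Q / ((2.0 / 3.0) * L * math.sqrt(2.0 * g) * H ** 1.5)

print(discharge_coefficient(Q=15.0, L=40.0, H=0.35))  # hypothetical values
```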

  16. Clinical and microperimetric predictors of reading speed in low vision patients: a structural equation modeling approach.

    PubMed

    Giacomelli, Giovanni; Virgili, Gianni; Giansanti, Fabrizio; Sato, Giovanni; Cappello, Ezio; Cruciani, Filippo; Varano, Monica; Menchini, Ugo

    2013-06-27

    To investigate the simultaneous association of several psychophysical measures with reading ability in patients with mild and moderate low vision attending rehabilitation services. Standard measurements of reading ability (Minnesota Reading [MNREAD] charts), visual acuity (Early Treatment of Diabetic Retinopathy Study [ETDRS] charts), contrast sensitivity (Pelli-Robson charts), reading contrast threshold (Reading Explorer [REX] charts), retinal sensitivity, and fixation stability and localization (Micro Perimeter 1 [MP1] fundus perimetry) were obtained in 160 low vision patients with better eye visual acuity ranging from 0.3 to 1.0 logarithm of the minimum angle of resolution and affected by either age-related macular degeneration or diabetic retinopathy. All variables were moderately associated with reading performance measures (MNREAD reading speed and reading acuity and REX reading contrast threshold), as well as among each other. In a structural equation model, REX reading contrast threshold was highly associated with MNREAD reading speed (standardized coefficient, 0.63) and moderately associated with reading acuity (standardized coefficient, -0.30). REX test also mediated the effects of Pelli-Robson contrast sensitivity (standardized coefficient, 0.44), MP1 fixation eccentricity (standardized coefficient, -0.19), and the mean retinal sensitivity (standardized coefficient, 0.23) on reading performance. The MP1 fixation stability was associated with both MNREAD reading acuity (standardized coefficient, -0.24) and MNREAD reading speed (standardized coefficient, 0.23), while ETDRS visual acuity only affected reading acuity (standardized coefficient, 0.44). Fixation instability and contrast sensitivity loss are key factors limiting reading performance of patients with mild or moderate low vision. REX charts directly assess the impact of text contrast on letter recognition and text navigation and may be a useful aid in reading rehabilitation.

  17. Computer-aided diagnostic system for diffuse liver diseases with ultrasonography by neural networks

    NASA Astrophysics Data System (ADS)

    Ogawa, K.; Fukushima, M.; Kubota, K.; Hisa, N.

    1998-12-01

    The aim of the study is to establish a computer-aided diagnostic system for diffuse liver diseases such as chronic active hepatitis (CAH) and liver cirrhosis (LC). The authors introduced an artificial neural network for the classification of these diseases. In this system the neural network was trained with feature parameters extracted from B-mode ultrasonic images of normal liver (NL), CAH, and LC. For the input data the authors used six parameters calculated from a region of interest (ROI) and a parameter calculated from five ROIs in each image. These were the variance of pixel values, the coefficient of variation, the annular Fourier power spectrum, and the longitudinal Fourier power spectrum, all calculated for the ROI, and the variation of the means of the five ROIs. In addition, the authors used two more parameters calculated from a co-occurrence matrix of pixel values in the ROI. The results showed that the neural network classifier achieved a sensitivity of 83.8% for LC, a sensitivity of 90.0% for CAH, and a specificity of 93.6%, and the system was considered to be helpful for clinical and educational use.

  18. Microgravity Geyser and Flow Field Prediction

    NASA Technical Reports Server (NTRS)

    Hochstein, J. I.; Marchetta, J. G.; Thornton, R. J.

    2006-01-01

    Modeling and prediction of flow fields and geyser formation in microgravity cryogenic propellant tanks was investigated. A computational simulation was used to reproduce the test matrix of experimental results performed by other investigators, as well as to model the flows in a larger tank. An underprediction of geyser height by the model led to a sensitivity study to determine if variations in surface tension coefficient, contact angle, or jet pipe turbulence significantly influence the simulations. It was determined that computational geyser height is not sensitive to slight variations in any of these items. An existing empirical correlation based on dimensionless parameters was re-examined in an effort to improve the accuracy of geyser prediction. This resulted in the proposal for a re-formulation of two dimensionless parameters used in the correlation; the non-dimensional geyser height and the Bond number. It was concluded that the new non-dimensional geyser height shows little promise. Although further data will be required to make a definite judgement, the reformulation of the Bond number provided correlations that are more accurate and appear to be more general than the previously established correlation.

  19. Testing Lorentz and CPT invariance with ultracold neutrons

    NASA Astrophysics Data System (ADS)

    Martín-Ruiz, A.; Escobar, C. A.

    2018-05-01

    In this paper we investigate, within the standard model extension framework, the influence of Lorentz- and CPT-violating terms on gravitational quantum states of ultracold neutrons. Using a semiclassical wave packet, we derive the effective nonrelativistic Hamiltonian which describes the neutron's vertical motion by averaging the contributions from the coordinates perpendicular to the free-fall axis. We compute the physical implications of the Lorentz- and CPT-violating terms on the spectra. The comparison of our results with those obtained in the GRANIT experiment leads to an upper bound for the symmetry-violating c^n_{μν} coefficients. We find that ultracold neutrons are sensitive to the a^n_i and e^n_i coefficients, which thus far are unbounded by experiments in the neutron sector. We propose two additional problems involving ultracold neutrons which could be relevant for improving our current bounds, namely gravity-resonance spectroscopy and neutron whispering-gallery waves.

  20. Application of dynamic slip wall modeling to a turbine nozzle guide vane

    NASA Astrophysics Data System (ADS)

    Bose, Sanjeeb; Talnikar, Chaitanya; Blonigan, Patrick; Wang, Qiqi

    2015-11-01

    Resolution of near-wall turbulent structures is computationally prohibitive, necessitating wall-modeled large-eddy simulation approaches. Standard wall models are often based on assumptions of equilibrium boundary layers, which do not necessarily account for the dissimilarity of the momentum and thermal boundary layers. We investigate the use of the dynamic slip wall boundary condition (Bose and Moin, 2014) for the prediction of surface heat transfer on a turbine nozzle guide vane (Arts and de Rouvroit, 1992). The heat transfer coefficient is well predicted by the slip wall model, including capturing the transition to turbulence. The sensitivity of the heat transfer coefficient to the incident turbulence intensity will additionally be discussed. Lastly, the behavior of the thermal and momentum slip lengths will be contrasted between regions where the strong Reynolds analogy is invalid (near transition on the suction side) and an isothermal, zero-pressure-gradient flat plate boundary layer (Wu and Moin, 2010).

  1. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-10-16

    Sparse Bayesian learning (SBL) has brought renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student-t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.

  2. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

    The Gröbner basis is one of the most important tools in recent symbolic algebraic computation. However, computing a Gröbner basis for a given polynomial ideal is not easy, and it is not numerically stable if the polynomials have inexact coefficients. In this paper, we study what one should expect when computing a Gröbner basis with inexact coefficients, and we introduce a naive method to compute a Gröbner basis via the reduced row echelon form, for ideals generated by polynomial sets having a priori errors on their coefficients.
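
    As a concrete, exact-arithmetic example of the object under discussion, SymPy's groebner() computes a Gröbner basis for an ideal with rational coefficients; with inexact floating-point coefficients, as the paper discusses, small perturbations can change the result drastically.

```python
# Exact-arithmetic illustration: a Groebner basis for the ideal
# generated by x**2 + y**2 - 1 and x*y - 2, in lexicographic order.
from sympy import groebner, symbols

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order="lex")
print(G)  # prints the basis generators
```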

  3. Unravelling the hydrophobicity of urea in water using thermodiffusion: implications for protein denaturation.

    PubMed

    Niether, Doreen; Di Lecce, Silvia; Bresme, Fernando; Wiegand, Simone

    2018-01-03

    Urea is widely used as a protein denaturant in aqueous solutions. Experimental and computer simulation studies have shown that it dissolves in water almost ideally at high concentrations, introducing little disruption in the water hydrogen-bonded structure. However, at concentrations of the order of 5 M or higher, urea induces denaturation in a wide range of proteins. The origin of this behaviour is not completely understood, but it is believed to stem from a balance between urea-protein and urea-water interactions, with urea becoming possibly hydrophobic at a specific concentration range. The small changes observed in the water structure make it difficult to connect the denaturation effects to the solvation properties. Here we show that the exquisite sensitivity of thermodiffusion to solute-water interactions allows the identification of the onset of hydrophobicity of urea-water mixtures. The hydrophobic behaviour is reflected in a sign reversal of the temperature-dependent slope of the Soret coefficient, which is observed both in experiments and in non-equilibrium computer simulations at ∼5 M concentration of urea in water. This concentration regime corresponds to the one where abrupt changes in the denaturation of proteins are commonly observed. We show that the onset of hydrophobicity is intrinsically connected to the urea-water interactions. Our results allow us to identify correlations between the Soret coefficient and the partition coefficient, log P, hence establishing the thermodiffusion technique as a powerful approach to study hydrophobicity.

  4. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite-volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
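
    The incremental ('delta') form can be illustrated generically: rather than iterating on the standard form of A x = b, one repeatedly solves an approximate system for a correction driven by the current residual. The sketch below uses a simple Jacobi-type approximate operator as an assumption; it is a numerical illustration, not the paper's CFD implementation.

```python
# Generic numerical illustration of the "delta" (correction) form:
# solve A x = b by repeatedly solving M dx = r, where M approximates A
# (Jacobi here) and r = b - A x is the current residual.
import numpy as np

def incremental_solve(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    M = np.diag(np.diag(A))            # assumed approximate operator
    for _ in range(max_iter):
        r = b - A @ x                  # residual drives the correction
        if np.linalg.norm(r) < tol:
            break
        x += np.linalg.solve(M, r)     # incremental ("delta") update
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(incremental_solve(A, b), np.linalg.solve(A, b))  # should agree
```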

  5. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS, and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
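
    A minimal version of the Fourier amplitude sensitivity test workflow described above can be run with the SALib package, here on a toy stand-in model rather than PRMS; the parameter names and bounds are invented for illustration.

```python
# FAST sensitivity analysis with SALib on a toy model (not PRMS).
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["snow_melt_rate", "soil_capacity", "et_coeff"],  # hypothetical
    "bounds": [[0.0, 1.0]] * 3,
}

X = fast_sampler.sample(problem, 1000)              # FAST sampling design
Y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 2]    # toy "model output"
Si = fast.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
```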

  6. Impact of interpatient variability on organ dose estimates according to MIRD schema: Uncertainty and variance-based sensitivity analysis.

    PubMed

    Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia

    2018-05-17

    Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for the organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-18F-fluoropropyl)-L-glutamic acid (18F-FSPG) as an example. Since 18F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of the specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical 18F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with a minimum and a maximum for absorbed dose coefficients for urinary bladder wall and heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as source region for absorbed dose coefficients in highly vascularized target regions. For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has been for the first time applied to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.

  7. Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images With Diabetic Macular Edema Using Neutrosophic Sets and Graph Algorithms.

    PubMed

    Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K

    2018-05-01

    This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach to estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between the middle layers. The proposed method is evaluated using two publicly available datasets (Duke and Optima) and a third, local dataset from the UMN clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the Dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the Dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivities of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.
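
    The Dice coefficient used in the evaluation above is a standard overlap metric between two binary masks; a generic implementation (not the paper's code) looks like this:

```python
# Dice coefficient between two binary segmentation masks:
# dice = 2 * |A intersect B| / (|A| + |B|).
import numpy as np

def dice(mask_a, mask_b):
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 0]])
print(dice(pred, truth))  # 0.8
```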

  8. Computer modeling of obesity links theoretical energetic measures with experimental measures of fuel use for lean and obese men.

    PubMed

    Rossow, Heidi A; Calvert, C Chris

    2014-10-01

    The goal of this research was to use a computational model of human metabolism to predict energy metabolism for lean and obese men. The model is composed of 6 state variables representing amino acids, muscle protein, visceral protein, glucose, triglycerides, and fatty acids (FAs). Differential equations represent carbohydrate, amino acid, and FA uptake and output by tissues based on ATP creation and use for both lean and obese men. Model parameterization is based on data from previous studies. Results from sensitivity analyses indicate that model predictions of resting energy expenditure (REE) and respiratory quotient (RQ) are dependent on FA and glucose oxidation rates with the highest sensitivity coefficients (0.6, 0.8 and 0.43, 0.15, respectively, for lean and obese models). Metabolizable energy (ME) is influenced by ingested energy intake with a sensitivity coefficient of 0.98, and a phosphate-to-oxygen ratio by FA oxidation rate and amino acid oxidation rate (0.32, 0.24 and 0.55, 0.65 for lean and obese models, respectively). Simulations of previously published studies showed that the model is able to predict ME ranging from 6.6 to 9.3 with 0% differences between published and model values, and RQ ranging from 0.79 to 0.86 with 1% differences between published and model values. REEs >7 MJ/d are predicted with 6% differences between published and model values. Glucose oxidation increases by ∼0.59 mol/d, RQ increases by 0.03, REE increases by 2 MJ/d, and heat production increases by 1.8 MJ/d in the obese model compared with lean model simulations. Increased FA oxidation results in higher changes in RQ and lower relative changes in REE. These results suggest that because fat mass is directly related to REE and rate of FA oxidation, body fat content could be used as a predictor of RQ. © 2014 American Society for Nutrition.

  9. A Novel HMM Distributed Classifier for the Detection of Gait Phases by Means of a Wearable Inertial Sensor Network

    PubMed Central

    Taborri, Juri; Rossi, Stefano; Palermo, Eduardo; Patanè, Fabrizio; Cappa, Paolo

    2014-01-01

    In this work, we decided to apply a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on hierarchical weighted decisions from the outputs of scalar Hidden Markov Models (HMMs) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier, as well as the scalar and vectorial classifiers already proposed in the literature, with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. Distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. Distributed and scalar classifiers showed values of computational load about 100 times lower than that obtained with the vectorial classifier. In addition, distributed classifiers showed an excellent reliability for the evaluation of mean time and a good/excellent reliability for the coefficient of variation. In conclusion, due to the better performance and the small computational load, the proposed novel distributed classifier can be implemented in real-time applications of gait phase recognition, such as evaluating gait variability in patients or controlling active orthoses for the recovery of mobility of lower limb joints. PMID:25184488

  10. Sensitivity analysis of a multilayer, finite-difference model of the Southeastern Coastal Plain regional aquifer system; Mississippi, Alabama, Georgia, and South Carolina

    USGS Publications Warehouse

    Pernik, Meribeth

    1987-01-01

    The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in the storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased.

  11. Inner magnetospheric electron temperature and spacecraft potential estimated from concurrent Polar upper hybrid frequency and relative potential measurements

    NASA Astrophysics Data System (ADS)

    Boardsen, S. A.; Adrian, M. L.; Pfaff, R.; Menietti, J. D.

    2014-10-01

    Direct measurements of low (< 1 eV) electron temperatures are difficult to make in the Earth's inner magnetosphere for electron densities (Ne) < 3 × 10² cm⁻³. We compute these quantities by solving current balance equations in low-density regions. Concurrent measurements from the Polar spacecraft of the relative potential (VS - VP) between the spacecraft body and the electric field probe, and of the electron density (Ne) derived from the upper hybrid frequency (fUHR), were used in the current balance equations to solve for the electron temperature (Te), VS, and VP, where VP is the probe potential and VS is the spacecraft potential relative to the nearby plasma. The assumption that the bulk plasma electrons are Maxwellian is used in the computations. Our data set covered 1.5 years of measurements when fUHR was detectable (L < 10). The following "averaged" Te versus L relation for 3 < L < 5 was obtained: Te = 0.58 + 0.49 (L - 3) eV. This expression is in reasonable agreement with extrapolations of ionospheric Te measurements made by Akebono at lower altitudes. However, the solution is sensitive to the photoemission coefficients: substituting those of Scudder et al. (2000) with those of Escoubet et al. (1997) shifted the Te curve upward by ~1 eV. The solution is also sensitive to measurement error in VS - VP: applying a voltage shift of ±0.1 and ±0.2 V to VS - VP, the relative median error for our data set was computed to be 0.27 and 1.04, respectively. We believe that our Te values computed outside the plasmasphere are unrealistically low. We conclude that this method shows promise inside the plasmasphere but should be used with caution. We also quantified the Ne versus VS - VP relationship. The running median Ne versus VS - VP curve shows no significant variation over the 1.5 year period of the data set, suggesting that the photoemission coefficients did not change significantly over this time span. The Scudder et al. (2000) Ne model, based on only one Polar orbit, is in reasonable agreement (within a factor of 2) with our results.

  12. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity, and operation costs in modern power systems have stimulated the intensive study of an optimal Load Shedding and Generator Rescheduling (LSGR) strategy in the sense of a secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with the use of an approximately linearized model, and the LP algorithm is currently considered to be the most powerful tool for solving the LSGR problem. However, all of the LP algorithms presented in the literature essentially suffer from the following disadvantages: (i) the piecewise linearization involved in the LP algorithms requires the introduction of a number of new inequalities and slack variables, which places a significant burden on the computing facilities, and (ii) the objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in the process of computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows resulting from changes to bus injection power are taken into account in the proposed model by the introduction of sensitivity coefficients, which avoids the second of these disadvantages. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of the theory of optimization is included, in which results of the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory are evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements as well as the expected lower generation costs of the optimal solution as compared with those obtained by computing the optimal solution with LP. Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.

  13. Exhibit D modular design attitude control system study

    NASA Technical Reports Server (NTRS)

    Chichester, F.

    1984-01-01

    A dynamically equivalent four-body approximation of the NASTRAN finite element model supplied for the hybrid deployable truss was investigated to support the digital computer simulation of the ten-body model of the flexible space platform that incorporates the four-body truss model. Coefficients for the sensitivity of the state variables of the linearized model of the three-axis rotational dynamics of the prototype flexible spacecraft were generated with respect to the model's parameters. Software changes required to accommodate the addition of another rigid body to the five-body model of the rotational dynamics of the prototype flexible spacecraft were evaluated.

  14. The Spaeth/Richman contrast sensitivity test (SPARCS): design, reproducibility and ability to identify patients with glaucoma.

    PubMed

    Richman, Jesse; Zangalli, Camila; Lu, Lan; Wizov, Sheryl S; Spaeth, Eric; Spaeth, George L

    2015-01-01

    (1) To determine the ability of a novel, internet-based contrast sensitivity test titled the Spaeth/Richman Contrast Sensitivity Test (SPARCS) to identify patients with glaucoma. (2) To determine the test-retest reliability of SPARCS. A prospective, cross-sectional study of patients with glaucoma and controls was performed. Subjects were assessed by SPARCS and the Pelli-Robson chart. Reliability of each test was assessed by the intraclass correlation coefficient and the coefficient of repeatability. Sensitivity and specificity for identifying glaucoma were also evaluated. The intraclass correlation coefficient for SPARCS was 0.97 and 0.98 for Pelli-Robson. The coefficient of repeatability for SPARCS was ±6.7% and ±6.4% for Pelli-Robson. SPARCS identified patients with glaucoma with 79% sensitivity and 93% specificity. SPARCS has high test-retest reliability. It is easily accessible via the internet and identifies patients with glaucoma well. NCT01300949. Published by the BMJ Publishing Group Limited.

  15. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.

    2015-09-01

    A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global sensitivity algorithm, was implemented on the MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions was used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
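
    The Nash-Sutcliffe efficiency quoted above is straightforward to compute; the helper below is a generic implementation applied to illustrative arrays, not the study's streamgage data.

```python
# Nash-Sutcliffe efficiency: 1 - SSE(sim, obs) / variance-sum of obs.
# NSE = 1 is a perfect fit; NSE <= 0 is no better than the obs mean.
import numpy as np

def nse(simulated, observed):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

print(nse([9.8, 15.1, 30.2], [10.0, 15.0, 30.0]))  # close to 1 = good fit
```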

  16. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention, since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by an orthogonal polynomial kernel function and a Gaussian radial basis kernel function; thus the MKF possesses both the global characteristics of the polynomial kernel function and the local characteristics of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
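
    The mixed-kernel idea can be sketched with scikit-learn by passing a callable kernel to SVR that blends a polynomial kernel with a Gaussian RBF kernel. The weight, degree, and gamma below are illustrative assumptions; the paper's orthogonal-polynomial kernel and its Sobol post-processing are not reproduced here.

```python
# Sketch of a mixed kernel for SVR: a convex combination of a polynomial
# kernel and a Gaussian RBF kernel, passed to scikit-learn as a callable.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=1.0):
    # w blends global (polynomial) and local (RBF) behaviour.
    return w * polynomial_kernel(X, Y, degree=degree) \
        + (1 - w) * rbf_kernel(X, Y, gamma=gamma)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

model = SVR(kernel=mixed_kernel).fit(X, y)
print(model.predict(X[:3]))
```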

  17. Hyperfine excitation of C2H in collisions with ortho- and para-H2

    NASA Astrophysics Data System (ADS)

    Dagdigian, Paul J.

    2018-06-01

    Accurate estimation of the abundance of the ethynyl (C2H) radical requires accurate radiative and collisional rate coefficients. Hyperfine-resolved rate coefficients for (de-)excitation of C2H in collisions with ortho- and para-H2 are presented in this work. These rate coefficients were computed in time-independent close-coupling quantum scattering calculations that employed a potential energy surface recently computed at the coupled-clusters level of theory that describes the interaction of C2H with H2. Rate coefficients for temperatures from 10 to 300 K were computed for all transitions among the first 40 hyperfine energy levels of C2H in collisions with ortho- and para-H2. These rate coefficients were employed in simple radiative transfer calculations to simulate the excitation of C2H in typical molecular clouds.

  18. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  19. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  20. Modified free volume theory of self-diffusion and molecular theory of shear viscosity of liquid carbon dioxide.

    PubMed

    Nasrabad, Afshin Eskandari; Laghaei, Rozita; Eu, Byung Chan

    2005-04-28

    In previous work on the density fluctuation theory of transport coefficients of liquids, it was necessary to use empirical self-diffusion coefficients to calculate the transport coefficients (e.g., shear viscosity of carbon dioxide). In this work, the necessity of empirical input of the self-diffusion coefficients in the calculation of shear viscosity is removed, and the theory is thus made a self-contained molecular theory of transport coefficients of liquids, albeit it contains an empirical parameter in the subcritical regime. The required self-diffusion coefficients of liquid carbon dioxide are calculated by using the modified free volume theory for which the generic van der Waals equation of state and Monte Carlo simulations are combined to accurately compute the mean free volume by means of statistical mechanics. They have been computed as a function of density along four different isotherms and isobars. A Lennard-Jones site-site interaction potential was used to model the molecular carbon dioxide interaction. The density and temperature dependence of the theoretical self-diffusion coefficients are shown to be in excellent agreement with experimental data when the minimum critical free volume is identified with the molecular volume. The self-diffusion coefficients thus computed are then used to compute the density and temperature dependence of the shear viscosity of liquid carbon dioxide by employing the density fluctuation theory formula for shear viscosity as reported in an earlier paper (J. Chem. Phys. 2000, 112, 7118). The theoretical shear viscosity is shown to be robust and yields excellent density and temperature dependence for carbon dioxide. The pair correlation function appearing in the theory has been computed by Monte Carlo simulations.

  1. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with the matrix inverse operator of MATLAB, inaccuracies can be produced. In order to overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix; the FD coefficients of the high-order FD method can then be computed by the algorithm for Vandermonde systems, which avoids inverting the near-singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the algorithm based on the Vandermonde matrix has better accuracy than the matrix inverse operator of MATLAB.
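
    For orientation, the textbook construction of FD weights already leads to a Vandermonde system: the weights w for the m-th derivative on stencil points x satisfy sum_j w_j x_j^i = m! δ_{i,m} for i = 0..n-1. The sketch below solves that system directly with NumPy; it illustrates the structure the paper exploits, not the paper's own algorithm.

```python
# Finite difference weights from a Vandermonde system (textbook version).
import numpy as np
from math import factorial

def fd_weights(x, m):
    """x: 1-D stencil offsets; m: derivative order (m < len(x))."""
    n = len(x)
    V = np.vander(x, n, increasing=True).T   # row i holds x_j**i
    rhs = np.zeros(n)
    rhs[m] = factorial(m)                    # enforce the m-th Taylor term
    return np.linalg.solve(V, rhs)

# Classic second-derivative stencil on {-1, 0, 1}: weights [1, -2, 1].
print(fd_weights(np.array([-1.0, 0.0, 1.0]), 2))
```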

  2. Zonal and tesseral harmonic coefficients for the geopotential function, from zero to 18th order

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    Zonal and tesseral harmonic coefficients for the geopotential function are usually tabulated in normalized form to provide immediate information as to the relative significance of the coefficients in the gravity model. The normalized form of the geopotential coefficients cannot be used for computational purposes unless the gravity model has been modified to receive them. This modification is usually not done because the absolute or unnormalized form of the coefficients can be obtained from the simple mathematical relationship that relates the two forms. This computation can be quite tedious for hand calculation, especially for the higher order terms, and can be costly in terms of storage and execution time for machine computation. In this report, zonal and tesseral harmonic coefficients for the geopotential function are tabulated in absolute or unnormalized form. The report is designed to be used as a ready reference for both hand and machine calculation to save the user time and effort.
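
    The 'simple mathematical relationship' between the two forms is the standard geodesy normalization factor: C_nm = N_nm C̄_nm with N_nm = sqrt((2 - δ_0m)(2n + 1)(n - m)!/(n + m)!). A short sketch of this conventional conversion:

```python
# Conventional geodesy relation (offered as a sketch): unnormalized
# C_nm = N_nm * Cbar_nm, with
# N_nm = sqrt((2 - delta_0m) * (2n + 1) * (n - m)! / (n + m)!).
from math import factorial, sqrt

def unnormalize(cbar_nm, n, m):
    delta = 1 if m == 0 else 0
    n_nm = sqrt((2 - delta) * (2 * n + 1) * factorial(n - m) / factorial(n + m))
    return n_nm * cbar_nm

# The fully normalized degree-2 zonal value Cbar_20 ~ -4.8417e-4 maps to
# the familiar unnormalized value of about -1.0826e-3 (i.e., -J2).
print(unnormalize(-4.8417e-4, 2, 0))
```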

  3. Assessing Airflow Sensitivity to Healthy and Diseased Lung Conditions in a Computational Fluid Dynamics Model Validated In Vitro.

    PubMed

    Sul, Bora; Oppito, Zachary; Jayasekera, Shehan; Vanger, Brian; Zeller, Amy; Morris, Michael; Ruppert, Kai; Altes, Talissa; Rakesh, Vineet; Day, Steven; Robinson, Risa; Reifman, Jaques; Wallqvist, Anders

    2018-05-01

    Computational models are useful for understanding respiratory physiology. Crucial to such models are the boundary conditions specifying the flow conditions at truncated airway branches (terminal flow rates). However, most studies make assumptions about these values, which are difficult to obtain in vivo. We developed a computational fluid dynamics (CFD) model of airflows for steady expiration to investigate how terminal flows affect airflow patterns in respiratory airways. First, we measured in vitro airflow patterns in a physical airway model, using particle image velocimetry (PIV). The measured and computed airflow patterns agreed well, validating our CFD model. Next, we used the lobar flow fractions from a healthy or chronic obstructive pulmonary disease (COPD) subject as constraints to derive different terminal flow rates (i.e., three healthy and one COPD) and computed the corresponding airflow patterns in the same geometry. To assess airflow sensitivity to the boundary conditions, we used the correlation coefficient of the shape similarity (R) and the root-mean-square of the velocity magnitude difference (Drms) between two velocity contours. Airflow patterns in the central airways were similar across healthy conditions (minimum R, 0.80) despite variations in terminal flow rates but markedly different for COPD (minimum R, 0.26; maximum Drms, ten times that of healthy cases). In contrast, those in the upper airway were similar for all cases. Our findings quantify how variability in terminal and lobar flows contributes to airflow patterns in respiratory airways. They highlight the importance of using lobar flow fractions to examine physiologically relevant airflow characteristics.
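
    Generic versions of the two comparison metrics named above (shape-similarity correlation R and root-mean-square velocity difference Drms) can be written in a few lines; the arrays below stand in for sampled velocity contours and are not the study's data.

```python
# Pearson correlation R (shape similarity) and RMS difference (Drms)
# between two velocity-magnitude fields, flattened to 1-D samples.
import numpy as np

def compare_fields(u, v):
    u, v = np.ravel(u), np.ravel(v)
    r = np.corrcoef(u, v)[0, 1]            # shape similarity
    drms = np.sqrt(np.mean((u - v) ** 2))  # magnitude difference
    return r, drms

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = a * 1.1 + 0.05
print(compare_fields(a, b))
```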

  4. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  5. Simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1986-01-01

    A computer simulation was performed for a Geopotential Research Mission (GRM) to enable study of the gravitational sensitivity of the range/rate measurement between two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulation, identified as SGRM 8511, was conducted with two satellites in near circular, frozen orbits at 160 km altitude and separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 180, and extended to degree 300 in orders 0 to 10. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The characteristics of the simulation and the nature of the results are described.

  6. Unstructured Grid Euler Method Assessment for Longitudinal and Lateral/Directional Aerodynamic Performance Analysis of the HSR Technology Concept Airplane at Supersonic Cruise Speed

    NASA Technical Reports Server (NTRS)

    Ghaffari, Farhad

    1999-01-01

    Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration which includes the wing, fuselage, empennage, diverters, and flow through nacelles at M (sub infinity) = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results, derived from the latter assumptions, is thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.

  7. On-Track Testing as a Validation Method of Computational Fluid Dynamic Simulations of a Formula SAE Vehicle

    NASA Astrophysics Data System (ADS)

    Weingart, Robert

    This thesis is about the validation of a computational fluid dynamics simulation of a ground vehicle by means of a low-budget coast-down test. The vehicle is built to the standards of the 2014 Formula SAE rules. It is equipped with large wings in the front and rear of the car; the vertical loads on the tires are measured by specifically calibrated shock potentiometers. The coast-down test was performed on a runway of a local airport and is used to determine vehicle-specific coefficients such as drag, downforce, aerodynamic balance, and rolling resistance for different aerodynamic setups. The test results are then compared to the respective simulated results. The simulated drag deviates from the measured values by about 5%, while the simulated downforce deviates by up to 18%. Moreover, a sensitivity analysis of inlet velocities, ride heights, and pitch angles was performed with the help of the computational simulation.
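    The coast-down reduction can be sketched as a least-squares fit of the longitudinal equation of motion m dv/dt = -(F_rr + 0.5 rho Cd A v^2). This is a minimal illustration under simplifying assumptions (constant rolling resistance, no downforce coupling), not the thesis's actual procedure:

    ```python
    import numpy as np

    def fit_coastdown(t, v, mass, rho=1.225, area=1.0):
        """Least-squares fit of m dv/dt = -(F_rr + 0.5 rho Cd A v^2) to a
        coast-down record: t (s), v (m/s). Returns the rolling-resistance
        force F_rr (N) and the drag coefficient Cd (dimensionless)."""
        retard = -mass * np.gradient(v, t)           # retarding force from data
        X = np.column_stack([np.ones_like(v), 0.5 * rho * area * v ** 2])
        (F_rr, Cd), *_ = np.linalg.lstsq(X, retard, rcond=None)
        return F_rr, Cd
    ```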

  8. Determination of the optical absorption spectra of thin layers from their photoacoustic spectra

    NASA Astrophysics Data System (ADS)

    Bychto, Leszek; Maliński, Mirosław; Patryn, Aleksy; Tivanov, Mikhail; Gremenok, Valery

    2018-05-01

    This paper presents a new method for computing the optical absorption coefficient spectra from the normalized photoacoustic amplitude spectra of thin semiconductor samples deposited on optically transparent and thermally thick substrates. The method was tested on CuIn(Te0.7Se0.3)2 thin films. From the normalized photoacoustic amplitude spectra, the optical absorption coefficient spectra were computed with the new formula as well as with a numerical iterative method. From these spectra, the value of the energy gap of the thin film material and the type of the optical transitions were determined. From the experimental optical transmission spectra, the optical absorption coefficient spectra were also computed and compared with those obtained from the photoacoustic spectra.

  9. Broadband computation of the scattering coefficients of infinite arbitrary cylinders.

    PubMed

    Blanchard, Cédric; Guizal, Brahim; Felbacq, Didier

    2012-07-01

    We employ a time-domain method to compute the near field on a contour enclosing infinitely long cylinders of arbitrary cross section and constitution. We therefore recover the cylindrical Hankel coefficients of the expansion of the field outside the circumscribed circle of the structure. The recovered coefficients enable the wideband analysis of complex systems, e.g., the determination of the radar cross section becomes straightforward. The prescription for constructing such a numerical tool is provided in great detail. The method is validated by computing the scattering coefficients for a homogeneous circular cylinder illuminated by a plane wave, a problem for which an analytical solution exists. Finally, some radiation properties of an optical antenna are examined by employing the proposed technique.
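    For the validation case, the analytical coefficients of a homogeneous circular cylinder follow from continuity of the tangential fields at the surface. A sketch for normal incidence and TM polarization, under the stated expansion convention (signs depend on that convention, so treat it as illustrative rather than the paper's formulation):

    ```python
    import numpy as np
    from scipy.special import jv, jvp, hankel1, h1vp

    def tm_cylinder_coeffs(m, x, nmax):
        """Scattering coefficients b_n of an infinite homogeneous circular
        cylinder, normal incidence, TM polarization (E parallel to axis),
        writing the scattered field as sum_n b_n H_n^(1)(k rho) exp(i n phi).
        m is the relative refractive index and x = k a the size parameter;
        the formula follows from continuity of E_z and H_phi at rho = a."""
        n = np.arange(-nmax, nmax + 1)
        num = m * jvp(n, m * x) * jv(n, x) - jv(n, m * x) * jvp(n, x)
        den = jv(n, m * x) * h1vp(n, x) - m * jvp(n, m * x) * hankel1(n, x)
        return n, num / den

    n, b = tm_cylinder_coeffs(1.5, 2.0, nmax=12)   # plane-wave validation case
    ```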

  10. Computation of turbulence and dispersion of cork in the NETL riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiradilok, Veeraya; Gidaspow, Dimitri; Breault, R.W.

    The knowledge of dispersion coefficients is essential for reliable design of gasifiers. However, a literature review has shown that dispersion coefficients in fluidized beds differ by more than five orders of magnitude. This study presents a comparison of the computed axial solids dispersion coefficients for cork particles to the NETL riser cork data. The turbulence properties, the Reynolds stresses, the granular temperature spectra and the radial and axial gas and solids dispersion coefficients are computed. The standard kinetic theory model described in Gidaspow’s 1994 book, Multiphase Flow and Fluidization (Academic Press), and the IIT and Fluent codes were used to compute the measured axial solids volume fraction profiles for flow of cork particles in the NETL riser. The Johnson–Jackson boundary conditions and standard drag correlations were used. This study shows that the computed solids volume fractions for the low flux flow are within the experimental error of those measured, using a two-dimensional model. At higher solids fluxes the simulated solids volume fractions are close to the experimental measurements, but deviate significantly at the top of the riser. This disagreement is due to the use of simplified geometry in the two-dimensional simulation. There is good agreement between the experiment and the three-dimensional simulation for a high flux condition. This study concludes that the axial and radial gas and solids dispersion coefficients in risers operating in the turbulent flow regime can be computed using a multiphase computational fluid dynamics model.

  11. Tracking a Heavy Pollution Process in Beijing in Winter 2016 Using GRAPES-CUACE Adjoint Model

    NASA Astrophysics Data System (ADS)

    Wang, C.; An, X.; Zhai, S.; Zhaobin, S.

    2017-12-01

    By using the GRAPES-CUACE (Global-Regional Assimilation and Prediction System coupled with the CMA Unified Atmospheric Chemistry Environmental Forecasting System) adjoint model, the adjoint sensitivity of the heavy pollution process in the winter of 2016 in Beijing is traced, and the key emission sources and periods that most seriously impacted this heavy pollution process are analyzed. The findings suggest that the peak PM2.5 concentration responds rapidly to local emissions: the local hourly sensitivity coefficient peaks at 9.31 μg m-3 1 h before the objective time. The cumulative sensitivity coefficient shows that local emissions dominate within 20 h before the objective time. The surrounding contribution accumulates from the neighboring sources of Beijing, Tianjin, Hebei and Shanxi, whose cumulative contribution ratios within 72 h before the objective time are 34.2%, 3.0%, 49.4% and 13.4%, respectively. The hourly sensitivity coefficients show that the major contribution period of the Tianjin source is 1-26 h before the objective time, with an hourly contribution peak of 0.59 μg m-3 occurring 4 h before the objective time. The main contribution periods of the Hebei and Shanxi emission sources are 1-54 h and 14-53 h before the objective time, respectively, and their hourly sensitivity coefficients both show periodic fluctuations. The Hebei source shows three sensitivity coefficient peaks of 3.45 μg m-3, 4.27 μg m-3 and 0.71 μg m-3, appearing 4 h, 16 h and 38 h before the objective time, respectively. The sensitivity coefficient of the Shanxi source peaks twice, at 1.41 μg m-3 and 0.64 μg m-3, 24 h and 45 h before the objective time, respectively.

  12. Transient Seepage for Levee Engineering Analyses

    NASA Astrophysics Data System (ADS)

    Tracy, F. T.

    2017-12-01

    Historically, steady-state seepage analyses have been a key tool for designing levees by practicing engineers. However, with the advances in computer modeling, transient seepage analysis has become a potentially viable tool. A complication is that the levees usually have partially saturated flow, and this is significantly more complicated in transient flow. This poster illustrates four elements of our research in partially saturated flow relating to the use of transient seepage for levee design: (1) a comparison of results from SEEP2D, SEEP/W, and SLIDE for a generic levee cross section common to the southeastern United States; (2) the results of a sensitivity study of varying saturated hydraulic conductivity, the volumetric water content function (as represented by van Genuchten), and volumetric compressibility; (3) a comparison of when soils do and do not exhibit hysteresis; and (4) a description of proper and improper use of transient seepage in levee design. The variables considered for the sensitivity and hysteresis studies are pore pressure beneath the confining layer at the toe, the flow rate through the levee system, and a levee saturation coefficient varying between 0 and 1. Getting results for SEEP2D, SEEP/W, and SLIDE to match proved more difficult than expected. After some effort, the results matched reasonably well. Differences in results were caused by various factors, including bugs, different finite element meshes, different numerical formulations of the system of nonlinear equations to be solved, and differences in convergence criteria. Varying volumetric compressibility affected the above test variables the most. The levee saturation coefficient was most affected by the use of hysteresis. The improper use of pore pressures from a transient finite element seepage solution imported into a slope stability computation was found to be the most grievous mistake in using transient seepage in the design of levees.

  13. Computer-aided diagnosis of pulmonary diseases using x-ray darkfield radiography

    NASA Astrophysics Data System (ADS)

    Einarsdóttir, Hildur; Yaroshenko, Andre; Velroyen, Astrid; Bech, Martin; Hellbach, Katharina; Auweter, Sigrid; Yildirim, Önder; Meinel, Felix G.; Eickelberg, Oliver; Reiser, Maximilian; Larsen, Rasmus; Kjær Ersbøll, Bjarne; Pfeiffer, Franz

    2015-12-01

    In this work we develop a computer-aided diagnosis (CAD) scheme for classification of pulmonary disease for grating-based x-ray radiography. In addition to conventional transmission radiography, the grating-based technique provides a dark-field imaging modality, which utilizes the scattering properties of the x-rays. This modality has shown great potential for diagnosing early stage emphysema and fibrosis in mouse lungs in vivo. The CAD scheme is developed to assist radiologists and other medical experts to develop new diagnostic methods when evaluating grating-based images. The scheme consists of three stages: (i) automatic lung segmentation; (ii) feature extraction from lung shape and dark-field image intensities; (iii) classification between healthy, emphysema and fibrosis lungs. A study of 102 mice was conducted with 34 healthy, 52 emphysema and 16 fibrosis subjects. Each image was manually annotated to build an experimental dataset. System performance was assessed by: (i) determining the quality of the segmentations; (ii) validating emphysema and fibrosis recognition by a linear support vector machine using leave-one-out cross-validation. In terms of segmentation quality, we obtained an overlap percentage (Ω) of 92.63 ± 3.65%, a Dice Similarity Coefficient (DSC) of 89.74 ± 8.84% and a Jaccard Similarity Coefficient of 82.39 ± 12.62%. For classification, the accuracy, sensitivity and specificity of diseased lung recognition was 100%. Classification between emphysema and fibrosis resulted in an accuracy of 93%, whilst the sensitivity was 94% and specificity 88%. In addition to the automatic classification of lungs, deviation maps created by the CAD scheme provide a visual aid for medical experts to further assess the severity of pulmonary disease in the lung, and highlights regions affected.
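    The three segmentation metrics reduce to set operations on binary masks. A short sketch, with the overlap percentage Ω taken as intersection over reference area (one common definition; the paper's exact definition may differ):

    ```python
    import numpy as np

    def segmentation_scores(pred, ref):
        """Overlap percentage (intersection over reference area), Dice
        similarity coefficient, and Jaccard coefficient for boolean masks."""
        pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
        inter = np.logical_and(pred, ref).sum()
        union = np.logical_or(pred, ref).sum()
        omega = 100.0 * inter / ref.sum()
        dice = 100.0 * 2.0 * inter / (pred.sum() + ref.sum())
        jaccard = 100.0 * inter / union
        return omega, dice, jaccard
    ```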

  14. Software-based on-site estimation of fractional flow reserve using standard coronary CT angiography data.

    PubMed

    De Geer, Jakob; Sandstedt, Mårten; Björkholm, Anders; Alfredsson, Joakim; Janzon, Magnus; Engvall, Jan; Persson, Anders

    2016-10-01

    The significance of a coronary stenosis can be determined by measuring the fractional flow reserve (FFR) during invasive coronary angiography. Recently, methods have been developed which claim to be able to estimate FFR using image data from standard coronary computed tomography angiography (CCTA) exams. This study evaluates the accuracy of non-invasively computed fractional flow reserve (cFFR) from CCTA. A total of 23 vessels in 21 patients who had undergone both CCTA and invasive angiography with FFR measurement were evaluated using a cFFR software prototype. The cFFR results were compared to the invasively obtained FFR values. Correlation was calculated using Spearman's rank correlation, and agreement using the intraclass correlation coefficient (ICC). Sensitivity, specificity, accuracy, negative predictive value, and positive predictive value for significant stenosis (defined as both FFR ≤0.80 and FFR ≤0.75) were calculated. The mean cFFR value for the whole group was 0.81 and the corresponding mean invasive FFR value was 0.84. The cFFR sensitivity for significant stenosis (FFR ≤0.80/0.75) on a per-lesion basis was 0.83/0.80, specificity was 0.76/0.89, and accuracy 0.78/0.87. The positive predictive value was 0.56/0.67 and the negative predictive value was 0.93/0.94. The Spearman rank correlation coefficient was ρ = 0.77 (P < 0.001) and ICC = 0.73 (P < 0.001). This particular CCTA-based cFFR software prototype allows for a rapid, non-invasive on-site evaluation of cFFR. The results are encouraging, and cFFR may in the future help in the triage to invasive coronary angiography.

  15. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  16. Numerical convergence of the self-diffusion coefficient and viscosity obtained with Thomas-Fermi-Dirac molecular dynamics.

    PubMed

    Danel, J-F; Kazandjian, L; Zérah, G

    2012-06-01

    Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. Over the large domain of coupling constants considered, both the self-diffusion coefficient and the viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.
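    The Green-Kubo relation for self-diffusion, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt, can be sketched as follows; per the abstract's recommendation, D is read off where the running integral plateaus rather than by extrapolation (array shapes and the rectangle rule are assumptions of this sketch):

    ```python
    import numpy as np

    def green_kubo_self_diffusion(vel, dt):
        """Velocity autocorrelation function and the running Green-Kubo
        integral D(t) = (1/3) int_0^t <v(0).v(s)> ds for a trajectory
        `vel` of shape (nsteps, natoms, 3) sampled every `dt`."""
        nsteps = vel.shape[0]
        nlag = nsteps // 2
        vacf = np.array([np.mean(np.sum(vel[: nsteps - lag] * vel[lag:], axis=-1))
                         for lag in range(nlag)])
        running_D = np.cumsum(vacf) * dt / 3.0   # rectangle rule; read off plateau
        return vacf, running_D
    ```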

  17. Numerical convergence of the self-diffusion coefficient and viscosity obtained with Thomas-Fermi-Dirac molecular dynamics

    NASA Astrophysics Data System (ADS)

    Danel, J.-F.; Kazandjian, L.; Zérah, G.

    2012-06-01

    Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. Over the large domain of coupling constants considered, both the self-diffusion coefficient and the viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.

  18. Improved Gaussian Beam-Scattering Algorithm

    NASA Technical Reports Server (NTRS)

    Lock, James A.

    1995-01-01

    The localized model of the beam-shape coefficients for Gaussian beam-scattering theory by a spherical particle provides a great simplification in the numerical implementation of the theory. We derive an alternative form for the localized coefficients that is more convenient for numerical computation and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident the fact that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.

  19. Response Surface Modeling (RSM) Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
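    The first two stages of the "Automated RSM" workflow, Latin Hypercube sampling followed by a Gaussian-process fit, can be sketched with standard libraries. The TPMC drag calculation is replaced here by a hypothetical stand-in function, so this illustrates only the structure of the pipeline:

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def tpmc_drag(X):
        """Hypothetical stand-in for a Test Particle Monte Carlo run that
        returns a drag coefficient for each row of normalized inputs."""
        return 2.2 + 0.3 * X[:, 0] - 0.1 * X[:, 1] ** 2

    sampler = qmc.LatinHypercube(d=2, seed=0)
    X = sampler.random(n=1000)                    # 1,000-member LHS ensemble
    y = tpmc_drag(X)                              # one "TPMC" run per member
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  normalize_y=True).fit(X, y)
    cd_mean, cd_std = gp.predict(np.array([[0.4, 0.6]]), return_std=True)
    ```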

  20. The relationship between coefficient of restitution and state of charge of zinc alkaline primary LR6 batteries [Bouncing alkaline batteries: A basic solution]

    DOE PAGES

    Bhadra, S.; Hertzberg, B. J.; Croft, M.; ...

    2015-03-13

    The coefficient of restitution of alkaline batteries has been shown to increase as a function of depth of discharge. In this work, using non-destructive mechanical testing, the change in coefficient of restitution is compared to in situ energy-dispersive x-ray diffraction data to determine the cause of the macroscopic change in coefficient of restitution. The increase in coefficient of restitution correlates with the formation of a percolation pathway of ZnO within the anode of the cell, and the coefficient of restitution saturates at a value of 0.63 ± 0.05 at 50% state of charge, when the anode has densified into a porous ZnO solid. Of note, the sensitivity of the coefficient of restitution to the amount of ZnO formation rivals that of in situ energy-dispersive x-ray diffraction spectroscopy.

  1. Sensitivity analysis of horizontal heat and vapor transfer coefficients for a cloud-topped marine boundary layer during cold-air outbreaks. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chang, Y. V.

    1986-01-01

    The effects of external parameters on the surface heat and vapor fluxes into the marine atmospheric boundary layer (MABL) during cold-air outbreaks are investigated using the numerical model of Stage and Businger (1981a). These fluxes are nondimensionalized using the horizontal heat (g1) and vapor (g2) transfer coefficient method first suggested by Chou and Atlas (1982) and further formulated by Stage (1983a). In order to simplify the problem, the boundary layer is assumed to be well mixed and horizontally homogeneous, and to have linear shoreline soundings of equivalent potential temperature and mixing ratio. Modifications of initial surface flux estimates, time step limitation, and termination conditions are made to the MABL model to obtain accurate computations. The dependence of g1 and g2 in the cloud topped boundary layer on the external parameters (wind speed, divergence, sea surface temperature, radiative sky temperature, cloud top radiation cooling, and initial shoreline soundings of temperature, and mixing ratio) is studied by a sensitivity analysis, which shows that the uncertainties of horizontal transfer coefficients caused by changes in the parameters are reasonably small.

  2. A comparative analysis of magnetic resonance imaging and high-resolution peripheral quantitative computed tomography of the hand for the detection of erosion repair in rheumatoid arthritis.

    PubMed

    Regensburger, Adrian; Rech, Jürgen; Englbrecht, Matthias; Finzel, Stephanie; Kraus, Sebastian; Hecht, Karolin; Kleyer, Arnd; Haschka, Judith; Hueber, Axel J; Cavallaro, Alexander; Schett, Georg; Faustini, Francesca

    2015-09-01

    To investigate whether MRI allows the detection of osteosclerosis as a sign of repair of bone erosions compared with high-resolution peripheral quantitative computed tomography (HR-pQCT) as a reference and whether the presence of osteosclerosis on HR-pQCT is linked to synovitis and osteitis on MRI. A total of 103 RA patients underwent HR-pQCT and MRI of the dominant hand. The presence and size of erosions and the presence and extent (grades 0-2) of osteosclerosis were assessed by both imaging modalities, focusing on MCP 2 and 3 and wrist joints. By MRI, the presence and grading of osteitis and synovitis were assessed according to the Rheumatoid Arthritis MRI Score (RAMRIS). Parallel evaluation was feasible by both modalities on 126 bone erosions. Signs of osteosclerosis were found on 87 erosions by HR-pQCT and on 22 by MRI. False-positive results (MRI(+)CT(-)) accounted for 3%, while false-negative results (MRI(-)CT(+)) accounted for 76%. MRI sensitivity for the detection of osteosclerosis was 24% and specificity was 97%. The semi-quantitative scoring of osteosclerosis was reliable between MRI and HR-pQCT [intraclass correlation coefficient 0.917 (95% CI 0.884, 0.941), P < 0.001]. The presence of osteosclerosis on HR-pQCT showed a trend towards an inverse relationship to the occurrence and extent of osteitis on MRI [χ²(1) = 3.285; ϕ coefficient = -0.124; P = 0.070] but not to synovitis [χ²(1) = 0.039; ϕ coefficient = -0.14; P = 0.844]. MRI can only rarely detect osteosclerosis associated with bone erosions in RA. Indeed, the sensitivity compared with HR-pQCT is limited, while the specificity is high. The presence of osteitis makes osteosclerosis more unlikely, whereas the presence of synovitis is not related to osteosclerosis.

  3. Update of aircraft profile data for the Integrated Noise Model computer program, vol 1: final report

    DOT National Transportation Integrated Search

    1992-03-01

    This report provides aircraft takeoff and landing profiles, aircraft aerodynamic performance coefficients and engine performance coefficients for the aircraft data base (Database 9) in the Integrated Noise Model (INM) computer program. Flight profile...

  4. Presentation of computer code SPIRALI for incompressible, turbulent, plane and spiral grooved cylindrical and face seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.

    1994-01-01

    A viewgraph presentation shows the capabilities of the computer code SPIRALI, which include: computation of rotor dynamic coefficients, flow, and power loss for cylindrical and face seals; treatment of turbulent, laminar, Couette, and Poiseuille dominated flows; fluid inertia effects; rotor dynamic coefficients in three (face) or four (cylindrical) degrees of freedom; effects of spiral grooves; user-definable transverse film geometry including circular steps and grooves; independent user-definable friction factor models for rotor and stator; and user-definable loss coefficients for sudden expansions and contractions.

  5. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
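    Wavelet shrinkage itself is simple to illustrate: keep only the largest coefficients of a wavelet decomposition of the complex amplitude and reconstruct. A generic sketch with PyWavelets (assumed available); the authors' specific algorithm, wavelet choice, and threshold rule may differ:

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    def shrink_field(field, wavelet="db4", keep=0.05):
        """Keep only the largest `keep` fraction of wavelet coefficients of
        a complex amplitude field and reconstruct the approximation."""
        parts = []
        for comp in (field.real, field.imag):
            coeffs = pywt.wavedec2(comp, wavelet)
            arr, slices = pywt.coeffs_to_array(coeffs)
            cut = np.quantile(np.abs(arr), 1.0 - keep)
            arr[np.abs(arr) < cut] = 0.0          # discard small coefficients
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            parts.append(pywt.waverec2(coeffs, wavelet))
        return parts[0] + 1j * parts[1]
    ```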

  6. Uncertainty propagation for SPECT/CT-based renal dosimetry in 177Lu peptide receptor radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina

    2015-11-01

    A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6%, and slightly higher when the root-mean-square error is considered. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.

  7. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and C(sub l,max) was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.

  8. Skin friction under pressure. The role of micromechanics

    NASA Astrophysics Data System (ADS)

    Leyva-Mendivil, Maria F.; Lengiewicz, Jakub; Limbert, Georges

    2018-03-01

    The role of contact pressure on skin friction has been documented in multiple experimental studies. Skin friction rises significantly in the low-pressure regime as load increases, while, after a critical pressure value is reached, the coefficient of friction of skin against an external surface becomes mostly insensitive to contact pressure. However, up to now, no study has elucidated the qualitative and quantitative nature of the interplay between contact pressure, the material and microstructural properties of the skin, the size of an indenting slider and the resulting measured macroscopic coefficient of friction. A mechanistic understanding of these aspects is essential for guiding the rational design of products intended to interact with the skin through optimally-tuned surface and/or microstructural properties. Here, an anatomically-realistic 2D multi-layer finite element model of the skin was embedded within a computational contact homogenisation procedure. The main objective was to investigate the sensitivity of macroscopic skin friction to the parameters discussed above, in addition to the local (i.e. microscopic) coefficient of friction defined at skin asperity level. This was accomplished via the design of a large-scale computational experiment featuring 312 analyses. Results confirmed the potentially major role of finite deformations of skin asperities on the resulting macroscopic friction. This effect was shown to be modulated by the level of contact pressure and relative size of skin surface asperities compared to those of a rigid slider. The numerical study also corroborated experimental observations concerning the existence of two contact pressure regimes where macroscopic friction steeply and non-linearly increases up to a critical value, and then remains approximately constant as pressure increases further. The proposed computational modelling platform offers attractive features which are beyond the reach of current analytical models of skin friction, namely, the ability to accommodate arbitrary kinematics, non-linear constitutive properties and the complex skin microstructure.

  9. Gold nanoclusters as contrast agents for fluorescent and X-ray dual-modality imaging.

    PubMed

    Zhang, Aili; Tu, Yu; Qin, Songbing; Li, Yan; Zhou, Juying; Chen, Na; Lu, Qiang; Zhang, Bingbo

    2012-04-15

    Multimodal imaging is an alternative approach to improving the sensitivity of early cancer diagnosis. In this study, gold nanoclusters (Au NCs) that combine high fluorescence with a strong X-ray absorption coefficient are synthesized as contrast agents (CAs) for fluorescent and X-ray dual-modality imaging. The experimental results show that the as-prepared Au NCs are well constructed, with ultrasmall sizes, reliable fluorescent emission, high computed tomography (CT) values and good biocompatibility. In vivo imaging results indicate that the obtained Au NCs are capable of fluorescence- and X-ray-enhanced imaging.

  10. Space shuttle SRM plume expansion sensitivity analysis. [flow characteristics of exhaust gases from solid propellant rocket engines

    NASA Technical Reports Server (NTRS)

    Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.

  11. Inter-annual and spatial variability of Hamon potential evapotranspiration model coefficients

    USGS Publications Warehouse

    McCabe, Gregory J.; Hay, Lauren E.; Bock, Andy; Markstrom, Steven L.; Atkinson, R. Dwight

    2015-01-01

    Monthly calibrated values of the Hamon PET coefficient (C) are determined for 109,951 hydrologic response units (HRUs) across the conterminous United States (U.S.). The calibrated coefficient values are determined by matching calculated mean monthly Hamon PET to mean monthly free-water surface evaporation. For most locations and months the calibrated coefficients are larger than the standard value reported by Hamon. The largest changes in the coefficients were for the late winter/early spring and fall months, whereas the smallest changes were for the summer months. Comparisons of PET computed using the standard value of C and computed using calibrated values of C indicate that for most of the conterminous U.S. PET is underestimated using the standard Hamon PET coefficient, except for the southeastern U.S.
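    One widely used form of the Hamon equation makes the role of the coefficient C explicit. The constants below follow common USGS usage and should be treated as an assumption rather than the paper's exact formulation:

    ```python
    import numpy as np

    def hamon_pet(tavg_c, daylength_hr, C=1.0):
        """Hamon potential evapotranspiration (mm/day) with an explicit
        coefficient C, as in the calibrated-coefficient approach. Formula
        constants follow common USGS usage; treat them as assumptions."""
        esat = 6.108 * np.exp(17.26939 * tavg_c / (tavg_c + 237.3))  # hPa
        rho_sat = 216.7 * esat / (tavg_c + 273.3)   # saturated vapor density, g/m3
        return C * 0.1651 * (daylength_hr / 12.0) * rho_sat
    ```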

  12. Experimental/Computational Approach to Accommodation Coefficients and its Application to Noble Gases on Aluminum Surface (Preprint)

    DTIC Science & Technology

    2009-02-03

    Computational approach to accommodation coefficients and its application to noble gases on an aluminum surface. Nathaniel Selden, University of Southern California, Los Angeles. [Remainder of record OCR-damaged; recoverable figure caption: Fig. 5, experimental and computed radiometric force for argon (left) and xenon.]

  13. Evaluation of Ground Vibrations Induced by Military Noise Sources

    DTIC Science & Technology

    2006-08-01

    Task 2—Determine the acoustic-to-seismic coupling coefficients C1 and C2. Task 3—Computational modeling of acoustically induced ground motion, using a simple model of blast sound interaction with the ground under the measured ground conditions. [Reconstructed from an OCR-damaged table-of-contents excerpt.]

  14. Analysis of differential absorption lidar technique for measurements of anhydrous hydrogen chloride from solid rocket motors using a deuterium fluoride laser

    NASA Technical Reports Server (NTRS)

    Bair, C. H.; Allario, F.

    1977-01-01

    An active optical technique (differential absorption lidar (DIAL)) for detecting, ranging, and quantifying the concentration of anhydrous HCl contained in the ground cloud emitted by solid rocket motors (SRM) is evaluated. Results are presented of an experiment in which absorption coefficients of HCl were measured for several deuterium fluoride (DF) laser transitions demonstrating for the first time that a close overlap exists between the 2-1 P(3) vibrational transition of the DF laser and the 1-0 P(6) absorption line of HCl, with an absorption coefficient of 5.64 (atm-cm)(sup -1). These measurements show that the DF laser can be an appropriate radiation source for detecting HCl in a DIAL technique. Development of a mathematical computer model to predict the sensitivity of DIAL for detecting anhydrous HCl in the ground cloud is outlined, and results that assume a commercially available DF laser as the radiation source are presented.
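    The DIAL retrieval underlying such a sensitivity model is the standard two-wavelength ratio of range-resolved returns. A textbook-form sketch (not the paper's specific computer model):

    ```python
    import numpy as np

    def dial_number_density(P_on, P_off, sigma_on, sigma_off, dR):
        """Range-resolved number density from on-line/off-line DIAL return
        profiles P(R) sampled at spacing dR, with absorption cross sections
        sigma in consistent units (e.g. cm^2 and cm), using the standard
        two-wavelength form n = ln[...] / (2 (sigma_on - sigma_off) dR)."""
        ratio = (P_off[1:] * P_on[:-1]) / (P_on[1:] * P_off[:-1])
        return np.log(ratio) / (2.0 * (sigma_on - sigma_off) * dR)
    ```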

  15. Analysis and optimal design of moisture sensor for rice grain moisture measurement

    NASA Astrophysics Data System (ADS)

    Jain, Sweety; Mishra, Pankaj Kumar; Thakare, Vandana Vikas

    2018-04-01

    This paper presents the analysis and design of a microstrip sensor for accurate determination of moisture content (MC) in rice grains, calibrated against the oven-drying technique, which is simple, fast, and less time-consuming than alternative techniques. The sensor is designed for low insertion loss; the reflection coefficient and maximum gain are -35 dB and 5.88 dB at 2.68 GHz. All relevant parameters, such as axial ratio, maximum gain, and the Smith chart, are discussed as aids in analyzing the moisture measurement. The variation of measured moisture percentage with the magnitude and phase of the transmission coefficient is investigated at selected frequencies. The microstrip moisture sensor consists of a single FR4 substrate layer of thickness 1.638 mm and is simulated with Computer Simulation Technology Microwave Studio (CST MWS). It is concluded that the proposed sensor is suitable for development into a complete sensor: it estimates the moisture content of rice grains accurately, is sensitive, compact, and versatile, and is suitable for determining the moisture content of other crops and agricultural products.

  16. Adequacy of selected evapotranspiration approximations for hydrologic simulation

    USGS Publications Warehouse

    Sumner, D.M.

    2006-01-01

    Evapotranspiration (ET) approximations, usually based on computed potential ET (PET) and diverse PET-to-ET conceptualizations, are routinely used in hydrologic analyses. This study presents an approach to incorporate measured (actual) ET data, increasingly available using micrometeorological methods, to define the adequacy of ET approximations for hydrologic simulation. The approach is demonstrated at a site where eddy correlation-measured ET values were available. A baseline hydrologic model incorporating measured ET values was used to evaluate the sensitivity of simulated water levels, subsurface recharge, and surface runoff to error in four ET approximations. An annually invariant pattern of mean monthly vegetation coefficients was shown to be most effective, despite the substantial year-to-year variation in measured vegetation coefficients. The temporal variability of available water (precipitation minus ET) at the humid, subtropical site was largely controlled by the relatively high temporal variability of precipitation, benefiting the effectiveness of coarse ET approximations, a result that is likely to prevail at other humid sites.

  17. Communication: A method to compute the transport coefficient of pure fluids diffusing through planar interfaces from equilibrium molecular dynamics simulations.

    PubMed

    Vermorel, Romain; Oulebsir, Fouad; Galliero, Guillaume

    2017-09-14

    The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable. This is because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contribution of the different physical mechanisms prone to impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit to the interpretation of the results obtained on single-layer membranes by means of complex non-equilibrium methods.

  18. Preserving differential privacy for similarity measurement in smart environments.

    PubMed

    Wong, Kok-Seng; Kim, Myung Ho

    2014-01-01

    Advances in both sensor technologies and network infrastructures have encouraged the development of smart environments to enhance people's lives and living styles. However, collecting and storing users' data in smart environments poses severe privacy concerns because these data may contain sensitive information about the subject. Hence, privacy protection is now an emerging issue that we need to consider, especially when data sharing is essential for analysis purposes. In this paper, we consider the case where two agents in the smart environment want to measure the similarity of their collected or stored data. We use the similarity coefficient function (FSC) as the measurement metric for comparison under the differential privacy model. Unlike the existing solutions, our protocol can facilitate more than one request to compute FSC without modifying the protocol. Our solution ensures privacy protection for both the inputs and the computed FSC results.

  19. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters, drawn from optical tomographic images, leads to better classification results for identifying human finger joints affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performance obtained by the proposed method was evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracy (AUC = 0.88) was reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.

  20. Retrieval of Macro- and Micro-Physical Properties of Oceanic Hydrosols from Polarimetric Observations

    NASA Technical Reports Server (NTRS)

    Ibrahim, Amir; Gilerson, Alexander; Chowdhary, Jacek; Ahmed, Samir

    2016-01-01

    Remote sensing has mainly relied on measurements of scalar radiance and its spectral and angular features to retrieve micro- and macro-physical properties of aerosols/hydrosols. However, it is recognized that measurements that include the polarimetric characteristics of light provide more intrinsic information about particulate scattering. To take advantage of this, we used vector radiative transfer (VRT) simulations and developed an analytical relationship to retrieve the macro- and micro-physical properties of oceanic hydrosols. Specifically, we investigated the relationship between the observed degree of linear polarization (DoLP) and the ratio of attenuation-to-absorption coefficients (c/a) in water, from which the scattering coefficient can be readily computed (b = c - a), after retrieving a. This relationship was parameterized for various scattering geometries, including sensor zenith/azimuth angles relative to the Sun's principal plane, and for varying Sun zenith angles. An inversion method was also developed for the retrieval of the microphysical properties of hydrosols, such as the bulk refractive index and the particle size distribution. The DoLP vs c/a relationship was tested and validated against in-situ measurements of underwater light polarization obtained by a custom-built polarimeter and measurements of the coefficients a and c, obtained using an in-water WET Labs (Western Environmental Technologies) ac-s in-situ spectrophotometer instrument package. These measurements confirmed the validity of the approach, with retrievals of attenuation coefficients showing a high coefficient of determination depending on the wavelength. We also performed a sensitivity analysis of the DoLP at the Top of Atmosphere (TOA) over coastal waters, showing the possibility of polarimetric remote sensing application for ocean color.

  1. Effects of the bottom boundary condition in numerical investigations of dense water cascading on a slope

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Alendal, Guttorm; Avlesen, Helge; Thiem, Øyvind

    2018-05-01

    The flow of dense water along continental slopes is considered. There is a large literature on the topic based on observations and laboratory experiments. In addition, there are many analytical and numerical studies of dense water flows. In particular, there is a sequence of numerical investigations using the dynamics of overflow mixing and entrainment (DOME) setup. In these papers, the sensitivity of the solutions to numerical parameters such as grid size and numerical viscosity coefficients and to the choices of methods and models is investigated. In earlier DOME studies, three different bottom boundary conditions and a range of vertical grid sizes are applied. In other parts of the literature on numerical studies of oceanic gravity currents, there are statements that appear to contradict choices made on bottom boundary conditions in some of the DOME papers. In the present study, we therefore address the effects of the bottom boundary condition and vertical resolution in numerical investigations of dense water cascading on a slope. The main finding of the present paper is that it is feasible to capture the bottom Ekman layer dynamics adequately and cost efficiently by using a terrain-following model system with a quadratic drag law and a drag coefficient computed to give near-bottom velocity profiles in agreement with the logarithmic law of the wall. Many studies of dense water flows are performed with a quadratic bottom drag law and a constant drag coefficient. It is shown that when using this bottom boundary condition, Ekman drainage will not be adequately represented. In other studies of gravity flow, a no-slip bottom boundary condition is applied. With no-slip and a very fine resolution near the seabed, the solutions are essentially equal to the solutions obtained with a quadratic drag law and a drag coefficient computed to produce velocity profiles matching the logarithmic law of the wall. However, with coarser resolution near the seabed, there may be a substantial artificial blocking effect when using no-slip.
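    The recommended bottom boundary condition amounts to choosing the quadratic-law drag coefficient from the logarithmic law of the wall. A minimal sketch, with z_b the height of the lowest model velocity point and z_0 a bottom roughness length (both assumptions of the illustration):

    ```python
    import numpy as np

    def log_law_drag_coefficient(z_b, z_0, kappa=0.4):
        """Quadratic-law drag coefficient chosen so that the near-bottom
        velocity profile matches the logarithmic law of the wall:
        C_D = (kappa / ln(z_b / z_0))^2."""
        return (kappa / np.log(z_b / z_0)) ** 2

    # Bottom stress magnitude per unit density for a 0.2 m/s near-bottom flow:
    tau = log_law_drag_coefficient(z_b=1.0, z_0=0.003) * 0.2 ** 2
    ```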

  2. Gas-surface interactions using accommodation coefficients for a dilute and a dense gas in a micro- or nanochannel: heat flux predictions using combined molecular dynamics and Monte Carlo techniques.

    PubMed

    Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D

    2014-05-01

    The influence of gas-surface interactions of a dilute gas confined between two parallel walls on the heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperature of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters like density and temperature is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient is increasing considerably when the mass ratio is decreased. An effective map of the heat flux depending on the accommodation coefficient is given and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics heat predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are transferred to compute the heat flux predictions for a dense gas. Comparison of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions are given. In the end, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with values of experimental results.
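    The accommodation coefficient extracted from the MD data is, in its usual thermal form, a simple ratio of temperature differences; a one-line sketch (the MD sampling and averaging details are omitted):

    ```python
    def thermal_accommodation(T_incident, T_reflected, T_wall):
        """Thermal accommodation coefficient from mean incident/reflected
        molecular temperatures and the wall temperature:
        alpha = (T_i - T_r) / (T_i - T_w); alpha = 1 is full accommodation."""
        return (T_incident - T_reflected) / (T_incident - T_wall)
    ```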

  3. Update of aircraft profile data for the Integrated Noise Model computer program, vol. 2 : appendix A aircraft takeoff and landing profiles

    DOT National Transportation Integrated Search

    1992-03-01

    This report provides aircraft takeoff and landing profiles, aircraft aerodynamic performance coefficients and engine performance coefficients for the aircraft data base (Database 9) in the Integrated Noise Model (INM) computer program. Flight profile...

  4. Observability Analysis of a MEMS INS/GPS Integration System with Gyroscope G-Sensitivity Errors

    PubMed Central

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-01-01

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications from significant g-sensitivity errors. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions exist for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. For two forms of the g-sensitivity coefficient matrix, we add the coefficients as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A globally observable condition of the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, could be made observable by maneuvering based on the conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved. PMID:25171122

  5. Observability analysis of a MEMS INS/GPS integration system with gyroscope G-sensitivity errors.

    PubMed

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-08-28

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications from significant g-sensitivity errors. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions exist for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. For two forms of the g-sensitivity coefficient matrix, we add the coefficients as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A globally observable condition of the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, could be made observable by maneuvering based on the conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved.

  6. Sensitivity of the reference evapotranspiration to key climatic variables during the growing season in the Ejina oasis, northwest China.

    PubMed

    Hou, Lan-Gong; Zou, Song-Bing; Xiao, Hong-Lang; Yang, Yong-Gang

    2013-01-01

    The standardized FAO56 Penman-Monteith model, regarded as the most reliable method under both humid and arid climatic conditions, provides reference evapotranspiration (ETo) estimates for planning and efficient use of agricultural water resources. Sensitivity analysis is important for understanding the relative importance of climatic variables to the variation of reference evapotranspiration. In this study, a non-dimensional relative sensitivity coefficient was employed to predict responses of ETo to perturbations of four climatic variables in the Ejina oasis, northwest China. A 20-year historical dataset of daily air temperature, wind speed, relative humidity and daily sunshine duration in the Ejina oasis was used in the analysis. Results show that daily sensitivity coefficients exhibited large fluctuations during the growing season, and that shortwave radiation was in general the most sensitive variable for the Ejina oasis, followed by air temperature, wind speed and relative humidity. According to this study, the response of ETo to perturbations of air temperature, wind speed, relative humidity and shortwave radiation can be predicted well from their sensitivity coefficients.
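
    The non-dimensional relative sensitivity coefficient used here is conventionally S_V = (dETo/dV) * (V/ETo). A hedged sketch of how it might be estimated by central differences; the toy_eto model below is a made-up linear stand-in, not the FAO56 Penman-Monteith equation:

```python
def relative_sensitivity(model, params, name, delta=0.01):
    """Non-dimensional relative sensitivity coefficient S = (dET/dV) * (V/ET),
    estimated with a central finite difference on parameter `name`."""
    base = model(**params)
    v = params[name]
    up = dict(params, **{name: v * (1 + delta)})
    dn = dict(params, **{name: v * (1 - delta)})
    det_dv = (model(**up) - model(**dn)) / (2 * delta * v)
    return det_dv * v / base

# Hypothetical toy ETo model (NOT the FAO56 equation), just to show the mechanics.
def toy_eto(radiation, temp, wind, rh):
    return 0.4 * radiation + 0.05 * temp + 0.3 * wind - 0.02 * rh

p = dict(radiation=20.0, temp=25.0, wind=2.0, rh=40.0)
for k in p:
    print(k, round(relative_sensitivity(toy_eto, p, k), 3))
```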

  7. NASA Glenn Coefficients for Calculating Thermodynamic Properties of Individual Species

    NASA Technical Reports Server (NTRS)

    McBride, Bonnie J.; Zehe, Michael J.; Gordon, Sanford

    2002-01-01

    This report documents the library of thermodynamic data used with the NASA Glenn computer program CEA (Chemical Equilibrium with Applications). This library, containing data for over 2000 solid, liquid, and gaseous chemical species for temperatures ranging from 200 to 20,000 K, is available for use with other computer codes as well. The data are expressed as least-squares coefficients to a seven-term functional form for Cp°(T)/R, with integration constants for H°(T)/RT and S°(T)/R. The NASA Glenn computer program PAC (Properties and Coefficients) was used to calculate thermodynamic functions and to generate the least-squares coefficients. PAC input was taken from a variety of sources. A complete listing of the database is given along with a summary of thermodynamic properties at 0 and 298.15 K.
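
    For reference, the NASA Glenn seven-term form is commonly written Cp°(T)/R = a1 T^-2 + a2 T^-1 + a3 + a4 T + a5 T^2 + a6 T^3 + a7 T^4. A small sketch evaluating that form; the coefficient values are placeholders for illustration, not entries from the CEA library:

```python
import numpy as np

def cp_over_R(T, a):
    """Seven-term NASA Glenn functional form for Cp°(T)/R:
    a1*T^-2 + a2*T^-1 + a3 + a4*T + a5*T^2 + a6*T^3 + a7*T^4."""
    a1, a2, a3, a4, a5, a6, a7 = a
    return a1 / T**2 + a2 / T + a3 + a4 * T + a5 * T**2 + a6 * T**3 + a7 * T**4

# Placeholder coefficients for illustration only; real values come from the
# CEA thermodynamic library for a given species and temperature interval.
a = [1.0e3, -1.0e1, 3.5, 1.0e-4, -1.0e-8, 1.0e-12, -1.0e-16]
print(cp_over_R(np.array([300.0, 1000.0, 3000.0]), a))
```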

  8. On computing Laplace's coefficients and their derivatives.

    NASA Astrophysics Data System (ADS)

    Gerasimov, I. A.; Vinnikov, E. L.

    An algorithm for computing Laplace's coefficients and their derivatives using recurrence relations is proposed. The A.G.M. (arithmetic-geometric mean) method is used to calculate the values L0(0) and L0(1). A FORTRAN program corresponding to the algorithm is given. Precision control was provided by numerical integration with Simpson's method. The behavior of Laplace's coefficients and their third derivatives with varying indices K, n for fixed values of the α-parameter is presented graphically.
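
    In celestial mechanics the Laplace coefficient is usually defined as b_s^(k)(alpha) = (2/pi) * integral from 0 to pi of cos(k*theta) / (1 - 2*alpha*cos(theta) + alpha^2)^s d(theta); the record's normalization may differ. A sketch of the Simpson-rule check the abstract mentions, with that standard definition assumed:

```python
import numpy as np

def laplace_coefficient(s, k, alpha, n=2000):
    """Laplace coefficient b_s^(k)(alpha) by composite Simpson's rule:
    b = (2/pi) * int_0^pi cos(k*t) / (1 - 2*alpha*cos(t) + alpha^2)^s dt."""
    if n % 2:            # Simpson's rule needs an even number of intervals
        n += 1
    t = np.linspace(0.0, np.pi, n + 1)
    f = np.cos(k * t) / (1.0 - 2.0 * alpha * np.cos(t) + alpha**2) ** s
    h = np.pi / n
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0      # odd interior points
    w[2:-1:2] = 2.0      # even interior points
    return (2.0 / np.pi) * (h / 3.0) * np.sum(w * f)

# Sanity check: at alpha = 0 and k = 0 the integrand is 1, so b = 2 exactly.
print(laplace_coefficient(0.5, 0, 0.0))   # 2.0
print(laplace_coefficient(0.5, 0, 0.3))   # b_{1/2}^(0)(0.3)
```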

  9. Tracking a Severe Pollution Event in Beijing in December 2016 with the GRAPES-CUACE Adjoint Model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; An, Xingqin; Zhai, Shixian; Sun, Zhaobin

    2018-02-01

    We traced the adjoint sensitivity of a severe pollution event in December 2016 in Beijing using the adjoint model of the GRAPES-CUACE (Global/Regional Assimilation and Prediction System coupled with the China Meteorological Administration Unified Atmospheric Chemistry Environmental Forecasting System). The key emission sources and periods affecting this severe pollution event are analyzed. For this analysis, we define 2000 Beijing Time on 3 December 2016, when PM2.5 reached its maximum concentration in Beijing, as the objective time. It is found that the local hourly sensitivity coefficient reaches a peak of 9.31 μg m-3 just 1 h before the objective time, suggesting that the PM2.5 concentration responds rapidly to local emissions. The accumulated sensitivity coefficient in Beijing is large during the 20-h period prior to the objective time, showing that local emissions are the most important in this period. The accumulated contribution rates of emissions from Beijing, Tianjin, Hebei, and Shanxi are 34.2%, 3.0%, 49.4%, and 13.4%, respectively, in the 72-h period before the objective time. The evolution of the hourly sensitivity coefficient shows that the main contribution from the Tianjin source occurs 1-26 h before the objective time, with a peak hourly contribution of 0.59 μg m-3 at 4 h before the objective time. The main contributions of the Hebei and Shanxi emission sources occur 1-54 and 14-53 h, respectively, before the objective time, and their hourly sensitivity coefficients both show periodic fluctuations. The Hebei source shows three sensitivity coefficient peaks of 3.45, 4.27, and 0.71 μg m-3 at 4, 16, and 38 h before the objective time, respectively. The sensitivity coefficient of the Shanxi source peaks twice, with values of 1.41 and 0.64 μg m-3 at 24 and 45 h before the objective time, respectively. Overall, the adjoint model is effective in tracking the crucial sources and key periods of emissions for the severe pollution event.

  10. [Validity of expired carbon monoxide and urine cotinine using dipstick method to assess smoking status].

    PubMed

    Park, Su San; Lee, Ju Yul; Cho, Sung-Il

    2007-07-01

    We investigated the validity of the dipstick method (Mossman Associates Inc., USA) and of the expired CO method for distinguishing between smokers and nonsmokers, and examined the factors related to each method. This study included 244 smokers and 50 ex-smokers who had quit for over 4 weeks, recruited from smoking cessation clinics at 4 local public health centers. We calculated the sensitivity, specificity and Kappa coefficient of each method to assess validity. We obtained the ROC curve, predictive values and agreement to determine the cutoff for the expired CO method. Finally, we identified the related factors and compared their relative effects using standardized regression coefficients. The dipstick method showed a sensitivity of 92.6%, specificity of 96.0% and Kappa coefficient of 0.79. The best cutoff value to distinguish smokers was 5-6 ppm. At 5 ppm, the expired CO method showed a sensitivity of 94.3%, specificity of 82.0% and Kappa coefficient of 0.73. At 6 ppm, the sensitivity, specificity and Kappa coefficient were 88.5%, 86.0% and 0.64, respectively. Therefore, the dipstick method had higher sensitivity and specificity than the expired CO method. Values from both the dipstick and expired CO methods increased significantly with smoking amount. With longer time since the last cigarette, expired CO showed a rapid decrease after 4 hours, whereas the dipstick method showed relatively stable levels for more than 4 hours. The dipstick and expired CO methods were both good indicators for assessing smoking status. However, the former showed higher sensitivity and specificity, and more stable levels over the hours after smoking, compared to the expired CO method.
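
    A sketch of the validity statistics reported here, computed from a 2x2 table. The counts below are hypothetical reconstructions chosen to be consistent with the reported dipstick figures (sensitivity 92.6%, specificity 96.0%, kappa of about 0.79), not the study's raw data:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Hypothetical counts for 244 smokers and 50 nonsmoking ex-smokers.
sens, spec, kappa = diagnostic_stats(tp=226, fp=2, fn=18, tn=48)
print(f"sensitivity {sens:.3f}, specificity {spec:.3f}, kappa {kappa:.2f}")
```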

  11. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel geometrical parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail, and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software was developed that dynamically changes the surface of the airplane configuration as the input design variables change. The software is user friendly and targeted toward the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.
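
    ADIFOR instruments Fortran source to propagate derivatives alongside values. As a language-agnostic sketch of the same forward-mode automatic differentiation idea, a minimal dual-number class computes a sensitivity coefficient exactly, without finite differencing; the lift-like function is a toy expression, not the paper's aerodynamic model:

```python
import math

class Dual:
    """Minimal forward-mode AD value: d = a + b*eps with eps^2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(d):
    # Chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(d.val), math.cos(d.val) * d.der)

# Sensitivity of a toy lift-like response f(a) = 2*pi*a + 0.1*sin(a)
# with respect to the design variable a, evaluated at a = 0.05 rad.
a = Dual(0.05, 1.0)                 # seed derivative da/da = 1
f = 2 * math.pi * a + 0.1 * sin(a)
print(f.val, f.der)                 # value and exact sensitivity df/da
```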

  12. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined, the forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach, and both approaches are shown to yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
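
    A schematic of the inverse-forward pairing described above, using a toy exponential-uptake curve in place of the biphasic-solute finite-bath model and scikit-learn's MLPRegressor for both networks. Everything here (model, noise level, architecture) is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0.1, 10.0, 20)

def concentration_curve(D):
    """Toy stand-in for the diffusion model: saturating uptake whose rate is
    set by the diffusion coefficient D (arbitrary units)."""
    return 1.0 - np.exp(-np.outer(np.atleast_1d(D), t))

# Synthetic training set; the added noise mimics the stochastic variation
# introduced during training in the paper.
D_train = rng.uniform(0.05, 1.0, 500)
curves = concentration_curve(D_train) + rng.normal(0, 0.01, (500, t.size))

inverse_ann = MLPRegressor((32, 32), max_iter=3000, random_state=0)
inverse_ann.fit(curves, D_train)                       # curve -> coefficient

forward_ann = MLPRegressor((32, 32), max_iter=3000, random_state=0)
forward_ann.fit(D_train.reshape(-1, 1), concentration_curve(D_train))  # coefficient -> curve

D_est = inverse_ann.predict(concentration_curve(0.4))[0]
check = forward_ann.predict([[D_est]])                 # forward net validates the estimate
err = np.abs(check - concentration_curve(0.4)).max()
print(f"estimated D = {D_est:.3f}, max curve mismatch = {err:.3f}")
```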

  13. The possibilities of improvement in the sensitivity of cancer fluorescence diagnostics by computer image processing

    NASA Astrophysics Data System (ADS)

    Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander

    2008-02-01

    Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and progression to cancer makes it possible to see early cancers and precancerous lesions often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy, using Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 in the original RGB color space was transformed into a space in which a vector of 46 values is assigned to every point with given xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminant, the attribute vector of each analyzed image point was reduced according to two defined classes: pathologic areas (foreground) and healthy areas (background). The four attributes with the highest Fisher coefficients, giving the greatest separation between pathologic (foreground) and healthy (background) points, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created, in which each point (x,y) corresponds to the vector (Y, H, a*, c_II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximization. This classifier determines the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The resulting probability map is presented by means of pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
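
    The feature-ranking step in Methods can be illustrated generically: for each of the 46 attributes, a Fisher discriminant ratio measures class separation, and the four highest-scoring attributes are kept. The data below are synthetic, with four informative features planted:

```python
import numpy as np

def fisher_ratio(x_fg, x_bg):
    """Fisher discriminant ratio for one feature across two classes:
    (mu1 - mu2)^2 / (var1 + var2)."""
    return (x_fg.mean() - x_bg.mean()) ** 2 / (x_fg.var() + x_bg.var())

rng = np.random.default_rng(2)
n_features = 46
fg = rng.normal(0.0, 1.0, (500, n_features))   # hypothetical pathologic pixels
bg = rng.normal(0.0, 1.0, (500, n_features))   # hypothetical healthy pixels
fg[:, [3, 11, 20, 37]] += 2.0                  # plant 4 informative features

scores = np.array([fisher_ratio(fg[:, j], bg[:, j]) for j in range(n_features)])
best4 = np.argsort(scores)[-4:][::-1]
print("top-4 features by Fisher ratio:", best4)  # recovers the planted ones
```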

  14. Description of a Computer Program Written for Approach and Landing Test Post Flight Data Extraction of Proximity Separation Aerodynamic Coefficients and Aerodynamic Data Base Verification

    NASA Technical Reports Server (NTRS)

    Homan, D. J.

    1977-01-01

    A computer program written to calculate the proximity aerodynamic force and moment coefficients of the Orbiter/Shuttle Carrier Aircraft (SCA) vehicles based on flight instrumentation is described. The Ground Reduced Aerodynamic Coefficients and Instrumentation Errors (GRACIE) program was developed as a tool to aid in flight test verification of the Orbiter/SCA separation aerodynamic data base. The program calculates the force and moment coefficients of each vehicle in proximity to the other, using the load measurement system data, flight instrumentation data, and the vehicle mass properties. The uncertainty in each coefficient is determined based on the quoted instrumentation accuracies. A subroutine manipulates the Orbiter/747 Carrier Separation Aerodynamic Data Book to calculate a comparable set of predicted coefficients for comparison to the calculated flight test data.

  15. Cryogenic fiber optic temperature sensor and method of manufacturing the same

    NASA Technical Reports Server (NTRS)

    Kochergin, Vladimir (Inventor)

    2012-01-01

    This invention teaches fiber optic temperature sensors for the cryogenic temperature range with improved sensitivity and resolution, and a method of making said sensors. In more detail, the present invention relates to enhancing the temperature sensitivity of fiber optic temperature sensors at cryogenic temperatures by utilizing nanomaterials with a thermal expansion coefficient that is smaller than that of the optical fiber but larger in absolute value, at least over a range of temperatures.

  16. Repeatability of quantitative FDG-PET/CT and contrast-enhanced CT in recurrent ovarian carcinoma: test-retest measurements for tumor FDG uptake, diameter, and volume.

    PubMed

    Rockall, Andrea G; Avril, Norbert; Lam, Raymond; Iannone, Robert; Mozley, P David; Parkinson, Christine; Bergstrom, Donald; Sala, Evis; Sarker, Shah-Jalal; McNeish, Iain A; Brenton, James D

    2014-05-15

    Repeatability of baseline FDG-PET/CT measurements has not been tested in ovarian cancer. This dual-center, prospective study assessed variation in tumor 2[18F]fluoro-2-deoxy-D-glucose (FDG) uptake, tumor diameter, and tumor volume from sequential FDG-PET/CT and contrast-enhanced computed tomography (CECT) in patients with recurrent platinum-sensitive ovarian cancer. Patients underwent two pretreatment baseline FDG-PET/CT (n = 21) and CECT (n = 20) examinations at two clinical sites with different PET/CT instruments. Patients were included if they had at least one target lesion in the abdomen with a standardized uptake value (SUV) maximum (SUVmax) of ≥ 2.5 and a long-axis diameter of ≥ 15 mm. Two independent reading methods were used to evaluate the repeatability of tumor diameter and SUV uptake: on site and at an imaging clinical research organization (CRO). Tumor volume reads were performed only by the CRO. In each reading set, target lesions were independently measured on sequential imaging. The median time between FDG-PET/CT scans was two days (range 1-7). For site reads, concordance correlation coefficients (CCC) for SUVmean, SUVmax, and tumor diameter were 0.95, 0.94, and 0.99, respectively. Repeatability coefficients were 16.3%, 17.3%, and 8.8% for SUVmean, SUVmax, and tumor diameter, respectively. Similar results were observed for CRO reads. The tumor volume CCC was 0.99, with a repeatability coefficient of 28.1%. There was excellent test-retest repeatability for FDG-PET/CT quantitative measurements across two sites and two independent reading methods. Cutoff values for determining change in SUVmean, SUVmax, and tumor volume establish limits for determining metabolic and/or volumetric response to treatment in platinum-sensitive relapsed ovarian cancer. ©2014 American Association for Cancer Research.
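
    Two of the statistics reported here can be sketched generically: Lin's concordance correlation coefficient for test-retest agreement, and a repeatability coefficient taken here (one common convention) as 1.96 times the SD of within-subject percentage differences. The paired SUVmax values below are simulated, not the trial data:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def repeatability_pct(x, y):
    """Repeatability coefficient (%) from test-retest pairs: 1.96 * SD of the
    within-subject percentage differences relative to the pair means."""
    d = 100.0 * (x - y) / ((x + y) / 2.0)
    return 1.96 * d.std(ddof=1)

# Hypothetical test-retest SUVmax pairs for 21 lesions.
rng = np.random.default_rng(3)
test = rng.uniform(3.0, 12.0, 21)
retest = test * rng.normal(1.0, 0.06, 21)
print(f"CCC = {lin_ccc(test, retest):.3f}, RC = {repeatability_pct(test, retest):.1f}%")
```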

  17. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility; the mCov-SI are therefore proposed in order to bound the magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  18. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  19. Analytical determination of propeller performance degradation due to ice accretion

    NASA Technical Reports Server (NTRS)

    Miller, T. L.

    1986-01-01

    A computer code has been developed which is capable of computing propeller performance for clean, glaze-iced, or rime-iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to depend on the choice of empirical correlation employed as well as proper specification of the radial icing extent.

  20. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
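
    As a generic sketch of the contrast the abstract draws (not the paper's R code), the snippet below propagates input uncertainties through a hypothetical measurement model by Monte Carlo and then builds a one-at-a-time budget of the kind supplement 1 suggests; with strong interactions the two can disagree:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x1, x2, x3):
    """Hypothetical measurement model; a stand-in for the mass-calibration example."""
    return x1 + 2.0 * x2 + 0.5 * x1 * x3

n = 100_000
means = np.array([1.0, 0.5, 0.0])
sds = np.array([0.02, 0.01, 0.05])
X = rng.normal(means, sds, (n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])
print(f"u(y) from full Monte Carlo: {y.std(ddof=1):.5f}")

# One-at-a-time budget: vary one input, hold the others at their means.
# This can miss interaction contributions (here the x1*x3 term) that the
# full Monte Carlo variance includes.
for i in range(3):
    Xi = np.tile(means, (n, 1))
    Xi[:, i] = X[:, i]
    yi = model(Xi[:, 0], Xi[:, 1], Xi[:, 2])
    print(f"u_{i+1} (contribution of x{i+1}): {yi.std(ddof=1):.5f}")
```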

  1. CFD Simulation On The Pressure Distribution For An Isolated Single-Story House With Extension: Grid Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.

    2018-04-01

    Damage due to wind-related disasters is increasing with global climate change. Many studies have examined the wind effects around low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command of the software, the correct input parameters, and an optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must be carried out to determine a suitable cell count for the final model, ensuring accurate results with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that a medium grid scheme can be used and is able to produce results of comparable accuracy to a finer grid scheme, as the difference in CP values was found to be insignificant.

  2. Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows

    NASA Astrophysics Data System (ADS)

    Njobuenwu, Derrick O.; Fairweather, Michael

    2017-08-01

    An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals' interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10-4, 1.0 × 10-3, and 5.0 × 10-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.

  3. Comparative Approach of MRI-Based Brain Tumor Segmentation and Classification Using Genetic Algorithm.

    PubMed

    Bahadure, Nilesh Bhaskarrao; Ray, Arun Kumar; Thethi, Har Pal

    2018-01-17

    The detection of a brain tumor and its classification from modern imaging modalities is a primary concern, but it is time-consuming and tedious work for radiologists or clinical supervisors. The accuracy of detection and classification of tumor stages performed by radiologists depends on their experience alone, so computer-aided technology is very important for improving diagnostic accuracy. In this study, to improve the performance of tumor detection, we investigated a comparative approach of different segmentation techniques and selected the best one by comparing their segmentation scores. Further, to improve the classification accuracy, a genetic algorithm is employed for the automatic classification of tumor stage. The classification decision is supported by extracting relevant features and by area calculation. The experimental results of the proposed technique are evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on segmentation score, accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The experimental results achieved 92.03% accuracy, 91.42% specificity, 92.36% sensitivity, and an average segmentation score between 0.82 and 0.93, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues from brain MR images. The experiments also yielded an average Dice similarity index coefficient of 93.79%, which indicates good overlap between the automatically extracted tumor regions and those manually extracted by radiologists.
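
    The Dice similarity index used for validation is easy to state: 2|A∩B| / (|A| + |B|) for binary masks A and B. A small sketch with hypothetical masks:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity index for two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical 2D masks: automated vs. manually delineated tumor region.
auto = np.zeros((64, 64), dtype=int)
auto[20:40, 20:40] = 1
manual = np.zeros((64, 64), dtype=int)
manual[22:42, 21:41] = 1
print(f"Dice = {dice_coefficient(auto, manual):.3f}")
```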

  4. The Role of Collateral Paths in Long-Range Diffusion of 3He in Lungs

    PubMed Central

    Conradi, Mark S.; Yablonskiy, Dmitriy A.; Woods, Jason C.; Gierada, David S.; Bartel, Seth-Emil T.; Haywood, Susan E.; Menard, Christopher

    2008-01-01

    Rationale and Objectives The hyperpolarized 3He long-range diffusion coefficient (LRDC) in lungs is sensitive to changes in lung structure due to emphysema, reflecting the increase in collateral paths resulting from tissue destruction. However, no clear understanding of LRDC in healthy lungs has emerged. Here we compare LRDC measured in healthy lungs with computer simulations of diffusion along the airway tree with no collateral connections. Materials and Methods Computer simulations of diffusion of spatially modulated spin magnetization were performed in computer generated, symmetric-branching models of lungs and compared with existing LRDC measurements in canine and human lungs. Results The simulations predict LRDC values of order 0.001 cm2/s, approximately 20 times smaller than the measured LRDC. We consider and rule out possible mechanisms for LRDC not included in the simulations: incomplete breath hold, cardiac motion, and passage of dissolved 3He through airway walls. However, a very low density of small (micron) holes in the airways is shown to account for the observed LRDC. Conclusion It is proposed that LRDC in healthy lungs is determined by small collateral pathways. PMID:18486004

  5. Short-term reproducibility of computed tomography-based lung density measurements in alpha-1 antitrypsin deficiency and smokers with emphysema.

    PubMed

    Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A

    2004-07-01

    To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent 3 scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and very low radiation doses result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantitating both subtypes of emphysema. A CT protocol with a radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.

  6. Detection of structural damage in multiwire cables by monitoring the entropy evolution of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Ibáñez, Flor; Baltazar, Arturo; Mijarez, Rito; Aranda, Jorge

    2015-03-01

    Multiwire cables are widely used in important civil structures. Since they are exposed to several dynamic and static loads, their structural health can be compromised. The cables can also be subjected to mechanical contact, tension, and energy propagation, in addition to changes in size and material within their wires. Due to the critical role played by multiwire cables, it is necessary to develop a non-destructive health monitoring method to maintain their structure and proper performance. Ultrasonic inspection using guided waves is a promising non-destructive damage monitoring technique for rods, single wires, and multiwire cables. The propagated guided waves are composed of an infinite number of vibrational modes, making their analysis difficult. In this work, an entropy-based method to identify small changes in non-stationary signals is proposed. A system to capture and post-process acoustic signals is implemented. The Discrete Wavelet Transform (DWT) is computed in order to obtain the reconstructed wavelet coefficients of the signals and to analyze the energy at different scales. The feasibility of using the concept of entropy evolution of non-stationary signals to detect damage in multiwire cables is evaluated. The results show a high correlation between the entropy value and the damage level of the cable. The proposed method has low sensitivity to noise and reduces the computational complexity found in a typical time-frequency analysis.
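
    A hedged sketch of the energy-entropy idea: decompose a signal with the DWT (here via the PyWavelets package), compute the relative energy per scale, and take its Shannon entropy; damage that spreads energy across scales raises the entropy. The signals and wavelet choice below are illustrative assumptions, not the paper's cable data:

```python
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy of the relative energy of DWT coefficients per scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()              # relative energy per scale
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 4096)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
damaged = healthy + 0.5 * rng.normal(size=t.size)   # broadband energy spread
print(f"healthy: {wavelet_entropy(healthy):.3f}, damaged: {wavelet_entropy(damaged):.3f}")
```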

  7. Virial coefficients and demixing in the Asakura-Oosawa model.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz

    2015-01-07

    The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.

  8. Numerical investigation of rarefaction effects in the vicinity of a sharp leading edge

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Gao, Zhenxun; Lee, Chunhian

    2014-12-01

    This paper presents a study of rarefaction effects on hypersonic flow over a sharp leading edge. Both a continuum approach and a kinetic method are employed: a widely used commercial computational fluid dynamics Navier-Stokes-Fourier (CFD-NSF) code, Fluent, together with a direct simulation Monte Carlo (DSMC) code developed by the authors, for simulation of the transition regime with Knudsen numbers ranging from 0.005 to 0.2. It is found that Fluent can predict the wall fluxes for hypersonic argon flow over the sharp leading edge in the lowest Kn case considered here (Kn = 0.005), while in the other cases it also agrees well with DSMC except near the sharp leading edge. Among the wall fluxes, the pressure coefficient is found to be the most sensitive to rarefaction, while heat transfer is the least sensitive. A parameter based on translational nonequilibrium with a cut-off value of 0.34 is proposed for continuum breakdown. The structure of the entropy and velocity profiles in the boundary layer is analyzed. It is also found that the ratio of the heat transfer coefficient to the skin friction coefficient remains uniform along the surface for the four cases considered.

  9. Testing Lorentz Symmetry with Lunar Laser Ranging

    NASA Astrophysics Data System (ADS)

    Bourgoin, A.; Hees, A.; Bouquillon, S.; Le Poncin-Lafitte, C.; Francou, G.; Angonin, M.-C.

    2016-12-01

    Lorentz symmetry violations can be parametrized by an effective field theory framework that contains both general relativity and the standard model of particle physics, called the standard-model extension (SME). We present new constraints on pure-gravity SME coefficients obtained by analyzing lunar laser ranging (LLR) observations. We use a new numerical lunar ephemeris computed in the SME framework and perform an LLR data analysis using a set of 20,721 normal points covering the period from August 1969 to December 2013. We emphasize that the linear combinations of SME coefficients to which LLR data are sensitive are not the same as those fitted in previous postfit residual analyses using LLR observations and based on theoretical grounds. We found no evidence for Lorentz violation at the level of 10^-8 for s¯^TX, 10^-12 for s¯^XY and s¯^XZ, 10^-11 for s¯^XX - s¯^YY and s¯^XX + s¯^YY - 2s¯^ZZ - 4.5s¯^YZ, and 10^-9 for s¯^TY + 0.43s¯^TZ. We improve previous constraints on SME coefficients by factors of up to 5 and 800 compared to postfit residual analyses of binary pulsars and LLR observations, respectively.

  10. Automated Detector of High Frequency Oscillations in Epilepsy Based on Maximum Distributed Peak Points.

    PubMed

    Ren, Guo-Ping; Yan, Jia-Qing; Yu, Zhi-Xin; Wang, Dan; Li, Xiao-Nan; Mei, Shan-Shan; Dai, Jin-Dong; Li, Xiao-Li; Li, Yun-Lin; Wang, Xiao-Fei; Yang, Xiao-Feng

    2018-02-01

    High frequency oscillations (HFOs) are considered a biomarker for epileptogenicity. Reliable automation of HFO detection is necessary for rapid and objective analysis, and it depends on accurate computation of the baseline. Although most existing automated detectors measure the baseline accurately in channels with rare HFOs, they lose accuracy in channels with frequent HFOs. Here, we propose a novel algorithm using the maximum distributed peak points method to improve the accuracy of baseline determination in channels with wide HFO activity ranges and to calculate a dynamic baseline. Interictal ripples (80-200 Hz), fast ripples (FRs, 200-500 Hz) and baselines in intracerebral EEGs from seven patients with intractable epilepsy were identified by experienced reviewers and by our computer-automated program, and the results were compared. We also compared the performance of our detector to four well-known detectors integrated in RIPPLELAB. The sensitivity and specificity of our detector were, respectively, 71% and 75% for ripples and 66% and 84% for FRs. Spearman's rank correlation coefficient comparing automated and manual detection was [Formula: see text] for ripples and [Formula: see text] for FRs ([Formula: see text]). In comparison to other detectors, our detector had relatively higher sensitivity and specificity. In conclusion, our automated detector is able to accurately calculate a dynamic iEEG baseline in channels with different levels of HFO activity using the maximum distributed peak points method, resulting in higher sensitivity and specificity than other available HFO detectors.

  11. Analytical study of the cruise performance of a class of remotely piloted, microwave-powered, high-altitude airplane platforms

    NASA Technical Reports Server (NTRS)

    Morris, C. E. K., Jr.

    1981-01-01

    Each cycle of the flight profile consists of a climb while the vehicle is tracked and powered by a microwave beam, followed by gliding flight back to a minimum altitude. Parameter variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the power transmission systems, the propulsion system, and winds. Results show that wind effects limit reductions in wing loading and increases in lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.

  12. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model, BIOME-BGC, was run for sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of dry tropical forests of India. The sensitivity test reveals that forest NPP is highly sensitive to the following ecophysiological parameters: canopy light extinction coefficient (k), canopy average specific leaf area (SLA), new stem C : new leaf C (SC:LC), maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), all-sided to projected leaf area ratio, and canopy water interception coefficient (Wint). These parameters therefore require greater precision and attention during estimation and observation in field studies.

  13. On the design of wave digital filters with low sensitivity properties.

    NASA Technical Reports Server (NTRS)

    Renner, K.; Gupta, S. C.

    1973-01-01

    The wave digital filter patterned after doubly terminated maximum available power (MAP) networks by means of the Richard's transformation has been shown to have low-coefficient-sensitivity properties. This paper examines the exact nature of the relationship between the wave-digital-filter structure and the MAP networks and how the sensitivity property arises, which permits implementation of the digital structure with a lower coefficient word length than that possible with the conventional structures. The proper design procedure is specified and the nature of the unique complementary outputs is discussed. Finally, an example is considered which illustrates the design, the conversion techniques, and the low sensitivity properties.

  14. [Research on optimization of mathematical model of flow injection-hydride generation-atomic fluorescence spectrometry].

    PubMed

    Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li

    2014-01-01

    Flow injection-hydride generation-atomic fluorescence spectrometry is a widely used method in the health, environmental, geological and metallurgical fields owing to its high sensitivity, wide measurement range and fast analytical speed. However, optimization of this method is difficult because many parameters affect the sensitivity and broadening. Generally, the optimal conditions are sought through repeated experiments. The present paper proposes a mathematical model relating the parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system; the model was shown to be accurate by comparing theoretical simulations with experimental results for an arsanilic acid standard solution. Finally, this paper presents a relation map between the parameters and the sensitivity/broadening coefficients, and concludes that the GLS volume, carrier solution flow rate and sample loop volume are the main factors affecting the sensitivity and broadening coefficients. Optimizing these three factors with this relation map, the relative sensitivity was improved by a factor of 2.9 and the relative broadening was reduced to 0.76 of its original value. This model can provide theoretical guidance for the optimization of the experimental conditions.

  15. Aeroelasticity in Turbomachines. Comparison of Theoretical and Experimental Cascade Results.

    DTIC Science & Technology

    1986-01-01

    [OCR fragments] It should be noted that, in computing the blade surface pressure distribution, only components, and not amplitudes or phase angles, are used. The aerodynamic work done on the system per cycle of oscillation is obtained by combining the aerodynamic work coefficients (equation 12 of the report), so the aerodynamic damping coefficient can easily be computed and plotted. This information is useful to the turbomachine designer.

  16. Turbulence model sensitivity and scour gap effect of unsteady flow around pipe: a CFD study.

    PubMed

    Ali, Abbod; Sharma, R K; Ganesan, P; Akib, Shatirah

    2014-01-01

    A numerical investigation of incompressible, transient flow around a circular pipe has been carried out at five different gap phases. The flow equations, namely the Navier-Stokes and continuity equations, have been solved using the finite volume method. Unsteady horizontal velocity and kinetic-energy square-root profiles are plotted using different turbulence models, and their sensitivity is checked against published experimental results. Flow parameters such as the horizontal velocity under the pipe, pressure coefficient, wall shear stress, drag coefficient, and lift coefficient are studied and presented graphically to investigate the flow behavior around an immovable pipe above a scoured bed.

  17. Influential factors on thermoacoustic efficiency of multilayered graphene film loudspeakers for optimal design

    NASA Astrophysics Data System (ADS)

    Xing, Qianhe; Li, Shuang; Fan, Xueliang; Bian, Anhua; Cao, Shi-Jie; Li, Cheng

    2017-09-01

    Graphene thermoacoustic loudspeakers, composed of a graphene film on a substrate, generate sound with heat. Improving the thermoacoustic efficiency of graphene speakers is a goal of optimal design. In this work, we first modified the existing thermoacoustic (TA) model with respect to small thermal wavelengths, and then built an acoustic platform for model validation. Additionally, sensitivity analyses of the factors influencing thermoacoustic efficiency were performed, including the thickness of multilayered graphene films, the thermal effusivity of substrates, and the characteristics of the ambient gases. Higher sensitivity coefficients correspond to stronger effects on thermoacoustic efficiency. We find that the thickness (5 nm-15 nm) of graphene films plays a trivial role in efficiency, with a sensitivity coefficient of less than 0.02. The substrate thermal effusivity, however, has a significant effect on efficiency, with a sensitivity coefficient of around 1.7. Moreover, substrates with a lower thermal effusivity show better acoustic performance. For the influence of ambient gases, the sensitivity coefficients of density ρg, thermal conductivity κg, and specific heat cp,g are 2.7, 0.98, and 0.8, respectively. Furthermore, large magnitudes of both ρg and κg lead to a higher efficiency, and the sound pressure level generated by graphene films is approximately proportional to the inverse of cp,g. These findings can inform the optimal design of graphene thermoacoustic speakers.

  18. Modified computation of the nozzle damping coefficient in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Liu, Peijin; Wang, Muxin; Yang, Wenjing; Gupta, Vikrant; Guan, Yu; Li, Larry K. B.

    2018-02-01

    In solid rocket motors, the bulk advection of acoustic energy out of the nozzle constitutes a significant source of damping and can thus influence the thermoacoustic stability of the system. In this paper, we propose and test a modified version of a historically accepted method of calculating the nozzle damping coefficient. Building on previous work, we separate the nozzle from the combustor, but compute the acoustic admittance at the nozzle entry using the linearized Euler equations (LEEs) rather than with short nozzle theory. We compute the combustor's acoustic modes also with the LEEs, taking the nozzle admittance as the boundary condition at the combustor exit while accounting for the mean flow field in the combustor using an analytical solution to Taylor-Culick flow. We then compute the nozzle damping coefficient via a balance of the unsteady energy flux through the nozzle. Compared with established methods, the proposed method offers competitive accuracy at reduced computational costs, helping to improve predictions of thermoacoustic instability in solid rocket motors.

  19. How to Test the SME with Space Missions?

    NASA Technical Reports Server (NTRS)

    Hees, A.; Lamine, B.; Le Poncin-Lafitte, C.; Wolf, P.

    2013-01-01

    In this communication, we focus on possibilities to constrain SME coefficients using Cassini and Messenger data. We present simulations of radio science observables within the framework of the SME, identify the linear combinations of SME coefficients the observations depend on and determine the sensitivity of these measurements to the SME coefficients. We show that these datasets are very powerful for constraining SME coefficients.

  20. The NASA Langley laminar-flow-control experiment on a swept, supercritical airfoil: Suction coefficient analysis

    NASA Technical Reports Server (NTRS)

    Brooks, Cuyler W., Jr.; Harris, Charles D.; Harvey, William D.

    1991-01-01

    A swept supercritical wing incorporating laminar flow control at transonic flow conditions was designed and tested. The definition of an experimental suction coefficient and a derivation of the compressible and incompressible formulas for computing the coefficient from measurable quantities are presented. The suction flow coefficient in the highest-velocity nozzles is shown to be overpredicted by as much as 12 percent when an incompressible formula is used. However, the effect of this overprediction on the computed value of suction drag, when some of the suction nozzles were operating in the compressible flow regime, is evaluated and found to be at most 6 percent at design conditions.

  1. Design and experimental verification of a photoacoustic flow sensor using computational fluid dynamics.

    PubMed

    Lassen, Mikael; Balslev-Harder, David; Brusch, Anders; Pelevic, Nikola; Persijn, Stefan; Petersen, Jan C

    2018-02-01

    A photoacoustic (PA) sensor for fast and real-time gas sensing is demonstrated. The PA sensor is a stand-alone system controlled by a field-programmable gate array. The PA cell has been designed for flow noise immunity using computational fluid dynamics (CFD) analysis. The aim of the CFD analysis was to investigate and minimize the influence of the gas distribution and flow noise on the PA signal. PA measurements were conducted at different flow rates by exciting molecular C-H stretch vibrational bands of hexane (C6H14) and decane (C10H22) molecules in clean air at 2950 cm^-1 (3.38 μm) with a custom-made mid-infrared interband cascade laser. We observe a (1σ, standard deviation) sensitivity of 0.4±0.1 ppb (nmol/mol) for hexane in clean air at flow rates up to 1.7 L/min, corresponding to a normalized noise equivalent absorption coefficient of 2.5×10^-9 W cm^-1 Hz^-1/2, demonstrating high sensitivity and fast real-time gas analysis. An Allan deviation analysis for decane shows that the detection limit at the optimum integration time is 0.25 ppbV (nmol/mol).
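
    The Allan deviation analysis mentioned at the end is a standard computation: sigma^2(tau) = 0.5 * <(ybar_{i+1} - ybar_i)^2> over averaging bins of length tau. A minimal sketch on synthetic white noise, for which sigma(tau) falls as 1/sqrt(tau); no connection to the instrument's actual data is implied:

```python
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of a time series y sampled every dt
    seconds: sigma^2(tau) = 0.5 * mean((ybar_{i+1} - ybar_i)^2) over bins
    of length tau."""
    out = []
    for tau in taus:
        m = int(tau / dt)                    # samples per bin
        nbins = len(y) // m
        means = y[: nbins * m].reshape(nbins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

rng = np.random.default_rng(6)
y = rng.normal(0.0, 1.0, 100_000)            # white-noise-dominated signal
for tau, ad in zip([1, 10, 100], allan_deviation(y, [1, 10, 100])):
    print(f"tau = {tau:4d} s, sigma = {ad:.4f}")
```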

  2. Prostate multimodality image registration based on B-splines and quadrature local energy.

    PubMed

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. TRUS images do not provide proper spatial localization of malignant tissues, due to the poor sensitivity of TRUS for visualizing early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early-stage malignancy, and therefore a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from the texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures were visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 mm and 4.43 ± 2.77 mm, respectively. Our method shows a statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18-times improvement over thin-plate spline registration, which gives an average TRE of 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method add to the feasibility of its use during TRUS-guided biopsy.

  3. Assessment of CFD Estimation of Aerodynamic Characteristics of Basic Reusable Rocket Configurations

    NASA Astrophysics Data System (ADS)

    Fujimoto, Keiichiro; Fujii, Kozo

    Flow-fields around basic SSTO-rocket configurations are numerically simulated by Reynolds-averaged Navier-Stokes (RANS) computations. Simulations of the Apollo-like configuration are first carried out, the results are compared with NASA experiments, and the prediction ability of the RANS simulation is discussed. The angle of attack of the freestream ranges from 0° to 180° and the freestream Mach number ranges from 0.7 to 2.0. Computed aerodynamic coefficients for the Apollo-like configuration agree well with the experiments under a wide range of flow conditions. Flow simulations around the slender Apollo-type configuration are carried out next and the results are compared with the experiments. The computed aerodynamic coefficients again agree well with the experiments. The flow-fields are dominated by three-dimensional, massively separated flow, which must be captured for accurate aerodynamic prediction. Grid refinement effects on the computed aerodynamic coefficients are investigated comprehensively.

  4. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration, and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
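
    A generic sketch of how first-order Sobol' indices are estimated once a cheap surrogate exists, using the standard Monte Carlo pick-freeze estimator. The linear "surrogate" below is a made-up stand-in (variable 0 playing the role of remediation duration), not a trained Kriging or RBFANN model:

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate(x):
    """Made-up stand-in for a trained surrogate: remediation efficiency as a
    function of six design variables scaled to [0, 1]."""
    return 3.0 * x[:, 0] + 0.8 * x[:, 1] + 0.8 * x[:, 2] + 0.1 * x[:, 3:].sum(axis=1)

def first_order_sobol(f, d, n=100_000):
    """Pick-freeze Monte Carlo estimate of the first-order indices S_i."""
    A, B = rng.random((n, d)), rng.random((n, d))
    yA = f(A)
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # shares only variable i with sample A
        yABi = f(ABi)
        S[i] = (np.mean(yA * yABi) - yA.mean() * yABi.mean()) / yA.var()
    return S

print(np.round(first_order_sobol(surrogate, d=6), 3))
# Variable 0 dominates, variables 1 and 2 follow, the rest are negligible.
```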

  5. Sum over Histories Representation for Kinetic Sensitivity Analysis: How Chemical Pathways Change When Reaction Rate Coefficients Are Varied

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Shirong; Davis, Michael J.; Skodje, Rex T.

    2015-11-12

    The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate-limiting steps along important chemical pathways or with reactions that control the branching of reactive flux.
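
    For contrast with the pathway-resolved analysis described above, a brute-force kinetic sensitivity can be obtained by finite differences on an integrated mechanism. The sketch below does this for a toy A -> B -> C system; the mechanism, rate values and target function are made up for illustration and are not the paper's method.

```python
# Sketch: finite-difference sensitivity of a species concentration to a rate
# coefficient for a toy A -> B -> C mechanism. Purely illustrative -- the
# sum-over-histories decomposition is a different, pathway-resolved analysis.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def c_final(k1, k2=0.5, t_end=5.0):
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], args=(k1, k2))
    return sol.y[2, -1]          # [C] at t_end, the target function

k1, eps = 1.0, 1e-4
# normalized (logarithmic) sensitivity d ln[C] / d ln k1
s = (np.log(c_final(k1 * (1 + eps))) - np.log(c_final(k1))) / np.log(1 + eps)
print(f"d ln[C]/d ln k1 ≈ {s:.4f}")
```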

  6. Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming

    2017-07-01

    The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) utilized in the fields of aeronautics, astronautics and underground exploration. In a high-g environment, the accuracy of gyros is degraded. Therefore, calibration and compensation of the gyro G-sensitivity coefficients are essential when the INS operates in a high-g environment. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration while keeping the angular rate close to zero; however, its performance is seriously restricted by angular perturbations in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro G-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate for spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be estimated to 99% of the true value and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated even when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests demonstrate that the attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it reduces the accuracy requirements for the centrifuge and the environment.
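
    The harmonic analysis step can be illustrated by a least-squares fit of a known-frequency sinusoid, which recovers amplitude and phase from noisy output. This is a generic sketch under assumed signal parameters, not the paper's calibration model.

```python
# Sketch of harmonic analysis by linear least squares: recover the amplitude
# and phase of a known-frequency component from a noisy signal. The signal
# model and numbers are hypothetical.
import numpy as np

fs, f0 = 1000.0, 5.0                     # sample rate and spin rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
signal = 0.8 * np.cos(2 * np.pi * f0 * t - 0.6) + 0.05 * rng.standard_normal(t.size)

# y ≈ a cos(wt) + b sin(wt) + c  -> solve for (a, b, c) in one lstsq call
X = np.column_stack([np.cos(2 * np.pi * f0 * t),
                     np.sin(2 * np.pi * f0 * t),
                     np.ones_like(t)])
a, b, c = np.linalg.lstsq(X, signal, rcond=None)[0]
print(f"amplitude = {np.hypot(a, b):.3f}, phase = {np.arctan2(b, a):.3f} rad")
```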

  7. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    USGS Publications Warehouse

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions at high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and is extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)

  8. Symbolic computation of recurrence equations for the Chebyshev series solution of linear ODE's. [ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Geddes, K. O.

    1977-01-01

    If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.
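
    A numerical counterpart of the integration step is the classical Chebyshev recurrence b_n = (a_{n-1} - a_{n+1})/(2n) relating the coefficients of a series to those of its antiderivative. The sketch below implements it and checks it against numpy's chebint; it illustrates only this standard relation, not the paper's general symbolic recurrence.

```python
# Sketch: term-by-term integration of a Chebyshev series via the recurrence
# b_n = (a_{n-1} - a_{n+1}) / (2n), n >= 2, with a special case at n = 1,
# verified against numpy's built-in antiderivative.
import numpy as np
from numpy.polynomial import chebyshev as C

a = np.array([0.0, 1.0, 0.5, 0.25, 0.1])    # coefficients of sum a_n T_n(x)

def chebyshev_integrate(a):
    """Integrate a Chebyshev series term by term; returns b with b_0 = 0."""
    a = np.concatenate([a, [0.0, 0.0]])      # pad so a_{n+1} exists at the tail
    b = np.zeros(len(a) - 1)
    b[1] = a[0] - a[2] / 2.0                 # special case: int T_0 = T_1
    for n in range(2, len(b)):
        b[n] = (a[n - 1] - a[n + 1]) / (2.0 * n)
    return b

ours = chebyshev_integrate(a)
ref = C.chebint(a)                           # numpy's antiderivative coefficients
print(np.allclose(ours[1:], ref[1:len(ours)]))  # agree up to the constant term
```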

  9. Response of selected binomial coefficients to varying degrees of matrix sparseness and to matrices with known data interrelationships

    USGS Publications Warehouse

    Archer, A.W.; Maples, C.G.

    1989-01-01

    Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 × 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired. © 1989 International Association for Mathematical Geology.
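
    For concreteness, the sketch below computes four of the coefficients discussed above from the 2 × 2 contingency counts of two presence/absence rows (a = joint presences, b and c = mismatches, d = joint absences). The formulas are the standard ones; the example rows are made up.

```python
# Sketch: binary (presence/absence) similarity coefficients from the 2x2
# contingency counts. Note only Russell-Rao and Baroni-Urbani & Buser use
# the joint-absence cell d, the "sparseness" cell discussed above.
import numpy as np

def contingency(x, y):
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    a = np.sum(x & y)      # present in both
    b = np.sum(x & ~y)     # present in x only
    c = np.sum(~x & y)     # present in y only
    d = np.sum(~x & ~y)    # absent in both
    return a, b, c, d

def coefficients(x, y):
    a, b, c, d = contingency(x, y)
    return {
        "jaccard":       a / (a + b + c),
        "dice":          2 * a / (2 * a + b + c),
        "russell_rao":   a / (a + b + c + d),
        "baroni_urbani": (a + np.sqrt(a * d)) / (a + b + c + np.sqrt(a * d)),
    }

row1 = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
row2 = [1, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print(coefficients(row1, row2))
```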

  10. A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows

    USGS Publications Warehouse

    Uncles, R.J.; Peterson, D.H.

    1995-01-01

    A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.

  11. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The research recognizes that such design optimization problems are computationally expensive and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to begin development of an MSO methodology that is portable to a wide variety of hardware platforms while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware as well as of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
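
    The speedup and efficiency figures quoted above follow from the usual definitions, sketched below with illustrative timings.

```python
# Sketch: parallel speedup and efficiency. Given serial time T1 and parallel
# time Tp on p processors: speedup = T1/Tp, efficiency = speedup/p.
def speedup(t1, tp):
    return t1 / tp

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p

# e.g. a speedup of ~19 on 20 workstations corresponds to ~95% efficiency
print(efficiency(100.0, 100.0 / 19.0, 20))   # ≈ 0.95
```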

  12. Constant-head pumping test of a multiaquifer well to determine characteristics of individual aquifers

    USGS Publications Warehouse

    Bennett, Gordon D.; Patten, E.P.

    1962-01-01

    This report describes the theory and field procedures for determining the transmissibility and storage coefficients and the original hydrostatic head of each aquifer penetrated by a multiaquifer well. The procedure involves pumping the well in such a manner that the drawdown of water level is constant while the discharges of the different aquifers are measured by means of borehole flowmeters. The theory is developed by analogy to the heat-flow problem solved by Smith. The internal discharge between aquifers after the well is completed is analyzed as the first step. Pumping at constant drawdown constitutes the second step. Transmissibility and storage coefficients are determined by a method described by Jacob and Lohman, after the original internal discharge to or from the aquifer has been compensated for in the calculations. The original hydrostatic head of each aquifer is then determined by resubstituting the transmissibility and storage coefficients into the first step of the analysis. The method was tested on a well in Chester County, Pa., but the results were not entirely satisfactory, owing to the lack of sufficiently accurate methods of flow measurement and, probably, to the effects of entrance losses in the well. The determinations of the transmissibility coefficient and static head can be accepted as having order-of-magnitude significance, but the determinations of the storage coefficient, which is highly sensitive to experimental error, must be rejected. It is felt that better results may be achieved in the future, as more reliable devices for metering the flow become available and as more is learned concerning the nature of entrance losses. If accurate data can be obtained, recently developed techniques of digital or analog computation may permit determination of the response of each aquifer in the well to any form of pumping.

  13. Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo

    High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, which is a high-performance computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference among them, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of the vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that consider interference in the wind farm should be used in wind farm parameterization for larger-scale models to better describe sub-grid-scale turbulent processes.

  14. Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer

    DOE PAGES

    Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo; ...

    2015-12-28

    High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, which is a high-performance computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference among them, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of the vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that consider interference in the wind farm should be used in wind farm parameterization for larger-scale models to better describe sub-grid-scale turbulent processes.

  15. A Large Sample Procedure for Testing Coefficients of Ordinal Association: Goodman and Kruskal's Gamma and Somers' d_ba and d_ab

    ERIC Educational Resources Information Center

    Berry, Kenneth J.; And Others

    1977-01-01

    A FORTRAN program, GAMMA, computes Goodman and Kruskal's coefficient of ordinal association, gamma, and Somers' coefficients d_ba and d_ab. The program also provides the associated standard errors, standard scores, and probability values. (Author/JKS)
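
    The statistic GAMMA computes can be sketched directly from its definition, gamma = (C - D)/(C + D) over concordant and discordant pairs. The snippet below (Python, not the FORTRAN program) uses made-up ordinal data and omits the standard errors.

```python
# Sketch: Goodman and Kruskal's gamma from concordant (C) and discordant (D)
# pairs of two ordinal variables; ties are ignored by gamma.
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

x = [1, 1, 2, 2, 3, 3, 4]
y = [1, 2, 1, 3, 3, 4, 4]
print(f"gamma = {goodman_kruskal_gamma(x, y):.3f}")
```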

  16. Concordance of chart and billing data with direct observation in dental practice.

    PubMed

    Demko, Catherine A; Victoroff, Kristin Zakariasen; Wotman, Stephen

    2008-10-01

    The commonly used methods of chart review, billing data summaries and practitioner self-reporting have not been examined for their ability to validly and reliably represent time use and service delivery in routine dental practice. A more thorough investigation of these data sources would provide insight into the appropriateness of each approach for measuring various clinical behaviors. The aim of this study was to assess the validity of commonly used methods such as dental chart review, billing data, or practitioner self-report compared with a 'gold standard' of information derived from direct observation of routine dental visits. A team of trained dental hygienists directly observed 3751 patient visits in 120 dental practices and recorded the behaviors and procedures performed by dentists and hygienists during patient contact time. Following each visit, charts and billing records were reviewed for the performed and billed procedures. Dental providers characterized their frequency of preventive service delivery through self-administered surveys. We standardized the observation and abstraction methods to obtain optimal measures from each of the multiple data sources. Multi-rater kappa coefficients were computed to monitor standardization, while sensitivity, specificity, and kappa coefficients were calculated to compare the various data sources with direct observation. Chart audits were more sensitive than billing data for all observed procedures and demonstrated higher agreement with directly observed data. Chart and billing records were not sensitive for several prevention-related tasks (oral cancer screening and oral hygiene instruction). Provider self-reports of preventive behaviors were always over-estimated compared with direct observation. Inter-method reliability kappa coefficients for 13 procedures ranged from 0.197 to 0.952. These concordance findings suggest that strengths and weaknesses of data collection sources should be considered when investigating delivery of dental services especially when using practitioner survey data. Future investigations can more fully rely on charted information rather than billing data and provider self-report for most dental procedures, but nonbillable procedures and most counseling interactions will not be captured with routine charting and billing practices.
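
    A minimal sketch of the agreement statistics used in this comparison, computed from a 2 × 2 table of a data source (e.g. a chart audit) against direct observation as the gold standard; the counts below are invented for illustration.

```python
# Sketch: sensitivity, specificity and Cohen's kappa from a 2x2 table
# (tp = recorded and observed, fp = recorded only, fn = observed only,
#  tn = in neither source).
def agreement_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                       # recorded when observed
    spec = tn / (tn + fp)                       # absent when not observed
    po = (tp + tn) / n                          # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

sens, spec, kappa = agreement_stats(tp=80, fp=5, fn=20, tn=95)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")
```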

  17. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE PAGES

    Hiller, Mauritius; Dewji, Shaheen Azim

    2017-02-16

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the adult male phantoms. With both the male and female data computed, the effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom, and ratios of organ dose rates for the voxel and stylized phantoms, are provided in the energy range from 30 keV to 5 MeV. An analysis of the contribution of the organs to effective dose is also provided. Lastly, the comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and within 5% between 200 and 5000 keV.

  18. The influence of the atmosphere on geoid and potential coefficient determinations from gravity data

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Rapp, R. H.

    1976-01-01

    For the precise computation of geoid undulations the effect of the attraction of the atmosphere on the solution of the basic boundary value problem of gravimetric geodesy must be considered. This paper extends the theory of Moritz for deriving an atmospheric correction to the case when the undulations are computed by combining anomalies in a cap surrounding the computation point with information derived from potential coefficients. The correction term is a function of the cap size and the topography within the cap. It reaches a value of 3.0 m for a cap size of 30 deg, variations on the decimeter level being caused by variations in the topography. The effect of the atmospheric correction terms on potential coefficients is found to be small, reaching a maximum of 0.0055 millionths at n = 2, m = 2 when terrestrial gravity data are considered. The magnitude of this correction indicates that in future potential coefficient determination from gravity data the atmospheric correction should be made to such data.

  19. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. Significant differences sometimes appear in calculations for individual bands, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. Ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the choice of method. A simpler method is developed for the purpose of reducing the computation time and the computer memory needed for storing the coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees to within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.

  20. Computational analysis of unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Abudarag, Sakhr; Yagoub, Rashid; Elfatih, Hassan; Filipovic, Zoran

    2017-01-01

    A computational analysis has been performed to verify the aerodynamic properties of an Unmanned Aerial Vehicle (UAV). The UAV-SUST was designed and fabricated at the Department of Aeronautical Engineering at Sudan University of Science and Technology in order to meet the specifications required for surveillance and reconnaissance missions. It is classified as a medium-range and medium-endurance UAV. A commercial CFD solver is used to simulate the steady and unsteady aerodynamic characteristics of the entire UAV. In addition to the Lift Coefficient (CL), Drag Coefficient (CD), Pitching Moment Coefficient (CM) and Yawing Moment Coefficient (CN), the pressure and velocity contours are illustrated. The aerodynamic parameters show very good agreement with the design considerations at angles of attack ranging from zero to 26 degrees. Moreover, visualization of the velocity field and static-pressure contours indicates satisfactory agreement with the proposed design. Turbulence is predicted using the k-ω SST turbulence model within the computational fluid dynamics code.

  1. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius; Dewji, Shaheen Azim

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the adult male phantoms. With both the male and female data computed, the effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom, and ratios of organ dose rates for the voxel and stylized phantoms, are provided in the energy range from 30 keV to 5 MeV. An analysis of the contribution of the organs to effective dose is also provided. Lastly, the comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and within 5% between 200 and 5000 keV.

  2. Random walk numerical simulation for hopping transport at finite carrier concentrations: diffusion coefficient and transport energy concept.

    PubMed

    Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan

    2009-11-28

    The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit in the system dynamics. To establish the origin of this transport level we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms in which backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimate of the effective transport level energy. The analysis of this concept in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitized solar cells is thoroughly discussed.
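
    A toy version of the RWNS idea is sketched below: a kinetic Monte Carlo random walk on a 1D ring with Miller-Abrahams rates and an exponential density of states, from which a crude single-trajectory diffusion coefficient is estimated. The dimensionality, parameters and units are made up; the paper's simulations are far more complete.

```python
# Sketch: 1D kinetic Monte Carlo hopping walk with Miller-Abrahams rates on
# an exponential density of states; estimates D from a single trajectory.
import numpy as np

rng = np.random.default_rng(0)
L, kT, E0 = 2000, 0.025, 0.10          # ring of sites, kT and DOS width (eV)
E = -E0 * np.log(rng.random(L))        # site energies from an exponential DOS

def ma_rate(dE):
    """Miller-Abrahams factor: uphill hops Boltzmann-suppressed, downhill ~1."""
    return np.exp(-dE / kT) if dE > 0 else 1.0

pos, x, t = 0, 0, 0.0                  # site index on ring, unwrapped position
for _ in range(200_000):
    kl = ma_rate(E[(pos - 1) % L] - E[pos])
    kr = ma_rate(E[(pos + 1) % L] - E[pos])
    t += rng.exponential(1.0 / (kl + kr))            # KMC waiting time
    step = 1 if rng.random() < kr / (kl + kr) else -1
    pos = (pos + step) % L
    x += step

print(f"D ≈ {x * x / (2.0 * t):.3e} lattice-units^2 / time (single walker)")
```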

  3. Application of design sensitivity analysis for greater improvement on machine structural dynamics

    NASA Technical Reports Server (NTRS)

    Yoshimura, Masataka

    1987-01-01

    Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the utility of the proposed methods.

  4. Parameter estimation supplement to the Mission Analysis Evaluation and Space Trajectory Operations program (MAESTRO)

    NASA Technical Reports Server (NTRS)

    Bjorkman, W. S.; Uphoff, C. W.

    1973-01-01

    This Parameter Estimation Supplement describes the PEST computer program and gives instructions for its use in the determination of lunar gravitational field coefficients. PEST was developed for use in the RAE-B lunar orbiting mission as a means of lunar field recovery. The observations processed by PEST are short-arc osculating orbital elements. These observations are the end product of an orbit determination process carried out with another program. PEST's end product is a set of harmonic coefficients to be used in long-term prediction of the lunar orbit. PEST employs some novel techniques in its estimation process, notably a square batch estimator and linear variational equations in the orbital elements (both osculating and mean) for measurement sensitivities. The program's capabilities are described, and operating instructions and input/output examples are given. PEST utilizes MAESTRO routines for its trajectory propagation. PEST's program structure and the subroutines that are not common to MAESTRO are described. Some of the theoretical background for the estimation process and a derivation of the linear variational equations for the Method 7 elements are included.

  5. Inverse algorithms for 2D shallow water equations in presence of wet dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows the data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in the modelling of shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with the corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.

  6. Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.

    PubMed

    Mini, M G; Devassia, V P; Thomas, Tessamma

    2004-12-01

    Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation and evaluates the performance of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. The detection is achieved from the parent-child relationship between the zero-crossings [Marr-Hildreth (M-H) detector] /local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray level thresholding. Both detectors produce 95% detection sensitivity, even though there are more false positives for the M-H detector. The M-H detector preserves the shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.

  7. Turbulence Model Sensitivity and Scour Gap Effect of Unsteady Flow around Pipe: A CFD Study

    PubMed Central

    Ali, Abbod; Sharma, R. K.; Ganesan, P.

    2014-01-01

    A numerical investigation of incompressible and transient flow around a circular pipe has been carried out at five different gap phases. Flow equations, namely the Navier-Stokes and continuity equations, have been solved using the finite volume method. Unsteady horizontal velocity and kinetic-energy square-root profiles are plotted using different turbulence models and their sensitivity is checked against published experimental results. Flow parameters such as the horizontal velocity under the pipe, pressure coefficient, wall shear stress, drag coefficient, and lift coefficient are studied and presented graphically to investigate the flow behavior around an immovable pipe and a scoured bed. PMID:25136666

  8. Coupling coefficients for tensor product representations of quantum SU(2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groenevelt, Wolter, E-mail: w.g.m.groenevelt@tudelft.nl

    2014-10-15

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions, and we compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.

  9. Comparison of surrogate indices for insulin sensitivity with parameters of the intravenous glucose tolerance test in early lactation dairy cattle.

    PubMed

    Alves-Nores, V; Castillo, C; Hernandez, J; Abuelo, A

    2017-10-01

    The aim of this study was to investigate the correlation between different surrogate indices and parameters of the intravenous glucose tolerance test (IVGTT) in dairy cows at the start of their lactation. Ten dairy cows underwent an IVGTT on Days 3 to 7 after calving. Areas under the curve during the 90 min after infusion, peak and nadir concentrations, elimination rates, and times to reach half-maximal and basal concentrations for glucose, insulin, nonesterified fatty acids, and β-hydroxybutyrate were calculated. Surrogate indices were computed using the average of the IVGTT basal samples, and their correlation with the IVGTT parameters was studied through Spearman's rank test. No statistically significant or strong correlation coefficients (P > 0.05; |ρ| < 0.50) were observed between the insulin sensitivity measures derived from the IVGTT and any of the surrogate indices. Therefore, these results support that the assessment of insulin sensitivity in early lactation cattle cannot rely on the calculation of surrogate indices from a single blood sample, and the more laborious tests (i.e., the hyperinsulinemic euglycemic clamp test or IVGTT) should be employed to predict the sensitivity of the peripheral tissues to insulin accurately.

  10. New limb-darkening coefficients for modeling binary star light curves

    NASA Technical Reports Server (NTRS)

    Van Hamme, W.

    1993-01-01

    We present monochromatic, passband-specific, and bolometric limb-darkening coefficients for a linear as well as for nonlinear logarithmic and square-root limb-darkening laws. These coefficients, including the bolometric ones, are needed when modeling binary star light curves with the latest version of the Wilson-Devinney light curve program. We base our calculations on the most recent ATLAS stellar atmosphere models for solar chemical composition stars with a wide range of effective temperatures and surface gravities. We examine how well the various limb-darkening approximations represent the variation of the emerging specific intensity across a stellar surface as computed according to the model. For binary star light curve modeling purposes, we propose the use of a logarithmic or a square-root law. We design our tables in such a manner that the relative quality of either law with respect to the other can be easily compared. Since the computation of bolometric limb-darkening coefficients first requires monochromatic coefficients, we also offer tables of these coefficients (at 1221 wavelength values between 9.09 nm and 160 micrometers) and tables of passband-specific coefficients for commonly used photometric filters.
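
    The three laws referred to above have simple closed forms for the normalized intensity I(mu)/I(1), sketched below; the coefficient values are placeholders, not values from the tables.

```python
# Sketch: the linear, logarithmic and square-root limb-darkening laws, as
# normalized intensity I(mu)/I(1) with mu = cos(angle from surface normal).
# Coefficients x, y below are illustrative placeholders.
import numpy as np

def linear(mu, x):
    return 1.0 - x * (1.0 - mu)

def logarithmic(mu, x, y):
    return 1.0 - x * (1.0 - mu) - y * mu * np.log(mu)

def square_root(mu, x, y):
    return 1.0 - x * (1.0 - mu) - y * (1.0 - np.sqrt(mu))

mu = np.linspace(0.05, 1.0, 5)
print(linear(mu, 0.6))
print(logarithmic(mu, 0.6, 0.2))
print(square_root(mu, 0.3, 0.4))
```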

  11. Experimental study on the sensitive depth of backwards detected light in turbid media.

    PubMed

    Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping

    2018-05-28

    In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancement have been widely researched. The challenge in improving the performance of these methods is to know well the sensitive depth of the backwards-detected light. Earlier research mainly employed Monte Carlo simulations to describe the light-sensitive depth statistically. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which could not reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range, from which it can be deduced that the sensitive depth is a range rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detection distance. Relationships between the sensitive depth and the optical properties were also investigated: the reduced scattering coefficient affects the central sensitive depth and the range of the sensitive depth more than the absorption coefficient does, so the two cannot simply be added into a single coefficient to describe the sensitive depth. This study provides an efficient method for the investigation of sensitive depth, and may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.

  12. Computation of wind tunnel wall effects for complex models using a low-order panel method

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Harris, Scott H.

    1994-01-01

    A technique for determining wind tunnel wall effects for complex models using the low-order, three dimensional panel method PMARC (Panel Method Ames Research Center) has been developed. Initial validation of the technique was performed using lift-coefficient data in the linear lift range from tests of a large-scale STOVL fighter model in the National Full-Scale Aerodynamics Complex (NFAC) facility. The data from these tests served as an ideal database for validating the technique because the same model was tested in two wind tunnel test sections with widely different dimensions. The lift-coefficient data obtained for the same model configuration in the two test sections were different, indicating a significant influence of the presence of the tunnel walls and mounting hardware on the lift coefficient in at least one of the two test sections. The wind tunnel wall effects were computed using PMARC and then subtracted from the measured data to yield corrected lift-coefficient versus angle-of-attack curves. The corrected lift-coefficient curves from the two wind tunnel test sections matched very well. Detailed pressure distributions computed by PMARC on the wing lower surface helped identify the source of large strut interference effects in one of the wind tunnel test sections. Extension of the technique to analysis of wind tunnel wall effects on the lift coefficient in the nonlinear lift range and on drag coefficient will require the addition of boundary-layer and separated-flow models to PMARC.

  13. Sensitivity analysis of water consumption in an office building

    NASA Astrophysics Data System (ADS)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term field study, a reduction of pressure in the building's water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure are suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation is pointed out, both for the working hours of all days together and separately for days with identical working hours.

  14. Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.

    PubMed

    Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P

    2018-06-01

    We present the results of a local Lorentz invariance (LLI) test performed with the 133Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133Cs hyperfine Zeeman substates to the Lorentz-violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient, another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve the present maximal sensitivities for laboratory tests by up to 13 orders of magnitude and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.

  15. Efficient FFT Algorithm for Psychoacoustic Model of the MPEG-4 AAC

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Seong; Lee, Chang-Joon; Park, Young-Cheol; Youn, Dae-Hee

    This paper proposes an efficient FFT algorithm for the Psycho-Acoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution, and the complexity of computing the MDCT and MDST coefficients is approximately half that of the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.

  16. Consequences of using nonlinear particle trajectories to compute spatial diffusion coefficients. [for cosmic ray propagation in interstellar and interplanetary space

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.

    1977-01-01

    In a study of cosmic ray propagation in interstellar and interplanetary space, a perturbed orbit resonant scattering theory for pitch angle diffusion in a slab model of magnetostatic turbulence is slightly generalized and used to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field. This diffusion coefficient has been useful for describing the solar modulation of the galactic cosmic rays, and for explaining the diffusive phase in solar flares in which the initial anisotropy of the particle distribution decays to isotropy.
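
    For reference, the textbook quasilinear relation between the pitch-angle diffusion coefficient D_{mu mu} and the parallel spatial diffusion coefficient has the form below; this is stated here as general background, and the paper's perturbed-orbit theory concerns the computation of D_{mu mu} itself.

```latex
% kappa_parallel from pitch-angle diffusion (standard quasilinear form),
% with v the particle speed and mu the pitch-angle cosine:
\kappa_{\parallel} = \frac{v^{2}}{8}\int_{-1}^{1}
    \frac{\left(1-\mu^{2}\right)^{2}}{D_{\mu\mu}(\mu)}\, d\mu ,
\qquad \lambda_{\parallel} = \frac{3\kappa_{\parallel}}{v}.
```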

  17. Solid harmonic wavelet scattering for predictions of molecule properties

    NASA Astrophysics Data System (ADS)

    Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis

    2018-06-01

    We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
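
    The regression step can be sketched as a ridge-regularized least-squares fit; the feature matrix below is a random stand-in for the invariant scattering coefficients, and the property values are synthetic.

```python
# Sketch: multilinear (ridge-regularized linear) regression of a molecular
# property on invariant descriptors. `Phi` is a hypothetical stand-in for
# the solid harmonic scattering coefficients of each molecule.
import numpy as np

rng = np.random.default_rng(0)
n_mol, n_feat = 500, 40
Phi = rng.standard_normal((n_mol, n_feat))            # invariant coefficients
w_true = rng.standard_normal(n_feat)
y = Phi @ w_true + 0.01 * rng.standard_normal(n_mol)  # e.g. a target property

lam = 1e-3                                            # ridge regularization
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_feat), Phi.T @ y)
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"train RMSE = {rmse:.4f}")
```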

  18. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space Shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real-time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.

  19. Euler technology assessment for preliminary aircraft design employing OVERFLOW code with multiblock structured-grid method

    NASA Technical Reports Server (NTRS)

    Treiber, David A.; Muilenburg, Dennis A.

    1995-01-01

    The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through the maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through CLmax. Computed forces and moments, as well as surface pressures, match the experiments well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.

  20. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations can reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.

  1. Use of high-speed cinematography and computer generated gait diagrams for the study of equine hindlimb kinematics.

    PubMed

    Kobluk, C N; Schnurr, D; Horney, F D; Sumner-Smith, G; Willoughby, R A; Dekleer, V; Hearn, T C

    1989-01-01

    High-speed cinematography with computer-aided analysis was used to study equine hindlimb kinematics. Eight horses were filmed at the trot or the pace. Filming was done from the side (lateral) and the back (caudal). Parameters measured from the lateral filming included the heights of the tuber coxae and tailhead, protraction and retraction of the hoof, and angular changes of the tarsus and stifle. Abduction and adduction of the limb and tarsal height changes were measured from the caudal filming. The maximum and minimum values plus the standard deviations and coefficients of variation are presented in tabular form. Three gait diagrams were constructed to represent stifle angle versus tarsal angle, metatarsophalangeal height versus protraction-retraction (fetlock height diagram), and tuber coxae and tailhead height versus stride (pelvic height diagram). Application of the technique to the group of horses revealed good repeatability of the gait diagrams within a limb, and the diagrams appeared to be sensitive indicators of left/right asymmetries.

  2. Fluid Analysis and Improved Structure of an ATEG Heat Exchanger Based on Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Tang, Z. B.; Deng, Y. D.; Su, C. Q.; Yuan, X. H.

    2015-06-01

    In this study, a numerical model has been employed to analyze the internal flow field distribution in a heat exchanger applied for an automotive thermoelectric generator based on computational fluid dynamics. The model simulates the influence of factors relevant to the heat exchanger, including the automotive waste heat mass flow velocity, temperature, internal fins, and back pressure. The result is in good agreement with experimental test data. Sensitivity analysis of the inlet parameters shows that increase of the exhaust velocity, compared with the inlet temperature, makes little contribution (0.1 versus 0.19) to the heat transfer but results in a detrimental back pressure increase (0.69 versus 0.21). A configuration equipped with internal fins is proved to offer better thermal performance compared with that without fins. Finally, based on an attempt to improve the internal flow field, a more rational structure is obtained, offering a more homogeneous temperature distribution, higher average heat transfer coefficient, and lower back pressure.

  3. Theoretical Study of Monolayer and Double-Layer Waveguide Love Wave Sensors for Achieving High Sensitivity.

    PubMed

    Li, Shuangming; Wan, Ying; Fan, Chunhai; Su, Yan

    2017-03-22

    Love wave sensors have been widely used for sensing applications. In this work, we introduce the theoretical analysis of the monolayer and double-layer waveguide Love wave sensors. The velocity, particle displacement and energy distribution of Love waves were analyzed. Using the variations of the energy repartition, the sensitivity coefficients of Love wave sensors were calculated. To achieve a higher sensitivity coefficient, a thin gold layer was added as the second waveguide on top of the silicon dioxide (SiO₂) waveguide-based, 36 degree-rotated, Y-cut, X-propagating lithium tantalate (36° YX LiTaO₃) Love wave sensor. The Love wave velocity was significantly reduced by the added gold layer, and the flow of wave energy into the waveguide layer from the substrate was enhanced. By using the double-layer structure, almost a 72-fold enhancement in the sensitivity coefficient was achieved compared to the monolayer structure. Additionally, the thickness of the SiO₂ layer was also reduced with the application of the gold layer, resulting in easier device fabrication. This study allows for the possibility of designing and realizing robust Love wave sensors with high sensitivity and a low limit of detection.

  4. Validation of the French version of the Child Post-Traumatic Stress Reaction Index: psychometric properties in French speaking school-aged children.

    PubMed

    Olliac, Bertrand; Birmes, Philippe; Bui, Eric; Allenou, Charlotte; Brunet, Alain; Claudet, Isabelle; Sales de Gauzy, Jérôme; Grandjean, Hélène; Raynaud, Jean-Philippe

    2014-01-01

    Although the reliable and valid Child Post-Traumatic Stress Reaction Index (CPTS-RI) is a widely used measure of posttraumatic stress disorder (PTSD) symptoms in children, it has not been validated in French-speaking populations. The present study aims to assess the psychometric properties of the CPTS-RI in three samples of French-speaking school children. Data were obtained from three samples. Sample 1 was composed of 106 children (mean (SD) age = 11.7(0.7), 50% females) who were victims of an industrial disaster. Sample 2 was composed of 50 children (mean (SD) age = 10.8(2.6), 44% females) who had received an orthopaedic surgical procedure after an accident. Sample 3 was composed of 106 children (mean (SD) age = 11.7(2.2), 44% females) admitted to an emergency department after a road traffic accident. We tested internal consistency using Cronbach's alpha and examined test-retest reliability using the intraclass correlation coefficient. In order to assess the convergent validity of the French version of the CPTS-RI against the Clinician Administered PTSD Scale-Child and Adolescent (CAPS-CA), the Spearman correlation coefficient was computed. To verify the validity of the cut-off scores, a ROC curve was constructed that evaluated the sensitivity and specificity of each score against diagnosis with the CAPS-CA. We also used principal components analysis with varimax rotation to study the structure of the French version of the CPTS-RI. Cronbach's alpha coefficient was 0.87 for the French version of the CPTS-RI. The two-week test-retest intraclass correlation coefficient (n = 30) was 0.67. The French version of the CPTS-RI was well correlated with the CAPS-CA (r = 0.76, p < 0.001). Taking the CAPS-CA as the diagnostic reference, with a diagnostic cut-off of >24 for the CPTS-RI, the sensitivity and specificity were 100% and 62.6%, respectively. The French version of the CPTS-RI demonstrated a three-factor structure. The CPTS-RI is reliable and valid in French-speaking children.

  5. [Application of the Children's Impact of Event Scale (Chinese Version) on a rapid assessment of posttraumatic stress disorder among children from the Wenchuan earthquake area].

    PubMed

    Zhao, Gao-feng; Zhang, Qiang; Pang, Yan; Ren, Zheng-jia; Peng, Dan; Jiang, Guo-guo; Liu, Shan-ming; Chen, Ying; Geng, Ting; Zhang, Shu-sen; Yang, Yan-chun; Deng, Hong

    2009-11-01

    To explore the reliability and validity of the Children's Impact of Event Scale (Chinese version, CRIES-13) and to determine the value and the optimal cut-off point of the CRIES-13 score in screening for posttraumatic stress disorder (PTSD), so as to provide evidence for PTSD prevention and to identify children at risk in the Wenchuan earthquake areas. A total of 253 children who had experienced the Wenchuan earthquake were recruited through stratified random cluster sampling. The authors examined the CRIES-13's internal consistency, discriminative validity, and the predictive value of the cut-off. PTSD was assessed with the DSM-IV criteria. The area under the curve, sensitivity, specificity, and Youden index were computed based on receiver operating characteristic (ROC) curve analysis, and the optimal cut-off point was determined by the maximum of the Youden index. 20.9% of the subjects were found to have met the DSM-IV criteria for PTSD 7 months after the Wenchuan earthquake. Cronbach's alpha coefficient for the CRIES-13 was 0.903; the mean inter-item correlation coefficients ranged from 0.283 to 0.689; the correlation coefficients of the three factors with the total scale score ranged from 0.836 to 0.868, while the correlation coefficients among the three factors ranged from 0.568 to 0.718. PTSD cases scored much higher than non-PTSD cases. The Youden index reached its maximum value at a total CRIES-13 score of 18, with a sensitivity of 81.1% and a specificity of 76.5%. A consistency check showed no significant difference between the screening result of a CRIES-13 score ≥ 32 and the clinical diagnosis (Kappa = 0.529). The CRIES-13 appeared to be a reliable and valid measure for assessing posttraumatic stress symptoms among children after the earthquake in the Wenchuan area, and it seemed to be a useful self-rating screening instrument for survivors with PTSD symptoms of clinical concern when using a cut-off of 18 on the total score.
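
    The cut-off selection described above reduces to maximizing the Youden index J = sensitivity + specificity - 1 over candidate scores. A plain-numpy sketch with synthetic scores (not the study's data):

    ```python
    # Pick the screening cutoff that maximizes the Youden index.
    import numpy as np

    def youden_optimal_cutoff(scores, diagnosis):
        best_j, best_c = -1.0, None
        for c in np.unique(scores):
            pos = scores >= c
            sens = (pos & diagnosis).sum() / diagnosis.sum()
            spec = (~pos & ~diagnosis).sum() / (~diagnosis).sum()
            if sens + spec - 1.0 > best_j:
                best_j, best_c = sens + spec - 1.0, c
        return best_c, best_j

    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.normal(24, 6, 53), rng.normal(12, 6, 200)])
    diagnosis = np.concatenate([np.ones(53), np.zeros(200)]).astype(bool)
    print(youden_optimal_cutoff(scores, diagnosis))
    ```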

  6. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
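
    The source-cache / coefficient-table / target-cache pattern in the claim can be mimicked in a few lines: a two-dimensional data set is staged, transformed with a precomputed coefficient table, and the result written out. The DCT below is just one arbitrary example of a table-driven transform, not the patent's reconstruction algorithm.

    ```python
    # Toy version of the cache/coefficient-table data flow described above.
    import numpy as np

    def dct_coefficients(n):
        """Coefficient table for an orthonormal DCT-II (one possible transform)."""
        k = np.arange(n)
        table = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        table[0] *= 1.0 / np.sqrt(2.0)
        return table * np.sqrt(2.0 / n)

    source_cache = np.random.default_rng(0).random((8, 8))  # one 2-D data set
    coeff_table = dct_coefficients(8)                       # loaded once, reused
    target_cache = coeff_table @ source_cache @ coeff_table.T  # transform routine
    ```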

  7. Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model

    DOE PAGES

    Barnard, Richard C.; Frank, Martin; Krycki, Kai

    2016-02-09

    In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients, focusing on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using a spherical harmonics expansion (P_N) and a Harten–Lax–van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results for the sensitivities of the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. This in turn gives estimates of the uncertainty in the normalized average deposition depth, which we present.
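
    The finite-difference verification idea is easy to demonstrate on a toy problem: perturb a stopping power uniformly and difference a derived depth quantity. The stopping-power form and the CSDA-style range integral below are illustrative assumptions, not the paper's Fokker–Planck model; for a uniform relative perturbation the relative sensitivity should come out at -1, which serves as a self-check.

    ```python
    # Finite-difference sensitivity of a range-like quantity to stopping power.
    import numpy as np

    def csda_range(E0, stopping_power, n=2000):
        E = np.linspace(1e-3, E0, n)
        return np.trapz(1.0 / stopping_power(E), E)

    S = lambda E: 2.0 + 0.5 * np.sqrt(E)        # hypothetical stopping power
    E0, eps = 10.0, 1e-4
    base = csda_range(E0, S)
    pert = csda_range(E0, lambda E: (1.0 + eps) * S(E))
    rel_sens = (pert - base) / (base * eps)     # (dR/R) per (dS/S); expect -1
    print(f"R = {base:.3f}, relative sensitivity = {rel_sens:.3f}")
    ```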

  8. Prediction of pressure drop in fluid tuned mounts using analytical and computational techniques

    NASA Technical Reports Server (NTRS)

    Lasher, William C.; Khalilollahi, Amir; Mischler, John; Uhric, Tom

    1993-01-01

    A simplified model for predicting pressure drop in fluid tuned isolator mounts was developed. The model is based on an exact solution to the Navier-Stokes equations and was made more general through the use of empirical coefficients. The values of these coefficients were determined by numerical simulation of the flow using the commercial computational fluid dynamics (CFD) package FIDAP.
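
    One way to picture the role of the empirical coefficients is a least-squares calibration of a viscous-plus-inertial pressure-drop law against simulated operating points; the two-term form and the data below are stand-ins, not the actual FIDAP results.

    ```python
    # Fit dp = a*Q + b*Q**2 to (hypothetical) CFD-computed pressure drops.
    import numpy as np

    Q = np.linspace(0.1, 1.0, 8)                    # flow rates (assumed units)
    dp = 3.2 * Q + 1.7 * Q**2                       # stand-in "CFD" data
    dp += np.random.default_rng(2).normal(0.0, 0.02, Q.size)

    A = np.column_stack([Q, Q**2])
    a, b = np.linalg.lstsq(A, dp, rcond=None)[0]    # empirical coefficients
    print(f"a = {a:.3f}, b = {b:.3f}")
    ```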

  9. A Hybrid Approach for CpG Island Detection in the Human Genome.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Da; Chiang, Yi-Cheng; Chuang, Li-Yeh

    2016-01-01

    CpG islands have been demonstrated to influence local chromatin structure and to simplify the regulation of gene activity. However, the accurate and rapid determination of CpG islands in whole DNA sequences remains experimentally and computationally challenging. A novel procedure is proposed to detect CpG islands by combining clustering technology with a sliding-window, particle swarm optimization (PSO)-based method. Clustering technology is used to detect the locations of all possible CpG islands and to preprocess the data, effectively obviating the extensive and unnecessary processing of DNA fragments and thereby improving the efficiency of the sliding-window PSO search. The proposed approach, named ClusterPSO, provides versatile and highly sensitive detection of CpG islands in the human genome. Its detection efficiency was compared with that of eight CpG island detection methods on the human genome in terms of sensitivity, specificity, accuracy, performance coefficient (PC), and correlation coefficient (CC); ClusterPSO showed superior detection ability among all of the tested methods. Moreover, the combination of clustering technology and the PSO method successfully overcomes their respective drawbacks while maintaining their advantages, suggesting that clustering technology can be hybridized with an optimization algorithm to optimize CpG island detection. The prediction accuracy of ClusterPSO was quite high, indicating that the combination of CpGcluster and PSO has several advantages over CpGcluster and PSO alone. In addition, ClusterPSO significantly reduced implementation time.
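
    For context, the classical sliding-window criteria (length >= 200 bp, GC content > 50%, observed/expected CpG ratio > 0.6, i.e. the Gardiner-Garden and Frommer thresholds) can be scripted directly; this is the baseline windowing step only, not ClusterPSO itself.

    ```python
    # Plain sliding-window CpG screen with the classical thresholds.
    def is_cpg_island(seq):
        seq = seq.upper()
        n = len(seq)
        if n < 200:
            return False
        gc = (seq.count("G") + seq.count("C")) / n
        cpg_obs = seq.count("CG")
        cpg_exp = seq.count("C") * seq.count("G") / n
        return gc > 0.5 and cpg_exp > 0 and cpg_obs / cpg_exp > 0.6

    def scan(sequence, window=200, step=1):
        return [i for i in range(0, len(sequence) - window + 1, step)
                if is_cpg_island(sequence[i:i + window])]
    ```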

  10. A Bayesian approach to modelling the impact of hydrodynamic shear stress on biofilm deformation

    PubMed Central

    Wilkinson, Darren J.; Jayathilake, Pahala Gedara; Rushton, Steve P.; Bridgens, Ben; Li, Bowen; Zuliani, Paolo

    2018-01-01

    We investigate the feasibility of using a surrogate-based method to emulate the deformation and detachment behaviour of a biofilm in response to hydrodynamic shear stress. The influence of shear force, growth rate, and viscoelastic parameters on the patterns of growth, structure, and resulting shape of microbial biofilms was examined. We develop a statistical modelling approach to this problem, using a combination of Bayesian Poisson regression and dynamic linear models for the emulation. We observe that the hydrodynamic shear force affects biofilm deformation in line with earlier literature. Sensitivity results also showed that the expected number of shear events, the shear flow, the yield coefficient for heterotrophic bacteria, and the extracellular polymeric substance (EPS) stiffness per unit EPS mass are the four principal mechanisms governing bacterial detachment in this study. The sensitivity of the model parameters is temporally dynamic, emphasising the significance of conducting the sensitivity analysis across multiple time points. The surrogate models are shown to perform well, producing an approximately 480-fold increase in computational efficiency. We conclude that a surrogate-based approach is effective, and that the resulting biofilm structure is determined primarily by a balance between bacterial growth, viscoelastic parameters, and the applied shear stress. PMID:29649240
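
    A frequentist Poisson regression can stand in for the paper's Bayesian emulator to show the basic surrogate idea: map simulator inputs to an event count and read influence off the fitted coefficients. The inputs, rates, and counts below are synthetic.

    ```python
    # Minimal Poisson-regression surrogate on synthetic simulator data.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, size=(200, 3))          # shear, growth rate, EPS stiffness
    rate = np.exp(0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 2])
    y = rng.poisson(rate)                         # detached-cell counts (toy)

    emulator = PoissonRegressor(alpha=1e-4).fit(X, y)
    print(emulator.coef_)  # large |coefficients| flag influential inputs
    ```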

  11. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Error analysis was then used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of a 5-year lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
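
    Treating the quoted lifetime distribution as normal, the reliability figure can be checked in one line:

    ```python
    # P(lifetime > 5 y) for a normal lifetime with mean 7.7 y and s.d. 1.1 y.
    from scipy.stats import norm
    print(f"{1.0 - norm.cdf(5.0, loc=7.7, scale=1.1):.3f}")  # ~0.993
    ```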

  12. A scalable and deformable stylized model of the adult human eye for radiation dose assessment

    NASA Astrophysics Data System (ADS)

    El Basha, Daniel; Furuta, Takuya; Iyer, Siva S. R.; Bolch, Wesley E.

    2018-05-01

    With recent changes in the recommended annual limit on eye lens exposures to ionizing radiation, there is considerable interest in predictive computational dosimetry models of the human eye and its various ocular structures, including the crystalline lens, ciliary body, cornea, retina, optic nerve, and central retinal artery. Computational eye models to date have been constructed as stylized models, high-resolution voxel models, and polygon mesh models. Their common feature, however, is that they are typically constructed of nominal size and of a roughly spherical shape associated with the emmetropic eye. In this study, we present a geometric eye model that is both scalable (allowing for changes in eye size) and deformable (allowing for changes in eye shape), and that is suitable for use in radiation transport studies of ocular exposures and radiation treatments of eye disease. The model allows continuous and variable changes in eye size (axial lengths from 20 to 26 mm) and eye shape (diopters from -12 to +6). As an explanatory example of its use, five models (emmetropic eyes of small, average, and large size, as well as average-size eyes of -12 D and +6 D) were constructed and subjected to normally incident beams of monoenergetic electrons and photons, with resultant energy-dependent dose coefficients presented for both anterior and posterior eye structures. Electron dose coefficients were found to vary with changes to both eye size and shape for the posterior eye structures, while their values for the crystalline lens were found to be sensitive to changes in eye size only. No dependence upon eye size or eye shape was found for photon dose coefficients at energies below 2 MeV. Future applications of the model can include more extensive tabulations of dose coefficients for all ocular structures (not only the lens) as a function of eye size and shape, as well as the assessment of x-ray therapies for ocular disease in patients with non-emmetropic eyes.

  13. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals are central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…
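
    One widely cited central-distribution approach is Feldt's F-based interval for Cronbach's alpha, sketched below with the standard degrees of freedom; this is only one of the several methods the abstract alludes to.

    ```python
    # Feldt-style central-F confidence interval for Cronbach's alpha.
    from scipy.stats import f

    def feldt_ci(alpha_hat, n_subjects, k_items, gamma=0.05):
        df1, df2 = n_subjects - 1, (n_subjects - 1) * (k_items - 1)
        lower = 1 - (1 - alpha_hat) * f.ppf(1 - gamma / 2, df1, df2)
        upper = 1 - (1 - alpha_hat) * f.ppf(gamma / 2, df1, df2)
        return lower, upper

    print(feldt_ci(0.85, n_subjects=120, k_items=10))
    ```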

  14. A computable phenotype for asthma case identification in adult and pediatric patients: External validation in the Chicago Area Patient-Outcomes Research Network (CAPriCORN).

    PubMed

    Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D

    2017-10-13

    Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.
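
    The inter-observer agreement reported above is Cohen's kappa; given two annotation vectors it is a one-liner (the labels below are hypothetical, and scikit-learn's implementation is just one convenient route).

    ```python
    # Cohen's kappa between two chart reviewers (toy annotations).
    from sklearn.metrics import cohen_kappa_score
    rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
    rater_b = [1, 1, 0, 1, 0, 1, 1, 1]
    print(cohen_kappa_score(rater_a, rater_b))
    ```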

  15. ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.

    PubMed

    Kim, Seongho

    2015-11-01

    Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, for higher-order coefficients: the higher-order semi-partial correlation calculation using a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in an R package, ppcor, along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of statistical significance along with the test statistic.
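
    Definitionally, a first-order semi-partial correlation correlates x with the part of y that is orthogonal to the control variable z (removed from y only); ppcor's contribution is a matrix formula that avoids applying this recursively at higher orders. A residual-based sketch on synthetic data:

    ```python
    # First-order semi-partial (part) correlation via regression residuals.
    import numpy as np

    def semipartial(x, y, z):
        Z = np.column_stack([np.ones_like(z), z])
        y_resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # strip z from y only
        return np.corrcoef(x, y_resid)[0, 1]

    rng = np.random.default_rng(4)
    z = rng.normal(size=500)
    y = 0.6 * z + rng.normal(size=500)
    x = 0.4 * y + rng.normal(size=500)
    print(semipartial(x, y, z))
    ```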

  16. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate-model-based optimization, and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. This dissertation focuses on the finite volume (FV) formulation. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE, are investigated. A CFD-based design optimization of a single-element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite-rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas, and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization study is carried out using a geometric mean approach. Following this, sensitivity analyses with the aid of a variance-based non-parametric approach and partial correlation coefficients are conducted using data available from surrogate models of the objectives and the multi-objective optima, to identify the contribution of the design variables to the objective variability and to analyze the variability of the design variables and the objectives. In summary, the present dissertation offers insight into an improved coarse-to-fine grid extrapolation technique for Navier-Stokes computations and also suggests tools for a designer to conduct a design optimization study and related sensitivity analyses for a given design problem.
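
    The RSM step amounts to fitting a low-order polynomial surrogate to scattered objective samples; a two-variable quadratic fit by linear least squares is sketched below on synthetic data (the design variables and objective are placeholders).

    ```python
    # Quadratic response-surface fit to scattered "CFD" samples.
    import numpy as np

    def quad_features(X):
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(40, 2))            # two design variables
    y = 1 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 0] * X[:, 1]
    y += rng.normal(0.0, 0.01, 40)

    beta = np.linalg.lstsq(quad_features(X), y, rcond=None)[0]
    print(beta)  # surrogate coefficients; gradients give local sensitivities
    ```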

  17. Improved differential Ka band dielectrometer based on the wave propagation in a quartz cylinder surrounded by high loss liquid under test

    NASA Astrophysics Data System (ADS)

    Skresanov, Valery N.; Eremenko, Zoya E.; Glamazdin, Vladimir V.; Shubnyi, Alexander I.

    2011-06-01

    The differential dielectrometer was designed to measure small differences in the complex permittivity (CP) of two high-loss liquids at a frequency of 32.82 GHz. The measurements are fully computer-aided, with the exception of filling and draining the liquids in the measurement cells. The time of one measurement cycle does not exceed 3 min. The dielectrometer is easy to operate and can be used under the conditions of scientific and industrial physical-chemical laboratories. The sensitivity to the difference in the phase coefficients of the electromagnetic waves propagated in the measurement cells is better than 0.05%, and that to the attenuation coefficient is of the order of 0.2%. The dielectrometer contains two measurement cells that are dielectric quartz cylinders surrounded by the high-loss liquids. We developed a CP calculation algorithm using the known CP of the reference liquid and the difference in the complex wave propagation coefficients of the cells. The origins of the measurement errors are studied in detail, and recommendations are made to avoid some of them. The dielectrometer can be used for the express identification of wine and must authenticity by means of their CP values. CP measurement results for solutions of some substances that make up wine and must composition are reported. The possibility of using the dielectrometer for the detection of added water in wines or musts is shown.

  18. Device and method for measuring the coefficient of performance of a heat pump

    DOEpatents

    Brantley, V.R.; Miller, D.R.

    1982-05-18

    A method and instrument is provided which allows quick and accurate measurement of the coefficient of performance of an installed electrically powered heat pump, including auxiliary resistance heaters. Temperature-sensitive resistors are placed in the return and supply air ducts to measure the temperature increase of the air across the refrigerant and resistive-heating elements of the system. The voltages across the resistors, which are directly proportional to the respective duct temperatures, are applied to the inputs of a differential amplifier so that its output voltage is proportional to the temperature difference across the unit. A voltage-to-frequency converter connected to the output of the differential amplifier converts the voltage signal to a proportional frequency signal. A digital watt meter is used to measure the power to the unit and produces a signal having a frequency proportional to the input power. A digital logic circuit ratios the temperature difference signal and the electric power input signal in a unique manner to produce a single number which is the coefficient of performance of the unit over the test interval. The digital logic and an in-situ calibration procedure enable the instrument to make these measurements in such a way that the ratio of heat flow to power input is obtained without computations. No specialized knowledge of thermodynamics or electronics is required to operate the instrument.

  19. Device and method for measuring the coefficient of performance of a heat pump

    DOEpatents

    Brantley, Vanston R.; Miller, Donald R.

    1984-01-01

    A method and instrument is provided which allows quick and accurate measurement of the coefficient of performance of an installed electrically powered heat pump including auxiliary resistance heaters. Temperature sensitive resistors are placed in the return and supply air ducts to measure the temperature increase of the air across the refrigerant and resistive heating elements of the system. The voltages across the resistors which are directly proportional to the respective duct temperatures are applied to the inputs of a differential amplifier so that its output voltage is proportional to the temperature difference across the unit. A voltage-to-frequency converter connected to the output of the differential amplifier converts the voltage signal to a proportional frequency signal. A digital watt meter is used to measure the power to the unit and produces a signal having a frequency proportional to the input power. A digital logic circuit ratios the temperature difference signal and the electric power input signal in a unique manner to produce a single number which is the coefficient of performance of the unit over the test interval. The digital logic and an in-situ calibration procedure enables the instrument to make these measurements in such a way that the ratio of heat flow/power input is obtained without computations. No specialized knowledge of thermodynamics or electronics is required to operate the instrument.
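
    Numerically, the instrument's output reduces to the ratio of heat delivered to the air stream (mass flow times specific heat times duct temperature rise) to the electrical input over the test interval. A sketch with assumed readings:

    ```python
    # COP as accumulated heat output over accumulated electrical input.
    import numpy as np

    dT = np.array([9.8, 10.1, 10.0, 9.9])       # K, supply minus return (assumed)
    power = np.array([2.45, 2.50, 2.48, 2.46])  # kW electrical input (assumed)
    m_dot_cp = 0.55                             # kW/K, air flow times c_p (assumed)
    cop = (m_dot_cp * dT).sum() / power.sum()
    print(f"COP ~ {cop:.2f}")
    ```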

  20. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results, and a substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  1. Validity and sensitivity of the longitudinal asymmetry index to detect gait asymmetry using Microsoft Kinect data.

    PubMed

    Auvinet, E; Multon, F; Manning, V; Meunier, J; Cobb, J P

    2017-01-01

    Gait asymmetry information is a key point in disease screening and follow-up. The Constant Relative Phase (CRP) has been used as a within-stride asymmetry index, but it requires noise-free and accurate motion capture, which is difficult to obtain in clinical settings. This study explores a new index, the Longitudinal Asymmetry Index (ILong), which is derived from data captured by a low-cost depth camera (Kinect). ILong is based on depth images averaged over several gait cycles, rather than on derived joint positions or angles. This study aims to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid the occlusion caused by standard front-fitted treadmill consoles, and (4) the number of strides needed to reliably calculate ILong. The gait of 15 subjects was recorded concurrently with a marker-based system (MBS) and Kinect, and asymmetry was artificially reproduced by attaching a 5 cm sole to one foot. CRP computed with Kinect was not reliable. ILong detected this disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard MBS and the low-cost depth camera based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with application to improved patient care through enhanced disease screening, diagnosis, and monitoring. Copyright © 2016. Published by Elsevier B.V.

  2. SU-F-I-45: An Automated Technique to Measure Image Contrast in Clinical CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J; Abadi, E; Meng, B

    Purpose: To develop and validate an automated technique for measuring image contrast in chest computed tomography (CT) exams. Methods: An automated computer algorithm was developed to measure the distribution of Hounsfield units (HUs) inside four major organs: the lungs, liver, aorta, and bones. These organs were first segmented or identified using computer vision and image processing techniques. Regions of interest (ROIs) were automatically placed inside the lungs, liver, and aorta, and histograms of the HUs inside the ROIs were constructed. The mean and standard deviation of each histogram were computed for each CT dataset. Comparison of the mean and standard deviation of the HUs in the different organs provides different contrast values. The ROI for the bones is simply the segmentation mask of the bones. Since the histogram for bones does not follow a Gaussian distribution, the 25th and 75th percentiles were computed instead of the mean. The sensitivity and accuracy of the algorithm were investigated by comparing the automated measurements with manual measurements. Fifteen contrast-enhanced and fifteen non-contrast-enhanced chest CT clinical datasets were examined in the validation procedure. Results: The algorithm successfully measured the histograms of the four organs in both contrast and non-contrast enhanced chest CT exams. The automated measurements were in agreement with manual measurements. The algorithm has sufficient sensitivity, as indicated by the near-unity slope of the automated versus manual measurement plots. Furthermore, the algorithm has sufficient accuracy, as indicated by high coefficient of determination (R²) values ranging from 0.879 to 0.998. Conclusion: Patient-specific image contrast can be measured from clinical datasets. The algorithm can be run on both contrast-enhanced and non-enhanced clinical datasets, and the method can be applied to automatically assess the contrast characteristics of clinical chest CT images and quantify dependencies that may not be captured in phantom data.
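
    Once organ masks exist, the per-organ statistics reduce to a few numpy calls; segmentation itself (the hard part) is omitted here.

    ```python
    # Per-ROI Hounsfield-unit statistics as described in the abstract.
    import numpy as np

    def roi_stats(hu_values):
        """Mean/s.d. for roughly Gaussian ROIs (lungs, liver, aorta)."""
        return float(np.mean(hu_values)), float(np.std(hu_values))

    def bone_stats(hu_values):
        """25th/75th percentiles for the non-Gaussian bone mask."""
        q25, q75 = np.percentile(hu_values, [25, 75])
        return float(q25), float(q75)
    ```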

  3. FUN3D Airload Predictions for the Full-Scale UH-60A Airloads Rotor in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, Elizabeth M.; Biedron, Robert T.

    2013-01-01

    An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids, FUN3D, is used to compute the rotor performance and airloads of the UH-60A Airloads Rotor in the National Full-Scale Aerodynamic Complex (NFAC) 40- by 80-foot Wind Tunnel. The flow solver is loosely coupled to a rotorcraft comprehensive code, CAMRAD-II, to account for trim and aeroelastic deflections. Computations are made for the 1-g level flight speed-sweep test conditions with the airloads rotor installed on the NFAC Large Rotor Test Apparatus (LRTA) and in the 40- by 80-ft wind tunnel to determine the influence of the test stand and wind-tunnel walls on the rotor performance and airloads. Detailed comparisons are made between the results of the CFD/CSD simulations and the wind tunnel measurements. The computed trends in solidity-weighted propulsive force and power coefficient match the experimental trends over the range of advance ratios and are comparable to previously published results. Rotor performance and sectional airloads show little sensitivity to the modeling of the wind-tunnel walls, which indicates that the rotor shaft-angle correction adequately compensates for the wall influence up to an advance ratio of 0.37. Sensitivity of the rotor performance and sectional airloads to the modeling of the rotor with the LRTA body/hub increases with advance ratio. The inclusion of the LRTA in the simulation slightly improves the comparison of rotor propulsive force between the computation and wind tunnel data but does not resolve the difference in the rotor power predictions at mu = 0.37. Despite a more precise knowledge of the rotor trim loads and flight condition, the level of comparison between the computed and measured sectional airloads/pressures at an advance ratio of 0.37 is comparable to the results previously published for the high-speed flight test condition.

  4. The formation and analysis of a 5 deg equal area block terrestrial gravity field

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1972-01-01

    A set of 23,355 1 degree x 1 degree mean free-air anomalies was used to predict a set of 5 degree equal area anomalies and their standard errors. Using the 1 degree data incorporating geophysically predicted values of ACIC, 1283 5 degree blocks were computed; excluding the geophysically predicted anomalies, 1249 blocks were computed. The 1 degree data were also used to compute covariance functions and the equatorial gravity and flattening implied by these data. The predicted anomalies were supplemented by model anomalies to form a complete 1654-block global anomaly field. These data were used in a weighted least squares adjustment to determine potential coefficients to degree 15, and in a summation-type formulation to determine potential coefficients to degree 25. These potential coefficient sets are compared to recent satellite determinations.

  5. MBSSAS: A code for the computation of margules parameters and equilibrium relations in binary solid-solution aqueous-solution systems

    USGS Publications Warehouse

    Glynn, P.D.

    1991-01-01

    The computer code MBSSAS uses two-parameter Margules-type excess-free-energy of mixing equations to calculate thermodynamic equilibrium, pure-phase saturation, and stoichiometric saturation states in binary solid-solution aqueous-solution (SSAS) systems. Lippmann phase diagrams, Roozeboom diagrams, and distribution-coefficient diagrams can be constructed from the output data files, and also can be displayed by MBSSAS (on IBM-PC compatible computers). MBSSAS also will calculate accessory information, such as the location of miscibility gaps, spinodal gaps, critical-mixing points, alyotropic extrema, Henry's law solid-phase activity coefficients, and limiting distribution coefficients. Alternatively, MBSSAS can use such information (instead of the Margules, Guggenheim, or Thompson and Waldbaum excess-free-energy parameters) to calculate the appropriate excess-free-energy of mixing equation for any given SSAS system. © 1991.
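
    The two-parameter Margules form that MBSSAS is built around gives the solid-phase activity coefficients in closed form; a sketch with placeholder, dimensionless (excess free energy over RT) parameters:

    ```python
    # Two-parameter Margules activity coefficients (textbook form).
    import numpy as np

    def margules_gammas(x1, A12, A21):
        x2 = 1.0 - x1
        ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
        ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
        return np.exp(ln_g1), np.exp(ln_g2)

    print(margules_gammas(x1=0.3, A12=1.2, A21=0.8))  # placeholder parameters
    ```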

  6. Downward continuation of the free-air gravity anomalies to the ellipsoid using the gradient solution and terrain correction: An attempt of global numerical computations

    NASA Technical Reports Server (NTRS)

    Wang, Y. M.

    1989-01-01

    The formulas for the determination of the coefficients of the spherical harmonic expansion of the disturbing potential of the earth are defined for data given on a sphere. In order to determine the spherical harmonic coefficients, the gravity anomalies have to be analytically downward continued from the earth's surface to a sphere, or at least to the ellipsoid. The goal is to continue the gravity anomalies from the earth's surface downward to the ellipsoid using recent elevation models. The basic method for the downward continuation is the gradient solution (the g1 term). The terrain correction was also computed because of the role it can play as a correction term when calculating harmonic coefficients from surface gravity data. The fast Fourier transform was applied to the computations.

  7. Bottom Extreme-Ultraviolet-Sensitive Coating for Evaluation of the Absorption Coefficient of Ultrathin Film

    NASA Astrophysics Data System (ADS)

    Hijikata, Hayato; Kozawa, Takahiro; Tagawa, Seiichi; Takei, Satoshi

    2009-06-01

    A bottom extreme-ultraviolet-sensitive coating (BESC) for evaluation of the absorption coefficients of ultrathin films such as extreme ultraviolet (EUV) resists was developed. This coating consists of a polymer, crosslinker, acid generator, and acid-responsive chromic dye and is formed by a conventional spin-coating method. By heating the film after spin-coating, a crosslinking reaction is induced and the coating becomes insoluble. A typical resist solution can be spin-coated on a substrate covered with the coating film. The evaluation of the linear absorption coefficients of polymer films was demonstrated by measuring the EUV absorption of BESC substrates on which various polymers were spin-coated.
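
    Extracting a linear absorption coefficient from such a measurement follows the Beer-Lambert law; with an assumed transmission and film thickness:

    ```python
    # Linear absorption coefficient from a transmission measurement.
    import numpy as np

    def linear_absorption_coefficient(I_transmitted, I_incident, thickness_m):
        return -np.log(I_transmitted / I_incident) / thickness_m  # 1/m

    print(linear_absorption_coefficient(0.82, 1.0, thickness_m=50e-9))
    ```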

  8. Efficient iodine-free dye-sensitized solar cells employing truxene-based organic dyes.

    PubMed

    Zong, Xueping; Liang, Mao; Chen, Tao; Jia, Jiangnan; Wang, Lina; Sun, Zhe; Xue, Song

    2012-07-07

    Two new truxene-based organic sensitizers (M15 and M16) featuring high extinction coefficients were synthesized for dye-sensitized solar cells employing cobalt electrolyte. The M16-sensitized device displays a 7.6% efficiency at an irradiation of AM1.5 full sunlight.

  9. Computational Simulations of Convergent Nozzles for the AIAA 1st Propulsion Aerodynamics Workshop

    NASA Technical Reports Server (NTRS)

    Dippold, Vance F., III

    2014-01-01

    Computational Fluid Dynamics (CFD) simulations were completed for a series of convergent nozzles as part of the American Institute of Aeronautics and Astronautics (AIAA) 1st Propulsion Aerodynamics Workshop. The simulations were performed using the Wind-US flow solver. Discharge and thrust coefficients were computed for four axisymmetric nozzles with nozzle pressure ratios (NPR) ranging from 1.4 to 7.0. The computed discharge coefficients showed excellent agreement with available experimental data; the computed thrust coefficients captured trends observed in the experimental data, but over-predicted the thrust coefficient by 0.25 to 1.0 percent. Sonic lines were computed for cases with NPR >= 2.0 and agreed well with experimental data for NPR >= 2.5. Simulations were also performed for a 25 deg. conic nozzle bifurcated by a flat plate at NPR = 4.0. The jet plume shock structure was compared with and without the splitter plate to the experimental data. The Wind-US simulations predicted the shock structure well, though a lack of grid resolution in the plume reduced the sharpness of the shock waves. Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulations and Detached Eddy Simulations (DES) were performed at NPR = 1.6 for the 25 deg conic nozzle with the splitter plate. The simulations predicted vortex shedding from the trailing edge of the splitter plate. However, the vortices of the URANS and DES solutions appeared to dissipate earlier than observed experimentally. It is believed that a lack of grid resolution in the region of the vortex shedding may have caused the vortices to break down too soon.
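
    The discharge-coefficient bookkeeping behind such comparisons is the measured (or computed) mass flow divided by the ideal choked mass flow of the nozzle. The sketch below uses the standard isentropic choked-flow relation with placeholder conditions; it applies only above the critical pressure ratio (about 1.89 for gamma = 1.4).

    ```python
    # Discharge coefficient Cd = mdot / mdot_ideal for a choked convergent nozzle.
    import numpy as np

    def ideal_choked_mdot(p0, T0, area, gamma=1.4, R=287.0):
        return (p0 * area / np.sqrt(T0) * np.sqrt(gamma / R)
                * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

    mdot = 9.1  # kg/s, stand-in for a computed or measured mass flow
    Cd = mdot / ideal_choked_mdot(p0=4.0e5, T0=300.0, area=0.01)
    print(f"Cd = {Cd:.4f}")
    ```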

  10. Time-resolved absorption and hemoglobin concentration difference maps: a method to retrieve depth-related information on cerebral hemodynamics.

    NASA Astrophysics Data System (ADS)

    Montcel, Bruno; Chabrier, Renée; Poulet, Patrick

    2006-12-01

    Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a near-infrared spectroscopic (NIRS) reconstruction-free method that retrieves depth-related information on absorption variations. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.

  11. Time-resolved absorption and hemoglobin concentration difference maps: a method to retrieve depth-related information on cerebral hemodynamics.

    PubMed

    Montcel, Bruno; Chabrier, Renée; Poulet, Patrick

    2006-12-11

    Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a near-infrared spectroscopic (NIRS) reconstruction-free method that retrieves depth-related information on absorption variations. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.
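
    A minimal sketch of the time-resolved microscopic Beer-Lambert step, assuming the photon pathlength at arrival time t is (c/n)·t and that two time-of-flight histograms (activation versus rest) are available; the refractive index and count values are illustrative.

    ```python
    # Absorption change per time gate from two time-of-flight histograms.
    import numpy as np

    c, n_tissue = 3.0e8, 1.4   # m/s and an assumed tissue refractive index

    def delta_mu_a(tof_s, counts_active, counts_rest):
        path = (c / n_tissue) * tof_s                        # pathlength per gate
        return -np.log(counts_active / counts_rest) / path   # 1/m per gate

    tof = np.array([0.5e-9, 1.0e-9, 2.0e-9])
    active = np.array([980.0, 950.0, 900.0])
    rest = np.array([1000.0, 1000.0, 1000.0])
    print(delta_mu_a(tof, active, rest))
    ```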

  12. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, focused on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
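
    The filter coefficients in question are the patch-similarity weights of non-local means; a deliberately naive single-pixel version (no parallelism, toy parameters) makes the computation explicit.

    ```python
    # Naive non-local means estimate for one pixel of a 2-D image.
    import numpy as np

    def nlm_pixel(img, i, j, patch=1, search=5, h=0.1):
        pad = patch + search
        p = np.pad(img, pad, mode="reflect")
        ci, cj = i + pad, j + pad
        ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
        num = den = 0.0
        for di in range(-search, search + 1):
            for dj in range(-search, search + 1):
                q = p[ci + di - patch:ci + di + patch + 1,
                      cj + dj - patch:cj + dj + patch + 1]
                w = np.exp(-np.mean((ref - q) ** 2) / h**2)  # similarity weight
                num += w * p[ci + di, cj + dj]
                den += w
        return num / den

    img = np.random.default_rng(6).random((16, 16))
    print(nlm_pixel(img, 8, 8))
    ```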

  13. Isolating Curvature Effects in Computing Wall-Bounded Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2001-01-01

    The flow over the zero-pressure-gradient So-Mellor convex curved wall is simulated using the Navier-Stokes equations. An inviscid effective outer wall shape, undocumented in the experiment, is obtained by using an adjoint optimization method with the desired pressure distribution on the inner wall as the cost function. Using this wall shape with a Navier-Stokes method, the abilities of various turbulence models to simulate the effects of curvature without the complicating factor of streamwise pressure gradient can be evaluated. The one-equation Spalart-Allmaras turbulence model overpredicts eddy viscosity, and its boundary layer profiles are too full. A curvature-corrected version of this model improves results, which are sensitive to the choice of a particular constant. An explicit algebraic stress model does a reasonable job predicting this flow field. However, results can be slightly improved by modifying the assumption on anisotropy equilibrium in the model's derivation. The resulting curvature-corrected explicit algebraic stress model possesses no heuristic functions or additional constants. It lowers slightly the computed skin friction coefficient and the turbulent stress levels for this case (in better agreement with experiment), but the effect on computed velocity profiles is very small.

  14. Computations and estimates of rate coefficients for hydrocarbon reactions of interest to the atmospheres of outer solar system

    NASA Technical Reports Server (NTRS)

    Laufer, A. H.; Gardner, E. P.; Kwok, T. L.; Yung, Y. L.

    1983-01-01

    The rate coefficients, including Arrhenius parameters, have been computed for a number of chemical reactions involving hydrocarbon species for which experimental data are not available and which are important in planetary atmospheric models. The techniques used to calculate the kinetic parameters include the Troe and semiempirical bond energy-bond order (BEBO) or bond strength-bond length (BSBL) methods.
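
    The Arrhenius parameters referred to above define k(T) = A exp(-Ea/(R T)); evaluating a rate coefficient is then immediate (the values below are placeholders, not the paper's results).

    ```python
    # Rate coefficient from Arrhenius parameters.
    import numpy as np

    def arrhenius(T, A, Ea):
        R = 8.314  # J/(mol K)
        return A * np.exp(-Ea / (R * T))

    print(arrhenius(T=150.0, A=1e-10, Ea=6.0e3))  # illustrative values only
    ```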

  15. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program OSR (Optimal Subset Regression), which estimates models for rotorcraft body and rotor force and moment coefficients, is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlations between the various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.
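
    A bare-bones version of the subset-selection idea, greedy forward selection of the regressor most correlated with the current residual, is sketched below; OSR's actual selection criteria may differ.

    ```python
    # Greedy forward subset regression on synthetic data.
    import numpy as np

    def forward_subset(X, y, n_terms):
        chosen, resid, beta = [], y - y.mean(), None
        for _ in range(n_terms):
            corrs = [0.0 if j in chosen else abs(np.corrcoef(X[:, j], resid)[0, 1])
                     for j in range(X.shape[1])]
            chosen.append(int(np.argmax(corrs)))
            beta = np.linalg.lstsq(X[:, chosen], y, rcond=None)[0]
            resid = y - X[:, chosen] @ beta
        return chosen, beta

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 6))
    y = 2.0 * X[:, 1] - 3.0 * X[:, 4] + rng.normal(size=300)
    print(forward_subset(X, y, 2))  # should pick columns 4 and 1
    ```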

  16. Computational prediction of ionic liquid 1-octanol/water partition coefficients.

    PubMed

    Kamath, Ganesh; Bhatnagar, Navendu; Baker, Gary A; Baker, Sheila N; Potoff, Jeffrey J

    2012-04-07

    Wet 1-octanol/water partition coefficients (log K_ow) predicted for imidazolium-based ionic liquids using adaptive biasing force molecular dynamics (ABF-MD) simulations are in excellent agreement with experimental values. These encouraging results suggest prospects for this computational tool in the a priori prediction of log K_ow values of ionic liquids broadly, with possible screening implications as well (e.g., prediction of CO₂-philic ionic liquids).
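
    Whatever the sampling method, the partition coefficient follows from the difference in transfer (solvation) free energies between the two phases; a sketch of the conversion, with placeholder free energies and the convention that more negative ΔG means more favorable solvation:

    ```python
    # log K_ow from solvation free energies in the two phases.
    import numpy as np

    def log_kow(dG_water, dG_octanol, T=298.15):
        R = 8.314  # J/(mol K)
        return (dG_water - dG_octanol) / (R * T * np.log(10.0))

    print(log_kow(dG_water=-20e3, dG_octanol=-35e3))  # J/mol, placeholder values
    ```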

  17. Parameters and computer software for the evaluation of mass attenuation and mass energy-absorption coefficients for body tissues and substitutes.

    PubMed

    Okunade, Akintunde A

    2007-07-01

    The mass attenuation and mass energy-absorption coefficients (radiation interaction data), which are widely used in the shielding and dosimetry of X-rays for medical diagnostic and orthovoltage therapeutic procedures, depend strongly on the photon energy and on the elements and percentage by weight of elements in body tissues and substitutes. Significant disparities exist in the values of percentage by weight of elements reported in the literature for body tissues and substitutes of individuals of different ages, genders, and states of health. Often, interested parties need these radiation interaction data for body tissues or substitutes whose elemental compositions or intermediate photon energies are not tabulated in the literature. To provide for the use of more precise values of these radiation interaction data, parameters and computer programs (MUA_T and MUEN_T) are presented for the computation of mass attenuation and energy-absorption coefficients for body tissues and substitutes of arbitrary percentage-by-weight elemental composition and photon energies ranging between 1 keV (or the k-edge) and 400 keV. Results are presented which show that the values of mass attenuation and energy-absorption coefficients obtained from the computer programs are in good agreement with those reported in the literature.
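
    Programs of this kind rest on the elemental mixture rule: the coefficient of a tissue is the weight-fraction-weighted sum of elemental coefficients at the same photon energy. A sketch (the numeric values are made up, not NIST data):

    ```python
    # Mass attenuation coefficient of a mixture by the weight-fraction rule.
    def mixture_mu_rho(weight_fractions, element_mu_rho):
        """Both arguments are dicts keyed by element symbol."""
        return sum(w * element_mu_rho[el] for el, w in weight_fractions.items())

    print(mixture_mu_rho({"H": 0.10, "C": 0.20, "O": 0.70},
                         {"H": 0.335, "C": 0.165, "O": 0.178}))  # cm^2/g, made up
    ```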

  18. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far-field transformation (NTFFT), which is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones, and pyramids. The method remained accurate even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  19. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    PubMed

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of the dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The growth of computational power and the development of numerical techniques allow dose conversion factors to be computed with increasing reliability. The objective of this study was to develop an integrated model and software, based on a self-developed airway deposition code, the authors' own bronchial dosimetry model, and the computational methods accepted by the International Commission on Radiological Protection (ICRP), to calculate dose conversion coefficients for different exposure conditions. The model was tested by applying it to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Patient-specific estimation of spatially variant image noise for a pinhole cardiac SPECT camera.

    PubMed

    Cuddy-Walsh, Sarah G; Wells, R Glenn

    2018-05-01

    New single photon emission computed tomography (SPECT) cameras using fixed pinhole collimation are increasingly popular. Pinhole collimators are known to have variable sensitivity with distance and angle from the pinhole aperture. It follows that pinhole SPECT systems will also have spatially variant sensitivity and hence spatially variant image noise. The objective of this study was to develop and validate a rapid method for analytically estimating a map of the noise magnitude in a reconstructed image using data from a single clinical acquisition. The projected voxel (PV) noise estimation method uses a modified forward projector with attenuation effects to estimate the number of photons detected from each voxel in the field-of-view. We approximate the noise for each voxel as the standard deviation of a Poisson distribution with a mean equal to the number of detected photons. An empirical formula is used to address scaling discrepancies caused by image reconstruction. Calibration coefficients are determined for the PV method by comparing it with noise measured from a nonparametrically bootstrapped set of images of a spherical, uniformly filled Tc-99m water phantom. Validation studies compare PV noise estimates with bootstrapped measured noise for 31 patient images (5 min, 340 MBq, 99mTc-tetrofosmin rest study). Bland-Altman analysis shows R² correlations ≥ 70% between the PV-estimated and -measured image noise. For the 31 patient cardiac images, the PV noise estimate has an average bias of 0.1% compared to bootstrapped noise and a coefficient of variation (CV) ≤ 17%. The bootstrap approach to noise measurement requires 5 h of computation for each image, whereas the PV noise estimate requires only 64 s. In cardiac images, image noise due to attenuation and camera sensitivity varies on average from 4% at the apex to 9% in the basal posterior region of the heart. The standard deviation between 15 healthy patient study images (including physiological variability in the population) ranges from 6% to 16.5% over the length of the heart. The PV method provides a rapid estimate for spatially variant, patient-specific image noise magnitude in a pinhole-collimated dedicated cardiac SPECT camera with a bias of -0.3% and better than 83% precision. © 2018 American Association of Physicists in Medicine.
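
    The statistical core of the PV estimate is that each voxel's noise is modelled as the standard deviation of a Poisson count, so relative noise scales as the inverse square root of the detected counts:

    ```python
    # Relative Poisson noise per voxel from (assumed) detected counts.
    import numpy as np

    detected_counts = np.array([400.0, 2500.0, 10000.0])
    print(1.0 / np.sqrt(detected_counts))  # 5%, 2%, 1%
    ```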

  1. GASP- General Aviation Synthesis Program. Volume 3: Aerodynamics

    NASA Technical Reports Server (NTRS)

    Hague, D.

    1978-01-01

    Aerodynamics calculations are treated in routines which concern forces and moments as they vary with flight conditions and attitude. The subroutines discussed: (1) compute component equivalent flat plate and wetted areas and profile drag; (2) print and plot low and high speed drag polars; (3) determine lift coefficient or angle of attack; (4) determine drag coefficient; (5) determine maximum lift coefficient and drag increment for various flap types and flap settings; and (6) determine required lift coefficient and drag coefficient in cruise flight.
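
    Items (3), (4), and (6) amount to evaluating a drag polar; a textbook parabolic-polar sketch with assumed aircraft parameters:

    ```python
    # Required lift coefficient and parabolic drag polar CD = CD0 + CL^2/(pi e AR).
    import numpy as np

    def required_cl(weight_n, rho, v, wing_area):
        return weight_n / (0.5 * rho * v**2 * wing_area)

    def drag_coefficient(cl, cd0=0.025, e=0.8, aspect_ratio=7.5):
        return cd0 + cl**2 / (np.pi * e * aspect_ratio)

    cl = required_cl(weight_n=10500.0, rho=1.225, v=55.0, wing_area=16.2)
    print(f"CL = {cl:.3f}, CD = {drag_coefficient(cl):.4f}")
    ```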

  2. Eikonalization of conformal blocks

    DOE PAGES

    Fitzpatrick, A. Liam; Kaplan, Jared; Walters, Matthew T.; ...

    2015-09-03

    Classical field configurations such as the Coulomb potential and Schwarzschild solution are built from the t-channel exchange of many light degrees of freedom. We study the CFT analog of this phenomenon, which we term the 'eikonalization' of conformal blocks. We show that when an operator T appears in the OPE Ο(x)Ο(0), then the large spin Fock space states [TT···T]_ℓ also appear in this OPE with a computable coefficient. The sum over the exchange of these Fock space states in a correlator builds the classical 'T field' in the dual AdS description. In some limits the sum of all Fock space exchanges can be represented as the exponential of a single T exchange in the 4-pt correlator of O. Our results should be useful for systematizing 1/ℓ perturbation theory in general CFTs and simplifying the computation of large spin OPE coefficients. As examples, we obtain the leading log ℓ dependence of Fock space conformal block coefficients, and we directly compute the OPE coefficients of the simplest 'triple-trace' operators.

  3. User's Manual for FOMOCO Utilities-Force and Moment Computation Tools for Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Buning, Pieter G.

    1996-01-01

    In the numerical computations of flows around complex configurations, accurate calculations of force and moment coefficients for aerodynamic surfaces are required. When overset grid methods are used, the surfaces on which force and moment coefficients are sought typically consist of a collection of overlapping surface grids. Direct integration of flow quantities on the overlapping grids would result in the overlapped regions being counted more than once. The FOMOCO Utilities is a software package for computing flow coefficients (force, moment, and mass flow rate) on a collection of overset surfaces with accurate accounting of the overlapped zones. FOMOCO Utilities can be used in stand-alone mode or in conjunction with the Chimera overset grid compressible Navier-Stokes flow solver OVERFLOW. The software package consists of two modules corresponding to a two-step procedure: (1) hybrid surface grid generation (MIXSUR module), and (2) flow quantities integration (OVERINT module). Instructions on how to use this software package are described in this user's manual. Equations used in the flow coefficients calculation are given in Appendix A.
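
    The integration step with overlap accounting can be pictured as a weighted panel sum, where weights de-duplicate overlapped surface zones (producing those weights is what the MIXSUR hybrid-grid step does); the function below simply assumes the weights are given.

    ```python
    # Overlap-weighted pressure-force integration over surface panels.
    import numpy as np

    def integrate_force(p, normals, areas, weights):
        """Sum of p_i * A_i * w_i * n_i; w_i de-duplicates overlapped panels."""
        return np.sum((p * areas * weights)[:, None] * normals, axis=0)

    p = np.array([101325.0, 101000.0])
    normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
    areas = np.array([0.01, 0.01])
    weights = np.array([1.0, 0.5])     # second panel half-covered by an overlap
    print(integrate_force(p, normals, areas, weights))
    ```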

  4. Computer aided detection in prostate cancer diagnostics: A promising alternative to biopsy? A retrospective study from 104 lesions with histological ground truth.

    PubMed

    Thon, Anika; Teichgräber, Ulf; Tennstedt-Schenk, Cornelia; Hadjidemetriou, Stathis; Winzler, Sven; Malich, Ansgar; Papageorgiou, Ismini

    2017-01-01

    Prostate cancer (PCa) diagnosis by means of multiparametric magnetic resonance imaging (mpMRI) is a current challenge for the development of computer-aided detection (CAD) tools. An innovative CAD software (Watson Elementary™) was proposed to achieve high sensitivity and specificity, as well as to allege a correlate to Gleason grade. The aim was to assess the performance of Watson Elementary™ in automated PCa diagnosis in our hospital's database of MRI-guided prostate biopsies. The evaluation was retrospective for 104 lesions (47 PCa, 57 benign) from 79 patients (aged 64.61±6.64 years) using 3T T2-weighted imaging, Apparent Diffusion Coefficient (ADC) maps and dynamic contrast enhancement series. Watson Elementary™ utilizes signal intensity, diffusion properties and kinetic profile to compute a proportional Gleason grade predictor, termed the Malignancy Attention Index (MAI). The analysis focused on (i) the CAD sensitivity and specificity to classify suspect lesions and (ii) the MAI correlation with the histopathological ground truth. The software revealed a sensitivity of 46.80% for PCa classification. The specificity for PCa was found to be 75.43% with a positive predictive value of 61.11%, a negative predictive value of 63.23% and a false discovery rate of 38.89%. CAD classified PCa and benign lesions with equal probability (P = 0.06, χ² test). Accordingly, receiver operating characteristic analysis suggests a poor predictive value for MAI, with an area under the curve of 0.65 (P = 0.02), which is not superior to the performance of board-certified observers. Moreover, MAI revealed no significant correlation with Gleason grade (P = 0.60, Pearson's correlation). The tested CAD software for mpMRI analysis was a weak PCa biomarker in this dataset. Targeted prostate biopsy and histology remains the gold standard for prostate cancer diagnosis.

  5. Computer aided detection in prostate cancer diagnostics: A promising alternative to biopsy? A retrospective study from 104 lesions with histological ground truth

    PubMed Central

    Thon, Anika; Teichgräber, Ulf; Tennstedt-Schenk, Cornelia; Hadjidemetriou, Stathis; Winzler, Sven; Malich, Ansgar

    2017-01-01

    Background: Prostate cancer (PCa) diagnosis by means of multiparametric magnetic resonance imaging (mpMRI) is a current challenge for the development of computer-aided detection (CAD) tools. An innovative CAD software (Watson Elementary™) was proposed to achieve high sensitivity and specificity, as well as to allege a correlate to Gleason grade. Aim/Objective: To assess the performance of Watson Elementary™ in automated PCa diagnosis in our hospital's database of MRI-guided prostate biopsies. Methods: The evaluation was retrospective for 104 lesions (47 PCa, 57 benign) from 79 patients (aged 64.61±6.64 years) using 3T T2-weighted imaging, Apparent Diffusion Coefficient (ADC) maps and dynamic contrast enhancement series. Watson Elementary™ utilizes signal intensity, diffusion properties and kinetic profile to compute a proportional Gleason grade predictor, termed the Malignancy Attention Index (MAI). The analysis focused on (i) the CAD sensitivity and specificity to classify suspect lesions and (ii) the MAI correlation with the histopathological ground truth. Results: The software revealed a sensitivity of 46.80% for PCa classification. The specificity for PCa was found to be 75.43% with a positive predictive value of 61.11%, a negative predictive value of 63.23% and a false discovery rate of 38.89%. CAD classified PCa and benign lesions with equal probability (P = 0.06, χ² test). Accordingly, receiver operating characteristic analysis suggests a poor predictive value for MAI, with an area under the curve of 0.65 (P = 0.02), which is not superior to the performance of board-certified observers. Moreover, MAI revealed no significant correlation with Gleason grade (P = 0.60, Pearson's correlation). Conclusion: The tested CAD software for mpMRI analysis was a weak PCa biomarker in this dataset. Targeted prostate biopsy and histology remains the gold standard for prostate cancer diagnosis. PMID:29023572

  6. Radiation effects in silicon and gallium arsenide solar cells using isotropic and normally incident radiation

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Downing, R. G.

    1984-01-01

    Several types of silicon and gallium arsenide solar cells were irradiated with protons with energies between 50 keV and 10 MeV at both normal and isotropic incidence. Damage coefficients for maximum power relative to 10 MeV were derived for these cells for both cases of omni-directional and normal incidence. The damage coefficients for the silicon cells were found to be somewhat lower than those quoted in the Solar Cell Radiation Handbook. These values were used to compute omni-directional damage coefficients suitable for solar cells protected by coverglasses of practical thickness, which in turn were used to compute solar cell degradation in two proton-dominated orbits. In spite of the difference in the low energy proton damage coefficients, the difference between the handbook prediction and the prediction using the newly derived values was negligible. Damage coefficients for GaAs solar cells for short circuit current, open circuit voltage, and maximum power were also computed relative to 10 MeV protons. They were used to predict cell degradation in the same two orbits and in a 5600 nmi orbit. Results show the performance of the GaAs solar cells in these orbits to be superior to that of the Si cells.

  7. FUN3D Analyses in Support of the Second Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Heeg, Jennifer

    2016-01-01

    This paper presents the computational aeroelastic results generated in support of the second Aeroelastic Prediction Workshop for the Benchmark Supercritical Wing (BSCW) configurations and compares them to the experimental data. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at NASA Langley Research Center. The analysis results include aerodynamic coefficients and surface pressures obtained for steady-state, static aeroelastic equilibrium, and unsteady flow due to a pitching wing or flutter prediction. Frequency response functions of the pressure coefficients with respect to the angular displacement are computed and compared with the experimental data. The effects of spatial and temporal convergence on the computational results are examined.

  8. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.

  9. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    ERIC Educational Resources Information Center

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
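    For concreteness, a small Python sketch of the coefficient as commonly defined (a generic illustration, not Osburn's or the authors' code): score every half-split of the items, apply the Spearman-Brown correction to each split-half correlation, and keep the maximum. The enumeration is only feasible for modest numbers of items.

        import numpy as np
        from itertools import combinations

        def max_split_half(scores):  # scores: persons x items
            k = scores.shape[1]
            best = -np.inf
            for half in combinations(range(k), k // 2):
                other = [i for i in range(k) if i not in half]
                a = scores[:, list(half)].sum(axis=1)   # half-test scores
                b = scores[:, other].sum(axis=1)
                r = np.corrcoef(a, b)[0, 1]
                best = max(best, 2 * r / (1 + r))       # Spearman-Brown step-up
            return best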

  10. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  11. Estimating the Diffusion Coefficients of Sugars Using Diffusion Experiments in Agar-Gel and Computer Simulations.

    PubMed

    Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi

    2018-01-01

    The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system of microbes. In order to enhance the utility of the system, it is favorable to have diffusion coefficients of nutrients such as sugars in the culture medium beforehand. We have, therefore, built a simple and convenient experimental system that uses agar-gel to observe diffusion. Next, we performed computer simulations, based on random-walk concepts, of the experimental diffusion system and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
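    A toy version of the random-walk idea with invented parameters: accumulate Gaussian steps and recover the diffusion coefficient from the one-dimensional Einstein relation MSD = 2Dt.

        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, n_steps, dt, step_sigma = 10_000, 1_000, 1e-3, 1e-3
        steps = rng.normal(0.0, step_sigma, size=(n_particles, n_steps))
        x = steps.cumsum(axis=1)                 # 1-D random-walk trajectories
        msd = (x[:, -1] ** 2).mean()             # mean-squared displacement at t_end
        d_est = msd / (2 * n_steps * dt)         # MSD = 2*D*t  =>  D = MSD / (2t)
        print(f"D ~ {d_est:.3e} (theory: {step_sigma**2 / (2 * dt):.3e})")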

  12. Extracting surface diffusion coefficients from batch adsorption measurement data: application of the classic Langmuir kinetics model.

    PubMed

    Chu, Khim Hoong

    2017-11-09

    Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient, from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity, as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.

  13. Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.

    1997-01-01

    The properties of ceramic matrix composites (CMCs) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMCs. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte-Carlo simulation technique, and the agreement between the two solutions is excellent, as shown via select examples.

  14. A Statistical Analysis of YORP Coefficients

    NASA Astrophysics Data System (ADS)

    McMahon, Jay W.; Scheeres, D.

    2013-10-01

    The YORP (Yarkovsky-O'Keefe-Radzievskii-Paddack) effect is theorized to be a major factor in the evolution of small asteroids (<10 km) in the near-Earth and main belt populations. YORP torques, which originate from absorbed sunlight and subsequent thermal radiation, cause secular changes in an asteroid's spin rate and spin vector orientation (e.g. Rubincam, Journal of Geophysical Research, 1995). This in turn controls the magnitude and direction of the Yarkovsky effect, which causes a drift in an asteroid's heliocentric semi-major axis (Vokrouhlicky and Farinella, Nature, 2000). YORP is also thought to be responsible for the creation of multiple asteroid systems and asteroid pairs through the process of rotational fission (Pravec et al, Nature, 2010). Despite the fact that the YORP effect has been measured on several asteroids (e.g. Taylor et al, Science, 2007 and Kaasalainen et al, Nature, 2007), it has proven very difficult to predict the effect accurately from a shape model due to the sensitivity of the YORP coefficients to shape changes (Statler, Icarus, 2009). This has been especially troublesome for Itokawa, for which a very detailed shape model is available (Scheeres et al, Icarus 2007; Breiter et al, Astronomy & Astrophysics, 2009). In this study, we compute the YORP coefficients for a number of asteroids with detailed shape models available on the PDS-SBN. We then statistically perturb the asteroid shapes at the same resolution, creating a family of YORP coefficients for each shape. Next, we analyze the change in YORP coefficients between a shape model of accuracy obtainable from radar and one including small-scale topography on the surface as was observed on Itokawa. The combination of these families of coefficients will effectively give error bars on our knowledge of the YORP coefficients given a shape model of some accuracy. Finally, we discuss the statistical effect of boulders and craters, and the modification of these results due to recent studies on thermal beaming (Rozitis and Green, Mon. Not. R. Astron. Soc., 2012) and "tangential" YORP (Golubov and Krugly, The Astrophysical Journal Letters, 2012).

  15. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  16. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.

  17. Radial mixing in turbomachines

    NASA Astrophysics Data System (ADS)

    Segaert, P.; Hirsch, Ch.; Deruyck, J.

    1991-03-01

    A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through flow distributions. The method takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade testcase and comparisons are made between the computational results and the extensive experimental data available for this testcase. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, taking into account the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.

  18. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.

  19. Temperature dependence of electron impact ionization coefficient in bulk silicon

    NASA Astrophysics Data System (ADS)

    Ahmed, Mowfaq Jalil

    2017-09-01

    This work presents a modified procedure to compute the electron impact ionization coefficient of silicon for temperatures between 77 and 800 K and electric fields ranging from 70 to 400 kV/cm. The ionization coefficients are computed from the electron momentum distribution function by solving the Boltzmann transport equation (BTE). The solution is obtained by combining a Legendre polynomial expansion with the BTE, and the resulting equations are solved by a difference-differential method using MATLAB®. Six (X) equivalent ellipsoidal and non-parabolic valleys of the conduction band of silicon are taken into account. Concerning the scattering mechanisms, intravalley acoustic scattering, non-polar optical scattering and impact ionization (II) scattering are taken into consideration. This investigation showed that the ionization coefficients decrease with increasing temperature. The overall results are in good agreement with previously reported experimental and theoretical data, predominantly at high electric fields.

  20. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
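    The c-to-b step rests on the standard inherent-optical-property identity c = a + b (beam attenuation = absorption + scattering), so once an absorption profile a is available the scattering profile is a one-line subtraction; the ~10% accuracy quoted above refers to the empirical regressions, not to this toy sketch.

        import numpy as np

        def scattering_from_attenuation(c_profile, a_profile):
            # b = c - a, clipped to keep the scattering coefficient non-negative
            c = np.asarray(c_profile, dtype=float)
            a = np.asarray(a_profile, dtype=float)
            return np.clip(c - a, 0.0, None)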

  1. Effect of the computational domain size and shape on the self-diffusion coefficient in a Lennard-Jones liquid.

    PubMed

    Kikugawa, Gota; Ando, Shotaro; Suzuki, Jo; Naruke, Yoichi; Nakano, Takeo; Ohara, Taku

    2015-01-14

    In the present study, molecular dynamics (MD) simulations on the monatomic Lennard-Jones liquid in a periodic boundary system were performed in order to elucidate the effect of the computational domain size and shape on the self-diffusion coefficient measured by the system. So far, the system size dependence in cubic computational domains has been intensively investigated and these studies showed that the diffusion coefficient depends linearly on the inverse of the system size, which is theoretically predicted based on the hydrodynamic interaction. We examined the system size effect not only in the cubic cell systems but also in rectangular cell systems which were created by changing one side length of the cubic cell with the system density kept constant. As a result, the diffusion coefficient in the direction perpendicular to the long side of the rectangular cell significantly increases more or less linearly with the side length. On the other hand, the diffusion coefficient in the direction along the long side is almost constant or slightly decreases. Consequently, anisotropy of the diffusion coefficient emerges in a rectangular cell with periodic boundary conditions even in a bulk liquid simulation. This unexpected result is of critical importance because rectangular fluid systems confined in nanospace, which are present in realistic nanoscale technologies, have been widely studied in recent MD simulations. In order to elucidate the underlying mechanism for this serious system shape effect on the diffusion property, the correlation structures of particle velocities were examined.
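    A hedged sketch of how the reported anisotropy would be measured: per-axis diffusion coefficients from unwrapped trajectories via the Einstein relation MSD_x(t) = 2 D_x t. The (frames, particles, 3) input layout and the time step are assumptions.

        import numpy as np

        def axis_diffusion(positions, dt):
            disp = positions - positions[0]      # displacement from t = 0
            msd = (disp ** 2).mean(axis=1)       # (frames, 3), particle-averaged
            t = np.arange(len(positions)) * dt
            half = len(t) // 2                   # skip the short-time regime
            slopes = [np.polyfit(t[half:], msd[half:, k], 1)[0] for k in range(3)]
            return np.array(slopes) / 2.0        # D_x, D_y, D_z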

  2. Thermal Rate Coefficients for the Astrochemical Process C + CH⁺ → C₂⁺ + H by Ring Polymer Molecular Dynamics.

    PubMed

    Rampino, Sergio; Suleimanov, Yury V

    2016-12-22

    Thermal rate coefficients for the astrochemical reaction C + CH⁺ → C₂⁺ + H were computed in the temperature range 20-300 K using a novel rate theory based on ring polymer molecular dynamics (RPMD) on a recently published bond-order-based potential energy surface, and compared with previous Langevin capture model (LCM) and quasi-classical trajectory (QCT) calculations. Results show that there is a significant discrepancy between the RPMD rate coefficients and the previous theoretical results, which can lead to overestimation of the rate coefficients for the title reaction by several orders of magnitude at very low temperatures. We argue that this can be attributed to a very challenging energy profile along the reaction coordinate for the title reaction, not taken into account in extenso by either the LCM or QCT approximation. In the absence of any rigorous quantum mechanical or experimental results, the computed RPMD rate coefficients represent state-of-the-art estimates to be included in astrochemical databases and kinetic networks.

  3. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients. Revised

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2002-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
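    As an illustration of what generating tabular functions from least-squares coefficients involves, here is the classic 7-coefficient NASA polynomial form evaluated in Python (the current NASA Glenn files use a 9-coefficient variant, and CAP's unit handling and output formatting are ignored in this sketch):

        import numpy as np

        def nasa7_properties(a, t):
            # a: coefficients a1..a7 for one temperature interval; t: temperature, K
            a1, a2, a3, a4, a5, a6, a7 = a
            cp_r = a1 + a2*t + a3*t**2 + a4*t**3 + a5*t**4                      # Cp/R
            h_rt = a1 + a2*t/2 + a3*t**2/3 + a4*t**3/4 + a5*t**4/5 + a6/t      # H/RT
            s_r = a1*np.log(t) + a2*t + a3*t**2/2 + a4*t**3/3 + a5*t**4/4 + a7  # S/R
            return cp_r, h_rt, s_r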

  4. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2001-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.

  5. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
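    A minimal stochastic-EnKF parameter update of the kind used for the Manning's n inference, assuming a forward_model that maps a parameter vector to predicted observations; the function name, array sizes, and noise model are hypothetical placeholders, and inflation/localization are omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        def enkf_update(theta, forward_model, data, obs_std):
            # theta: (n_ens, n_par) parameter ensemble; data: (n_obs,) observations
            y = np.stack([forward_model(t) for t in theta])    # predicted observations
            th_a = theta - theta.mean(axis=0)                  # parameter anomalies
            y_a = y - y.mean(axis=0)                           # prediction anomalies
            n = len(theta) - 1
            c_ty = th_a.T @ y_a / n                            # cross-covariance
            c_yy = y_a.T @ y_a / n + np.eye(y.shape[1]) * obs_std**2
            gain = c_ty @ np.linalg.inv(c_yy)                  # Kalman gain
            perturbed = data + rng.normal(0.0, obs_std, size=y.shape)
            return theta + (perturbed - y) @ gain.T            # updated ensemble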

  6. Computational Analysis of a Wing Designed for the X-57 Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Jeffrey K.; Viken, Sally A.; Carter, Melissa B.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of the wing for the distributed electric propulsion X-57 Maxwell airplane configuration at cruise and takeoff/landing conditions was completed. Two unstructured-mesh, Navier-Stokes computational fluid dynamics methods, FUN3D and USM3D, were used to predict the wing performance. The goal of the X-57 wing and distributed electric propulsion system design was to meet or exceed the required lift coefficient of 3.95 for a stall speed of 58 knots, with a cruise speed of 150 knots at an altitude of 8,000 ft. The X-57 Maxwell airplane was designed with a small, high aspect ratio cruise wing with a high cruise lift coefficient (0.75) at an angle of attack of 0deg. The cruise propulsors at the wingtip rotate counter to the wingtip vortex and reduce induced drag by 7.5 percent at an angle of attack of 0.6deg. The unblown maximum lift coefficient of the high-lift wing (with the 30deg flap setting) is 2.439. The stall speed goal performance metric was confirmed with a blown-wing computed effective lift coefficient of 4.202. The lift augmentation from the high-lift, distributed electric propulsion system is 1.7. The predicted cruise wing drag coefficient of 0.02191 is 0.00076 above the drag allotted for the wing in the original estimate. However, the predicted drag overage for the wing would only use 10.1 percent of the original estimated drag margin, which is 0.00749.

  7. Weibull crack density coefficient for polydimensional stress states

    NASA Technical Reports Server (NTRS)

    Gross, Bernard; Gyekenyesi, John P.

    1989-01-01

    A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.

  8. Bulk-Flow Analysis of Hybrid Thrust Bearings for Advanced Cryogenic Turbopumps

    NASA Technical Reports Server (NTRS)

    SanAndres, Luis

    1998-01-01

    A bulk-flow analysis and computer program for prediction of the static load performance and dynamic force coefficients of angled injection, orifice-compensated hydrostatic/hydrodynamic thrust bearings have been completed. The product of the research is an efficient computational tool for the design of high-speed thrust bearings for cryogenic fluid turbopumps. The study addresses the needs of a growing technology that requires reliable fluid film bearings to provide the maximum operating life with optimum controllable rotordynamic characteristics at the lowest cost. The motion of a cryogenic fluid on the thin film lands of a thrust bearing is governed by a set of bulk-flow mass and momentum conservation and energy transport equations. Mass flow conservation and a simple model for momentum transport within the hydrostatic bearing recesses are also accounted for. The bulk-flow model includes flow turbulence with fluid inertia advection, Coriolis and centrifugal acceleration effects on the bearing recesses and film lands. The cryogenic fluid properties are obtained from realistic thermophysical equations of state. Turbulent bulk-flow shear parameters are based on Hirs' model with Moody's friction factor equations, allowing a simple simulation for machined bearing surface roughness. A perturbation analysis leads to zeroth-order nonlinear equations governing the fluid flow for the thrust bearing operating at a static equilibrium position, and first-order linear equations describing the perturbed fluid flow for small amplitude shaft motions in the axial direction. Numerical solution to the zeroth-order flow field equations renders the bearing flow rate, thrust load, drag torque and power dissipation. Solution to the first-order equations determines the axial stiffness, damping and inertia force coefficients. The computational method uses well established algorithms and generic subprograms available from prior developments. The Fortran90 computer program hydrothrust runs on a Windows 95/NT personal computer. The program, help files and examples are licensed by the Texas A&M University Technology License Office. The study of the static and dynamic performance of two hydrostatic/hydrodynamic bearings demonstrates the importance of centrifugal and advection fluid inertia effects for operation at high rotational speeds. The first example considers a conceptual hydrostatic thrust bearing for an advanced liquid hydrogen turbopump operating at 170,000 rpm. The large axial stiffness and damping coefficients of the bearing should provide accurate control and axial positioning of the turbopump and also allow for unshrouded impellers, therefore increasing the overall pump efficiency. The second bearing uses the refrigerant R134a, and its application in oil-free air conditioning compressors is of great technological importance and commercial value. The computed predictions reveal that the LH2 bearing load capacity and flow rate increase with the recess pressure (i.e. increasing orifice diameters). The bearing axial stiffness has a maximum for a recess pressure ratio of approx. 0.55, while the axial damping coefficient decreases as the recess pressure ratio increases. The computed results from three flow models are compared. These models are a) inertialess, b) fluid inertia at recess edges only, and c) full fluid inertia at both recess edges and film lands.
The full inertia model shows the lowest flow rates, axial load capacity and stiffness coefficient, but on the other hand renders the largest damping and inertia coefficients. The most important findings are related to the reduction of the outflow through the inner radius and the appearance of subambient pressures. The performance of the refrigerant hybrid thrust bearing is evaluated at two operating speeds and pressure drops. The computed results are presented in dimensionless form to evidence consistent trends in the bearing performance characteristics. As the applied axial load increases, the bearing film thickness and flow rate decrease while the recess pressure increases. The axial stiffness coefficient shows a maximum for a certain intermediate load while the damping coefficient steadily increases. The computed results evidence the paramount importance of centrifugal fluid inertia at low recess pressures (i.e. low loads), where there is actually an inflow through the bearing inner diameter, accompanied by subambient pressures just downstream of the bearing recess edge. These results are solely due to centrifugal fluid inertia and advection transport effects. Recommendations include the extension of the computer program to handle flexure-pivot tilting pad hybrid bearings and the ability to calculate moment coefficients for shaft angular misalignments.

  9. A sensitivity study of the effects of evaporation/condensation accommodation coefficients on transient heat pipe modeling

    NASA Astrophysics Data System (ADS)

    Hall, Michael L.; Doster, J. Michael

    1990-03-01

    The dynamic behavior of liquid metal heat pipe models is strongly influenced by the choice of evaporation and condensation modeling techniques. Classic kinetic theory descriptions of the evaporation and condensation processes are often inadequate for real situations; empirical accommodation coefficients are commonly utilized to reflect nonideal mass transfer rates. The complex geometries and flow fields found in proposed heat pipe systems cause considerable deviation from the classical models. The THROHPUT code, which has been described in previous works, was developed to model transient liquid metal heat pipe behavior from frozen startup conditions to steady state full power operation. It is used here to evaluate the sensitivity of transient liquid metal heat pipe models to the choice of evaporation and condensation accommodation coefficients. Comparisons are made with experimental liquid metal heat pipe data. It is found that heat pipe behavior can be predicted with the proper choice of the accommodation coefficients. However, the common assumption of spatially constant accommodation coefficients is found to be a limiting factor in the model.

  10. Critique and sensitivity analysis of the compensation function used in the LMS Hudson River striped bass models. Environmental Sciences Division publication No. 944

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Winkle, W.; Christensen, S.W.; Kauffman, G.

    1976-12-01

    The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.

  11. Correction for frequency-dependent hydrophone response to nonlinear pressure waves using complex deconvolution and rarefactional filtering: application with fiber optic hydrophones.

    PubMed

    Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R

    2015-01-01

    Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced mean coefficient of variation (COV) (for 6 fiber-optic hydrophones) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
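    Conceptually, the correction is a frequency-domain division by the complex sensitivity; the sketch below shows that inverse-filter step, with a crude magnitude threshold standing in for the rarefactional filtering the paper actually designs. The sensitivity array is assumed to be sampled on the FFT frequency grid of the waveform.

        import numpy as np

        def deconvolve(voltage, sensitivity, eps=1e-3):
            v = np.fft.rfft(voltage)                     # measured voltage spectrum
            s = np.asarray(sensitivity, dtype=complex)   # complex sensitivity M(f), V/Pa
            keep = np.abs(s) > eps * np.abs(s).max()     # crude band limit / regularization
            p = np.zeros_like(v)
            p[keep] = v[keep] / s[keep]                  # inverse filter: P(f) = V(f) / M(f)
            return np.fft.irfft(p, n=len(voltage))       # pressure waveform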

  12. [Similarity system theory to evaluate similarity of chromatographic fingerprints of traditional Chinese medicine].

    PubMed

    Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu

    2005-03-01

    The similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in TCM chromatograms are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent similarity method using simulated data and experimental data. The correlation coefficient and the cosine of the angle are not sensitive to differences in the data sets, and they remain insensitive even after normalization. Based on similarity system theory, an improved extent similarity method was proposed. The improved extent similarity is more sensitive to differences in the data sets than the correlation coefficient and the cosine of the angle, and, unlike log-transformation, it does not require the character of the data sets to be changed.
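    A quick numerical illustration of the insensitivity being described, using invented peak areas: tripling one minor peak barely moves either measure.

        import numpy as np

        a = np.array([100.0, 80.0, 60.0, 40.0, 20.0, 10.0])  # reference peak areas
        b = a.copy()
        b[-1] *= 3.0                                         # triple one minor peak
        cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        pearson = np.corrcoef(a, b)[0, 1]
        print(f"cosine = {cosine:.4f}, Pearson r = {pearson:.4f}")  # both stay near 1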

  13. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…

  14. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward to use because sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
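    For readers unfamiliar with the two process-model statistics, a sketch following their common definitions in the groundwater-calibration literature (all inputs hypothetical): composite scaled sensitivities from a weighted Jacobian, and parameter correlation coefficients from the corresponding covariance matrix.

        import numpy as np

        def css_and_pcc(jac, b, w):
            # jac: (n_obs, n_par) sensitivities dy_i/db_j; b: parameter values;
            # w: observation weights
            dss = jac * b[None, :] * np.sqrt(w)[:, None]     # dimensionless scaled sensitivities
            css = np.sqrt((dss ** 2).mean(axis=0))           # composite scaled sensitivity
            cov = np.linalg.inv(jac.T @ (w[:, None] * jac))  # parameter covariance (up to s**2)
            d = np.sqrt(np.diag(cov))
            pcc = cov / np.outer(d, d)                       # parameter correlation coefficients
            return css, pcc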

  15. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
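    The POD step itself is compact; a sketch under the usual snapshot formulation (names assumed; the nonlinear-term approximations that make the ROM fast online are not shown): stack full-order solutions as columns, take an SVD, and truncate by captured energy.

        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            # snapshots: (n_dof, n_snapshots) matrix of full-order solutions
            u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            frac = np.cumsum(s ** 2) / np.sum(s ** 2)   # captured "energy"
            r = int(np.searchsorted(frac, energy)) + 1  # smallest basis reaching target
            return u[:, :r]                             # one column per POD mode

        # reduced coordinates: a = basis.T @ q; the Galerkin ROM evolves a, and
        # the full state is reconstructed as q ~ basis @ a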

  16. Establishing and validating the fluorescent amyloid ligand h-FTAA (heptamer formyl thiophene acetic acid) to identify transthyretin amyloid deposits in carpal tunnel syndrome.

    PubMed

    Hahn, Katharina; Nilsson, K Peter R; Hammarström, Per; Urban, Peter; Meliss, Rolf Rüdiger; Behrens, Hans-Michael; Krüger, Sandra; Röcken, Christoph

    2017-06-01

    Transthyretin-derived (ATTR) amyloidosis is a frequent finding in carpal tunnel syndrome. We tested the following hypotheses: the novel fluorescent amyloid ligand heptamer formyl thiophene acetic acid (h-FTAA) has a superior sensitivity for the detection of amyloid compared with Congo red staining; amyloid load correlates with patient gender and/or patient age. We retrieved 208 resection specimens obtained from 184 patients with ATTR amyloid in the carpal tunnel. Serial sections were stained with Congo red, h-FTAA and an antibody directed against transthyretin (TTR). Stained sections were digitalized and forwarded to computational analyses. The amount of amyloid was correlated with patient demographics. Amyloid stained intensely with h-FTAA and the anti-TTR-antibody. Congo red staining combined with fluorescence microscopy was significantly less sensitive than h-FTAA fluorescence and TTR immunostaining: the highest percentage area was found in TTR-immunostained sections, followed by h-FTAA and Congo red. The Pearson correlation coefficient was 0.8 (Congo red vs. h-FTAA) and 0.9 (TTR vs. h-FTAA). Amyloid load correlated with patient gender, anatomical site and patient age. h-FTAA is a highly sensitive method to detect even small amounts of ATTR amyloid in the carpal tunnel. The staining protocol is easy and h-FTAA may be a much more sensitive procedure to detect amyloid at an earlier stage.

  17. Exact quantum scattering calculation of transport properties for free radicals: OH(X²Π)-helium.

    PubMed

    Dagdigian, Paul J; Alexander, Millard H

    2012-09-07

    Transport properties for OH-He are computed through quantum scattering calculations using the ab initio potential energy surfaces determined by Lee et al. [J. Chem. Phys. 113, 5736 (2000)]. To gauge the effect of the open-shell character of OH and of the anisotropy of the potential on the transport properties, including the collision integrals Ω(1,1) and Ω(2,2), as well as the diffusion coefficient, calculations were performed with the full potential, with the difference potential V(dif) set to zero, and with only the spherical average of the potential. Slight differences (3%-5%) in the computed diffusion coefficient were found between the values obtained using the full potential and the truncated potentials. The computed diffusion coefficients were compared to recent experimental measurements and to those computed with a Lennard-Jones (LJ) 12-6 potential. The values obtained with the full potential were slightly higher than the experimental values. The LJ 12-6 potential was found to underestimate the variation with temperature as compared to that obtained using the full OH-He ab initio potential.
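    For contrast with the exact quantum treatment above, the textbook Chapman-Enskog engineering estimate builds a binary diffusion coefficient from a reduced collision integral (the form tabulated in Poling, Prausnitz and O'Connell; all inputs here are assumptions):

        import numpy as np

        def chapman_enskog_d(t, p, m_a, m_b, sigma, omega_11):
            # D in cm^2/s for t in K, p in bar, sigma in angstroms, molar masses
            # in g/mol; omega_11 is the reduced collision integral
            m_ab = 2.0 / (1.0 / m_a + 1.0 / m_b)   # combined molar mass
            return 0.00266 * t ** 1.5 / (p * np.sqrt(m_ab) * sigma ** 2 * omega_11)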

  18. Journal and Wave Bearing Impedance Calculation Software

    NASA Technical Reports Server (NTRS)

    Hanford, Amanda; Campbell, Robert

    2012-01-01

    The wave bearing software suite is a MATLAB application that computes bearing properties for user-specified wave bearing conditions, as well as plain journal bearings. Wave bearings are fluid film journal bearings with multi-lobed wave patterns around the circumference of the bearing surface. In this software suite, the dynamic coefficients are output in a form suitable for easy implementation in a finite element model used in rotor dynamics analysis. The software has a graphical user interface (GUI) for inputting bearing geometry parameters, and uses MATLAB's structure interface for ease of interpreting data. This innovation was developed to provide the stiffness and damping components of wave bearing impedances. The computational method for computing bearing coefficients was originally designed for plain journal bearings and tilting pad bearings. Modifications to include a wave bearing profile consisted of changing the film thickness profile, given by an equation, and writing an algorithm to locate the integration limits for each fluid region. Careful consideration was needed to implement the correct integration limits while computing the dynamic coefficients, depending on the form of the input/output variables specified in the algorithm.

  19. Modification of Hazen's equation in coarse grained soils by soft computing techniques

    NASA Astrophysics Data System (ADS)

    Kaynar, Oguz; Yilmaz, Isik; Marschalko, Marian; Bednarik, Martin; Fojtova, Lucie

    2013-04-01

    A relationship between the coefficient of permeability (k) and the effective grain size (d10) was first proposed by Hazen and later extended by other researchers. Although many attempts have been made to estimate k, the correlation coefficients (R²) of the models were generally lower than ~0.80, and whole grain size distribution curves were not included in the assessments. Soft computing techniques such as artificial neural networks, fuzzy inference systems, genetic algorithms, etc., and their hybrids are now being successfully used as alternative tools. In this study, the use of some soft computing techniques such as Artificial Neural Networks (ANNs) (MLP, RBF, etc.) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) for prediction of the permeability of coarse grained soils is described, and Hazen's equation is then modified. It was found that the soft computing models exhibited high performance in predicting the permeability coefficient. Although the four different kinds of ANN algorithms showed similar prediction performance, the results of MLP were found to be relatively more accurate than those of the RBF models. The most reliable prediction was obtained from the ANFIS model.
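    Hazen's baseline relation that these soft-computing models generalize is a one-liner; a sketch with the commonly used constant (k in cm/s for d10 in mm; C is empirical, often taken near 1.0 for clean sands, with published values roughly 0.4-1.2):

        def hazen_permeability(d10_mm, c=1.0):
            # Hazen: k = C * d10^2, valid roughly for clean sands with d10 ~ 0.1-3 mm
            return c * d10_mm ** 2                 # cm/s

        print(hazen_permeability(0.2))             # ~0.04 cm/s for d10 = 0.2 mm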

  20. Natural dye sensitizer from cassava (Manihot utilissima) leaves extract and its adsorption onto TiO2 photo-anode

    NASA Astrophysics Data System (ADS)

    Nurlela; Wibowo, R.; Gunlazuardi, J.

    2017-04-01

    The interaction between TiO2 and dye sensitizers has been studied. The chlorophyll present in the crude leaf extract (CLE-dye) from cassava (Manihot utilissima) was immobilized onto the photo-anode, which consists of TiO2 supported on fluorine-doped tin oxide (SnO2-F) glass. The TiO2 was prepared by the Rapid Breakdown Anodization (RBA) method and then immobilized onto the SnO2-F coated glass using the doctor blade technique, to give a CLE-dye/TiO2/SnO2-F/glass photo-anode. The prepared photo-anode was characterized by UV-Vis-DRS, FTIR, XRD, SEM, and electrochemical and spectro-electrochemical methods. In this study, the HOMO (highest occupied molecular orbital) and LUMO (lowest unoccupied molecular orbital) energy levels of the CLE-dye were determined empirically by cyclic voltammetry, while spectro-electrochemistry was used to determine the degradation and formation coefficients of the dye as well as the diffusion coefficient of hole recombination. Good anchoring between TiO2 and the dye extract (CLE-dye) can be seen from the dye's LUMO energy level (-4.26 eV), which approaches the conduction band of TiO2 (-4.3 eV). The degradation and formation coefficients of the CLE-dye showed quasi-reversible behavior, and the diffusion coefficient of hole recombination was small, indicating that the extract is quite suitable as a sensitizer in a dye-sensitized solar cell.

  1. PAN AIR: A Computer Program for Predicting Subsonic or Supersonic Linear Potential Flows About Arbitrary Configurations Using a Higher Order Panel Method. Volume 1; Theory Document (Version 1.1)

    NASA Technical Reports Server (NTRS)

    Magnus, Alfred E.; Epton, Michael A.

    1981-01-01

    An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.

  2. Perceived Pain Extent is Not Associated With Widespread Pressure Pain Sensitivity, Clinical Features, Related Disability, Anxiety, or Depression in Women With Episodic Migraine.

    PubMed

    Fernández-de-Las-Peñas, Cesar; Falla, Deborah; Palacios-Ceña, María; Fuensalida-Novo, Stella; Arias-Buría, Jose L; Schneebeli, Alessandro; Arendt-Nielsen, Lars; Barbero, Marco

    2018-03-01

    People with migraine present with varying pain extent, and an expanded distribution of perceived pain may reflect central sensitization. The relationship between pain extent and clinical features, psychological outcomes, related disability, and pressure pain sensitivity in migraine has been poorly investigated. Our aim was to investigate whether perceived pain extent, assessed from pain drawings, relates to measures of pressure pain sensitivity, clinical and psychological outcomes, and related disability in women with episodic migraine. A total of 72 women with episodic migraine completed pain drawings, which were subsequently digitized, allowing pain extent to be calculated using novel software. Pressure pain thresholds were assessed bilaterally over the temporalis muscle (trigeminal area), the cervical spine (extratrigeminal area), and the tibialis anterior muscle (distant pain-free area). Clinical features of migraine, migraine-related disability (Migraine Disability Assessment questionnaire [MIDAS]), and anxiety and depression (Hospital Anxiety and Depression Scale [HADS]) were also assessed. Spearman ρ correlation coefficients were computed to reveal correlations between pain extent and the remaining outcomes. No significant associations were observed between pain extent and pressure pain thresholds in the trigeminal, extratrigeminal, or distant pain-free areas, migraine pain features, psychological variables including anxiety and depression, or migraine-related disability. Pain extent within the trigeminocervical area was not associated with any of the measured clinical outcomes and was not related to the degree of pressure pain sensitization in women with episodic migraine. Further research is needed to determine whether the presence of expanded pain areas outside the trigeminal area can play a relevant role in the sensitization processes in migraine.
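
    A minimal sketch of the correlation step, with hypothetical arrays standing in for the measured pain extents and pressure pain thresholds:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        pain_extent = rng.uniform(0.5, 15.0, size=72)        # hypothetical % of drawing area
        ppt_temporalis = rng.uniform(100.0, 350.0, size=72)  # hypothetical thresholds, kPa

        rho, p_value = spearmanr(pain_extent, ppt_temporalis)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")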

  3. Sensitivity and specificity of the method used for ascertainment of healthcare-associated infections in the second Slovenian national prevalence survey.

    PubMed

    Serdt, Mojca; Lejko Zupanc, Tatjana; Korošec, Aleš; Klavs, Irena

    2016-12-01

    The second Slovenian national healthcare-associated infections (HAIs) prevalence survey (SNHPS) was conducted in acute-care hospitals in 2011. The objective was to assess the sensitivity and specificity of the method used for the ascertainment of six types of HAIs (bloodstream infections, catheter-associated infections, lower respiratory tract infections, pneumonia, surgical site infections, and urinary tract infections) in the University Medical Centre Ljubljana (UMCL). A cross-sectional study was conducted in patients surveyed in the SNHPS in the UMCL using a retrospective medical chart review (RMCR) and European HAI surveillance definitions. The sensitivity and specificity of the method used in the SNHPS, with RMCR as the reference, were computed for the ascertainment of patients with any of the six selected types of HAIs and for individual types of HAIs. Agreement between the SNHPS and RMCR results was analyzed using Cohen's kappa coefficient. 1474 of 1742 (84.6%) patients surveyed in the SNHPS were included in the RMCR. The sensitivity of the SNHPS method for detecting any of the six HAIs was 90% (95% confidence interval (CI): 81%-95%) and the specificity 99% (95% CI: 98%-99%). The sensitivity by type of HAI ranged from 63% (lower respiratory tract infections) to 92% (bloodstream infections). Specificity was at least 99% for all types of HAIs. Agreement between the two data collection approaches for HAIs overall was very good (κ=0.83). The overall sensitivity of the SNHPS method for ascertaining HAIs was high and the specificity very high, suggesting that the estimated prevalence of HAIs in the SNHPS was credible.
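
    The quantities reported above follow from a 2x2 agreement table; a minimal sketch with hypothetical counts (not the UMCL data):

        def survey_agreement(tp, fp, fn, tn):
            """Sensitivity, specificity and Cohen's kappa from a 2x2 agreement table."""
            n = tp + fp + fn + tn
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            p_observed = (tp + tn) / n
            # chance agreement from the marginal totals
            p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
            kappa = (p_observed - p_chance) / (1.0 - p_chance)
            return sensitivity, specificity, kappa

        # hypothetical counts for illustration only
        print(survey_agreement(tp=90, fp=14, fn=10, tn=1360))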

  4. Calculation of the lateral-dynamic stability of aircraft

    NASA Technical Reports Server (NTRS)

    Raikh, A

    1952-01-01

    Graphs and formulas are given with the aid of which all the aerodynamic coefficients required for computing the lateral dynamic stability can be determined. A number of numerical examples are given for obtaining the stability derivatives and solving the characteristic-stability equation. Approximate formulas are derived with the aid of which rapid preliminary computations may be made and the stability coefficients corrected for certain modifications of the airplane. A derivation of the lateral-dynamic-stability equations is included.

  5. Computationally Efficient Modeling of Hydrocarbon Oxidation Chemistry and Flames Using Constituents and Species

    DTIC Science & Technology

    2012-02-10

    ... = (1/N_c) Σ_l D_il N̂_l = (1/N_c) Σ_l D_il (Σ_k Ĉ_lk N_k)  (19). Finally, it is necessary to compute, from the D_il coefficients, a global diffusion coefficient ... there exists a mole fraction X_Ck such that N_k ≈ N_c X_Ck. Therefore, Σ_(l,k) D_il Ĉ_lk X_Ck = (1/N_c) Σ_l D_il N̂_l = (1/N_c) Σ_l D_il (Σ_k Ĉ_lk N_k)  (20). Finally, it is ...

  6. Transport, biodegradation and isotopic fractionation of chlorinated ethenes: modeling and parameter estimation methods

    NASA Astrophysics Data System (ADS)

    Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez

    2005-01-01

    An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
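
    The forward model couples first-order decay with Rayleigh-type isotope fractionation. A minimal sketch of the two relationships, using the common approximation δ ≈ δ0 + ε ln f and hypothetical parameter values:

        import numpy as np

        def decay_and_isotopes(c0, delta0, lam, eps, t):
            """First-order decay plus Rayleigh-type isotopic enrichment.

            c0     -- initial concentration
            delta0 -- initial isotopic signature (per mil)
            lam    -- first-order decay coefficient (1/day)
            eps    -- enrichment factor (per mil, negative for normal fractionation)
            """
            c = c0 * np.exp(-lam * t)          # remaining concentration
            f = c / c0                         # remaining fraction
            delta = delta0 + eps * np.log(f)   # simplified Rayleigh equation
            return c, delta

        t = np.linspace(0.0, 100.0, 5)
        print(decay_and_isotopes(c0=1.0, delta0=-25.0, lam=0.05, eps=-5.0, t=t))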

  7. Plumbing the depths of batholiths

    USGS Publications Warehouse

    Zen, E.-A.

    1989-01-01

    Knowledge of the pressure of consolidation of a pluton and the pressure-time history of magmatic evolution should allow better understanding of the tectonic and thermal history of the crust. Available methods to estimate pressures of plutons are mainly for those of consolidation. These are either extrinsic, based on geological context, or intrinsic, based on mineral texture, mineral assemblage, fluid inclusions, mineral inclusions, apparent cooling rates, and mineral chemistry. The methods of lattice-dimension matching of mineral inclusions and of detailed chemistry for zoned minerals could lead to pressure-time reconstructions. Future barometers based on mineral chemistry should use atomic species that have low diffusion coefficients and whose values are not sensitive to computational schemes and cumulative analytical errors. Aluminum and silicon in coexisting hornblende, biotite, pyroxene, plagioclase, or garnet are reasonable candidate phases for barometry. -from Author

  8. Fiber shape effects on metal matrix composite behavior

    NASA Technical Reports Server (NTRS)

    Brown, H. C.; Lee, H.-J.; Chamis, C. C.

    1992-01-01

    The effects of different fiber shapes on the behavior of a SiC/Ti-15 metal matrix composite is computationally simulated. A three-dimensional finite element model consisting of a group of nine unidirectional fibers is used in the analysis. The model is employed to represent five different fiber shapes: a circle, an ellipse, a kidney, and two different cross shapes. The distribution of microstresses and the composite material properties, such as moduli, coefficients of thermal expansion, and Poisson's ratios, are obtained from the finite element analysis for the various fiber shapes. Comparisons of these results are used to determine the sensitivity of the composite behavior to the different fiber shapes and assess their potential benefits. No clear benefits result from different fiber shapes though there are some increases/decreases in isolated properties.

  9. Determination of stream reaeration coefficients by use of tracers

    USGS Publications Warehouse

    Kilpatrick, F.A.; Rathbun, R.E.; Yotsukura, N.; Parker, G.W.; DeLong, L.L.

    1987-01-01

    Stream reaeration is the physical absorption of oxygen from the atmosphere by a flowing stream. This is the primary process by which a stream replenishes the oxygen consumed in the biodegradation of organic wastes. Prior to 1965, reaeration rate coefficients could be estimated only by indirect methods. In 1965, a direct method of measuring stream reaeration coefficients was developed in which a radioactive tracer gas was injected into a stream--the tracer gas being desorbed from the stream inversely to how oxygen would be absorbed. The technique has since been modified by substituting hydrocarbon gases for the radioactive tracer gas. This manual describes the slug-injection and constant-rate injection methods of performing gas-tracer desorption measurements. Emphasis is on the use of rhodamine WT dye as a relatively conservative tracer and propane as the nonconservative gas tracer, on planning field tests, methods of injection, sampling and analysis, and computational techniques to compute desorption and reaeration coefficients.
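
    In outline, the desorption coefficient follows from the decline of the dye-normalized gas-tracer concentration between two sampling stations, and a gas-to-oxygen conversion factor (a value of about 1.39 is widely cited for propane) then yields the reaeration coefficient. The sketch below is schematic, with hypothetical numbers:

        import math

        def reaeration_from_tracer(gas_up, dye_up, gas_down, dye_down,
                                   travel_time_d, conversion=1.39):
            """Desorption and reaeration coefficients (1/day) from paired tracer data.

            Gas concentrations are normalized by the conservative dye tracer to
            remove dilution and dispersion effects; `conversion` relates gas
            desorption to oxygen absorption.
            """
            ratio = (gas_up / dye_up) / (gas_down / dye_down)
            k_gas = math.log(ratio) / travel_time_d   # gas desorption coefficient
            return k_gas, conversion * k_gas          # reaeration coefficient K2

        print(reaeration_from_tracer(gas_up=12.0, dye_up=4.0,
                                     gas_down=5.0, dye_down=3.2,
                                     travel_time_d=0.25))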

  10. Calculation of open and closed system elastic coefficients for multicomponent solids

    NASA Astrophysics Data System (ADS)

    Mishin, Y.

    2015-06-01

    Thermodynamic equilibrium in multicomponent solids subject to mechanical stresses is a complex nonlinear problem whose exact solution requires extensive computations. A few decades ago, Larché and Cahn proposed a linearized solution of the mechanochemical equilibrium problem by introducing the concept of open system elastic coefficients [Acta Metall. 21, 1051 (1973), 10.1016/0001-6160(73)90021-7]. Using the Ni-Al solid solution as a model system, we demonstrate that open system elastic coefficients can be readily computed by semigrand canonical Monte Carlo simulations in conjunction with the shape fluctuation approach. Such coefficients can be derived from a single simulation run, together with other thermodynamic properties needed for prediction of compositional fields in solid solutions containing defects. The proposed calculation approach enables streamlined solutions of mechanochemical equilibrium problems in complex alloys. Second order corrections to the linear theory are extended to multicomponent systems.

  11. Comparison of monoenergetic photon organ dose rate coefficients for stylized and voxel phantoms submerged in air

    DOE PAGES

    Bellamy, Michael B.; Hiller, Mauritius M.; Dewji, Shaheen A.; ...

    2016-02-01

    As part of a broader effort to calculate effective dose rate coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, soil or water, age-specific stylized phantoms have been employed to determine dose coefficients relating dose rate to organs and tissues in the body. In this article, dose rate coefficients computed using the International Commission on Radiological Protection reference adult male voxel phantom are compared with values computed using the Oak Ridge National Laboratory adult male stylized phantom in an air submersion exposure geometry. Monte Carlo calculations for both phantoms were performed for monoenergetic source photons in the range of 30 keV to 5 MeV. These calculations largely result in differences under 10 % for photon energies above 50 keV, and it can be expected that both models show comparable results for the environmental sources of radionuclides.

  12. Comparison of monoenergetic photon organ dose rate coefficients for stylized and voxel phantoms submerged in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellamy, Michael B.; Hiller, Mauritius M.; Dewji, Shaheen A.

    As part of a broader effort to calculate effective dose rate coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, soil or water, age-specific stylized phantoms have been employed to determine dose coefficients relating dose rate to organs and tissues in the body. In this article, dose rate coefficients computed using the International Commission on Radiological Protection reference adult male voxel phantom are compared with values computed using the Oak Ridge National Laboratory adult male stylized phantom in an air submersion exposure geometry. Monte Carlo calculations for both phantoms were performed for monoenergetic source photons in the range of 30 keV to 5 MeV. These calculations largely result in differences under 10 % for photon energies above 50 keV, and it can be expected that both models show comparable results for the environmental sources of radionuclides.

  13. [Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].

    PubMed

    Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin

    2018-04-01

    Due to global climate change and frequent human activities in recent years, the purely stochastic component of a hydrological sequence is often mixed with one or several variation components, including jump, trend, period, and dependency. It is therefore necessary to clarify which indices should be used to quantify the degree of variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to the different variants. When the hydrological sequence had jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had a periodic component, only the Bartels statistic could detect the mutation of the sequence. When the sequence had dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For all four variation types, both the Hurst variability and the Bartels variability increased as the variation range increased; they can therefore be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin has experienced warming in recent years, with much higher temperature variability in the upper and lower reaches. This case study demonstrates the practicability of the proposed method.
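
    A minimal rescaled-range (R/S) sketch of one common way to estimate the Hurst coefficient of a sequence (the paper's exact estimator may differ):

        import numpy as np

        def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
            """Estimate the Hurst coefficient of a 1-D series by rescaled-range analysis."""
            x = np.asarray(x, dtype=float)
            log_n, log_rs = [], []
            for n in window_sizes:
                rs_vals = []
                for start in range(0, len(x) - n + 1, n):
                    w = x[start:start + n]
                    dev = np.cumsum(w - w.mean())   # cumulative departures from the mean
                    r = dev.max() - dev.min()       # range of the cumulative series
                    s = w.std()
                    if s > 0:
                        rs_vals.append(r / s)
                if rs_vals:
                    log_n.append(np.log(n))
                    log_rs.append(np.log(np.mean(rs_vals)))
            return np.polyfit(log_n, log_rs, 1)[0]  # slope ~ Hurst coefficient

        rng = np.random.default_rng(1)
        print(hurst_rs(rng.standard_normal(1024)))  # roughly 0.5 for white noise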

  14. The use of computational thermodynamics for the determination of surface tension and Gibbs-Thomson coefficient of multicomponent alloys

    NASA Astrophysics Data System (ADS)

    Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.

    2018-04-01

    The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. This coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite-difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting the liquidus temperature, latent heat, alloy density, surface tension, and Gibbs-Thomson coefficient of Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the method's calculation capabilities for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and are presented as a function of the Cu solute composition.

  15. Local texture descriptors for the assessment of differences in diffusion magnetic resonance imaging of the brain.

    PubMed

    Thomsen, Felix Sebastian Leo; Delrieux, Claudio Augusto; de Luis-García, Rodrigo

    2017-03-01

    Descriptors extracted from magnetic resonance imaging (MRI) of the brain can be employed to locate and characterize a wide range of pathologies. Scalar measures are typically derived within a single-voxel unit, but neighborhood-based texture measures can also be applied. In this work, we propose a new set of descriptors to compute local texture characteristics from scalar measures of diffusion tensor imaging (DTI), such as mean and radial diffusivity and fractional anisotropy. We employ weighted rotationally invariant local operators, namely standard deviation, inter-quartile range, coefficient of variation, quartile coefficient of variation, and skewness. The sensitivity and specificity of these texture descriptors were analyzed with tract-based spatial statistics of the white matter in a diffusion MRI group study of elderly healthy controls, patients with mild cognitive impairment (MCI), and patients with mild or moderate Alzheimer's disease (AD). In addition, robustness against noise was assessed with a realistic diffusion-weighted imaging phantom, and the contamination of the local neighborhood with gray matter was measured. The new texture operators showed an increased ability to find formerly undetected differences between groups compared with conventional DTI methods. In particular, the coefficient of variation, quartile coefficient of variation, standard deviation, and inter-quartile range of the mean and radial diffusivity detected significant differences even between previously indistinguishable groups, such as MCI versus moderate AD and mild versus moderate AD. The analysis provided evidence of low contamination of the local neighborhood with gray matter and high robustness against noise. The local operators applied here enhance the identification and localization of areas of the brain where cognitive impairment takes place and thus make promising extensions in diffusion MRI group studies.
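
    A minimal sketch of such neighborhood operators on a scalar diffusion map, here unweighted and with a random array standing in for a fractional anisotropy volume:

        import numpy as np
        from scipy.ndimage import generic_filter

        def local_cv(window):
            """Coefficient of variation within a local neighborhood."""
            m = window.mean()
            return window.std() / m if m != 0 else 0.0

        def local_iqr(window):
            """Inter-quartile range within a local neighborhood."""
            q75, q25 = np.percentile(window, [75, 25])
            return q75 - q25

        fa_map = np.random.default_rng(2).random((16, 16, 16))  # stand-in for an FA volume
        cv_map = generic_filter(fa_map, local_cv, size=3)
        iqr_map = generic_filter(fa_map, local_iqr, size=3)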

  16. Improving gross count gamma-ray logging in uranium mining with the NGRS probe

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Pérot, B.; Ma, J.-L.; Toubon, H.; Dubille-Auchère, A.

    2018-01-01

    AREVA Mines and the Nuclear Measurement Laboratory of CEA Cadarache are collaborating to improve the sensitivity and precision of uranium concentration measurement by means of gamma ray logging. The determination of uranium concentration in boreholes is performed with the Natural Gamma Ray Sonde (NGRS), based on a NaI(Tl) scintillation detector. The total gamma count rate is converted into uranium concentration using a calibration coefficient measured in concrete blocks with known uranium concentration in the AREVA Mines calibration facility located in Bessines, France. Until now, to take into account gamma attenuation for a variety of borehole diameters, tubing materials, diameters, and thicknesses, and filling fluid densities and compositions, a semi-empirical formula was used to correct the calibration coefficient measured in the Bessines facility. In this work, we propose to use Monte Carlo simulations to improve the gamma attenuation corrections. To this purpose, the NGRS probe and the calibration measurements in the standard concrete blocks have been modeled with the MCNP computer code. The calibration coefficient determined by simulation, 5.3 s^-1 ppmU^-1 ± 10%, is in good agreement with the one measured in Bessines, 5.2 s^-1 ppmU^-1. Based on the validated MCNP model, several parametric studies have been performed. For instance, the rock density and chemical composition proved to have a limited impact on the calibration coefficient. However, gamma self-absorption in uranium leads to a nonlinear relationship between count rate and uranium concentration beyond approximately 1% uranium weight fraction, the underestimation of the uranium content reaching more than a factor of 2.5 for a 50% uranium weight fraction. Next steps will concern parametric studies with different tubing materials, diameters, and thicknesses, as well as different borehole filling fluids representative of real measurement conditions.

  17. Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.

    PubMed

    Wu, Panpan; Xia, Kewen; Yu, Hengyong

    2016-11-01

    Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer-aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC²SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). In particular, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC²SLLE, as well as the SLLE and LLE algorithms, are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates great potential to improve the performance of a CAD system for nodule detection using the proposed SC²SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Three-dimensional diffuse optical mammography with ultrasound localization in a human subject

    NASA Astrophysics Data System (ADS)

    Holboke, Monica J.; Tromberg, Bruce J.; Li, Xingde; Shah, Natasha; Fishkin, Joshua B.; Kidney, D.; Butler, J.; Chance, Britton; Yodh, Arjun G.

    2000-04-01

    We describe an approach that combines clinical ultrasound and photon migration techniques to enhance the sensitivity and information content of diffuse optical tomography. Measurements were performed on a postmenopausal woman with a single 1.8 × 0.9 cm malignant ductal carcinoma in situ approximately 7.4 mm beneath the skin surface (UCI IRB protocol 95-563). The ultrasound-derived information about tumor geometry enabled us to segment the breast tissue into tumor and background regions. Optical data was obtained with a multifrequency, multiwavelength hand-held frequency-domain photon migration backscattering probe. The optical properties of the tumor and background were then computed using the ultrasound-derived geometrical constraints. An iterative perturbative approach, using parallel processing, provided quantitative information about scattering and absorption simultaneously with the ability to incorporate and resolve complex boundary conditions and geometries. A three- to fourfold increase in the tumor absorption coefficient and nearly 50% reduction in the scattering coefficient relative to background was observed (λ = 674, 782, 803, and 849 nm). Calculations of the mean physiological parameters reveal fourfold greater tumor total hemoglobin concentration [Hbtot] than normal breast (67 μM vs 16 μM) and tumor hemoglobin oxygen saturation (SOx) values of 63% (vs 73% and 68% in the region surrounding the tumor and the opposite normal tissue, respectively). Comparison of semi-infinite to heterogeneous models shows superior tumor/background contrast for the latter in both absorption and scattering. Sensitivity studies assessing the impact of tumor size and refractive index assumptions, as well as scan direction, demonstrate modest effects on recovered properties.

  19. Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.

    PubMed

    Tripathy, R K; Dandapat, S

    2016-06-01

    The cardiac activities such as the depolarization and the relaxation of atria and ventricles are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies. It is a cumbersome task for medical experts to visually identify any subtle changes in the morphological features during 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, for classification of bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD), and healthy control (HC). The dual tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed, and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (PTB database) is used for testing of the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier have better performance for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90%, and 94.31% for the BBB, HMD, and MI classes. The sensitivity value of the proposed method for the MI class is compared with state-of-the-art techniques for multilead ECG.

  20. Sensitivity study for a remotely piloted microwave-powered sailplane used as a high-altitude observation

    NASA Technical Reports Server (NTRS)

    Turriziani, R. V.

    1979-01-01

    The sensitivity of several performance characteristics of a proposed design for a microwave-powered, remotely piloted, high-altitude sailplane to changes in independently varied design parameters was investigated. Results were expressed as variations from baseline values of range, final climb altitude and onboard storage of radiated energy. Calculated range decreased with increases in either gross weight or parasite drag coefficient; it also decreased with decreases in lift coefficient, propeller efficiency, or microwave beam density. The sensitivity trends for range and final climb altitude were very similar. The sensitivity trends for stored energy were reversed from those for range, except for decreasing microwave beam density. Some study results for single parameter variations were combined to estimate the effect of the simultaneous variation of several parameters: for two parameters, this appeared to give reasonably accurate results.

  1. First-order exchange coefficient coupling for simulating surface water-groundwater interactions: Parameter sensitivity and consistency with a physics-based approach

    USGS Publications Warehouse

    Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.

    2009-01-01

    Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a. the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary. The effort reported here examines the parameter sensitivity of simulated hydrologic response to the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately), combined with the ability to mimic actual physical processes, suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
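
    The first-order exchange (surface conductance) coupling itself reduces to a conductance times a head difference; a schematic sketch, with all parameter values hypothetical:

        def exchange_flux(h_surface, h_subsurface, k_sat, coupling_length):
            """First-order surface-subsurface exchange flux (conductance form).

            A large k_sat / coupling_length ratio approaches enforced pressure
            continuity; a very small ratio effectively decouples the two domains,
            as discussed in the abstract above.
            """
            conductance = k_sat / coupling_length
            return conductance * (h_surface - h_subsurface)   # positive = infiltration

        print(exchange_flux(h_surface=0.02, h_subsurface=-0.10,
                            k_sat=1e-5, coupling_length=0.01))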

  2. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease.

    PubMed

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  3. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  4. Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1993-01-01

    Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.

  5. Renormalization group theory outperforms other approaches in statistical comparison between upscaling techniques for porous media

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.

    2017-09-01

    Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly owing to its ability to address the tensorial nature of the coefficients. As we have implemented it, MG places a low computational demand, and accurate flow-rate estimates are obtained when using MG-upscaled permeabilities that approach or are beyond the percolation threshold.

  6. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  7. Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Liang, Dongfang; Xiao, Yang

    2018-04-01

    The traditional Alternating Direction Implicit (ADI) scheme has been proven to be incapable of modelling trans-critical flows. Its inherent lack of shock-capturing capability often results in spurious oscillations and computational instabilities. However, the ADI scheme is still widely adopted in flood modelling software, and various special treatments have been designed to stabilise the computation. Modification of the Boussinesq coefficient to adjust the amount of fluid inertia is a numerical treatment that allows the ADI scheme to be applicable to rapid flows. This study comprehensively examines the impact of this numerical treatment over a range of flow conditions. A shock-capturing TVD-MacCormack model is used to provide reference results. For unsteady flows over a frictionless bed, such as idealised dam-break floods, the results suggest that an increase in the value of the Boussinesq coefficient reduces the amplitude of the spurious oscillations. The opposite is observed for steady rapid flows over a frictional bed. Finally, a two-dimensional urban flooding phenomenon is presented, involving unsteady flow over a frictional bed. The results show that increasing the value of the Boussinesq coefficient can significantly reduce the numerical oscillations and reduce the predicted area of inundation. In order to stabilise the ADI computations, the Boussinesq coefficient could be judiciously raised or lowered depending on whether the rapid flow is steady or unsteady and whether the bed is frictional or frictionless. An increase in the Boussinesq coefficient generally leads to overprediction of the propagating speed of the flood wave over a frictionless bed, but the opposite is true when bed friction is significant.
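
    For reference, the Boussinesq (momentum correction) coefficient is the ratio of the true momentum flux to the flux computed from the mean velocity; a short numerical sketch for a one-dimensional velocity profile:

        import numpy as np

        def trap(f, y):
            """Trapezoidal integration on a (possibly non-uniform) grid."""
            return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

        def boussinesq_beta(u, y):
            """Momentum correction coefficient beta = ∫u² dA / (U² A) for a 1-D profile."""
            area = y[-1] - y[0]
            u_mean = trap(u, y) / area
            return trap(u**2, y) / (u_mean**2 * area)

        y = np.linspace(0.0, 1.0, 101)   # depth coordinate
        u = y**(1.0 / 7.0)               # one-seventh power-law velocity profile
        print(boussinesq_beta(u, y))     # slightly above 1 for this profile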

  8. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, currently a research hot spot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to bring problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
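
    The two fusion rules for coefficient arrays are compact; a minimal sketch, with the regional energy computed over whole bands for brevity rather than over local regions:

        import numpy as np

        def fuse_high(c_a, c_b):
            """Absolute-maximum selection rule for high-frequency coefficients."""
            return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)

        def fuse_low(c_a, c_b):
            """Energy-weighted average for low-frequency coefficients."""
            e_a, e_b = np.sum(c_a**2), np.sum(c_b**2)
            w_a = e_a / (e_a + e_b)
            return w_a * c_a + (1.0 - w_a) * c_b

        rng = np.random.default_rng(3)
        a, b = rng.standard_normal((2, 64, 64))   # stand-ins for two coefficient bands
        fused_hi = fuse_high(a, b)
        fused_lo = fuse_low(a, b)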

  9. Prediction of pharmacologically induced baroreflex sensitivity from local time and frequency domain indices of R-R interval and systolic blood pressure signals obtained during deep breathing.

    PubMed

    Arica, Sami; Firat Ince, N; Bozkurt, Abdi; Tewfik, Ahmed H; Birand, Ahmet

    2011-07-01

    Pharmacological measurement of baroreflex sensitivity (BRS) is widely accepted and used in clinical practice. Following the introduction of pharmacologically induced BRS (p-BRS), alternative assessment methods eliminating the use of drugs were in the center of interest of the cardiovascular research community. In this study we investigated whether p-BRS using phenylephrine injection can be predicted from non-pharmacological time and frequency domain indices computed from electrocardiogram (ECG) and blood pressure (BP) data acquired during deep breathing. In this scheme, ECG and BP data were recorded from 16 subjects in a two-phase experiment. In the first phase the subjects performed irregular deep breaths and in the second phase the subjects received phenylephrine injection. From the first phase of the experiment, a large pool of predictors describing the local characteristic of beat-to-beat interval tachogram (RR) and systolic blood pressure (SBP) were extracted in time and frequency domains. A subset of these indices was selected using twelve subjects with an exhaustive search fused with a leave one subject out cross validation procedure. The selected indices were used to predict the p-BRS on the remaining four test subjects. A multivariate regression was used in all prediction steps. The algorithm achieved best prediction accuracy with only two features extracted from the deep breathing data, one from the frequency and the other from the time domain. The normalized L2-norm error was computed as 22.9% and the correlation coefficient was 0.97 (p=0.03). These results suggest that the p-BRS can be estimated from non-pharmacological indices computed from ECG and invasive BP data related to deep breathing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Co-Ordination Compounds as Sensitizers for Percussion Cap Compositions

    DTIC Science & Technology

    1949-01-01

    Table III (partially recoverable from OCR) lists, for each mixture, the time elapsed (hours), the sensitivity (inches/½ lb.), and the ballistic-pendulum power coefficient together with the coefficient of variation (C. of V.) of the trace lengths ... dimension C = 50-52. The power coefficient is obtained by dividing the average trace length for 10 of the caps under trial by the average trace ... resulting in a high C. of V. The trace lengths as measured were as follows: 8.25, 8.30, 4.55, 10.65, 9.55, 9.00, 8.46, 8.42, 8.21, 8.34 inches.

  11. Measurement of the ferric diffusion coefficient in agarose and gelatine gels by utilization of the evolution of a radiation induced edge as reflected in relaxation rate images.

    PubMed

    Pedersen, T V; Olsen, D R; Skretting, A

    1997-08-01

    A method has been developed to determine the diffusion coefficients of ferric ions in ferrous sulphate doped gels. A radiation induced edge was created in the gel, and two spin-echo sequences were used to acquire a pair of images of the gel at different points of time. For each of these image pairs, a longitudinal relaxation rate image was derived. From profiles through these images, the standard deviations of the Gaussian functions that characterize diffusion were determined. These data provided the basis for the determination of the ferric diffusion coefficients by two different methods. Simulations indicate that the use of single spin-echo images in this procedure may in some cases lead to a significant underestimation of the diffusion coefficient. The technique was applied to different agarose and gelatine gels that were prepared, irradiated and imaged simultaneously. The results indicate that the diffusion coefficient is lower in a gelatine gel than in an agarose gel. Addition of xylenol orange to a gelatine gel lowers the diffusion coefficient from 1.45 to 0.81 mm2 h-1, at the cost of significantly lower R1 sensitivity. The addition of benzoic acid to the latter gel did not increase the R1 sensitivity.

  12. Development of kinetic analysis technique for PACS management and a screening examination in dynamic radiography

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    The purpose of this study was to develop a kinetic analysis method for PACS management and computer-aided diagnosis. We obtained dynamic chest radiographs (512x512, 8 bit, 4 fps, and 1344x1344, 12 bit, 3 fps) of five healthy volunteers during respiration twice using an I.I. system, and of one healthy volunteer using a dynamic FPD system. The optical flow of the images was obtained using a customized block-matching technique, resolved by direction, and transformed into RGB color. Density was determined by the summed pixel length of movement during the respiration phase. The resulting static image was defined as the "kinetic map". Patient collation was evaluated by template matching on the three colors. For the same person, the correlation values and the similarity coefficient defined in this study were statistically significantly high (P<0.01). We used an artificial neural network (ANN) to judge whether two maps belong to the same person. The five volunteers were divided into two groups: three volunteers provided the training signals and two the unknown signals. The correlation value and similarity coefficient were used as input signals, and the ANN was designed to output the probability that two maps belong to the same person. The average specificity obtained for the unknown signals was 98.2%. A kinetic map including an imitation tumor was used for simulation; the tumor was detected by temporal subtraction of kinetic maps, with superior sensitivity. Our analysis method was useful for risk management and computer-aided diagnosis.

  13. Sensitivity of Rabbit Ventricular Action Potential and Ca2+ Dynamics to Small Variations in Membrane Currents and Ion Diffusion Coefficients

    PubMed Central

    Lo, Yuan Hung; Peachey, Tom; Abramson, David; McCulloch, Andrew

    2013-01-01

    Little is known about how small variations in ionic currents and Ca2+ and Na+ diffusion coefficients impact action potential and Ca2+ dynamics in rabbit ventricular myocytes. We applied sensitivity analysis to quantify the sensitivity of the Shannon et al. model (Biophys. J., 2004) to 5%–10% changes in current conductances, channel distributions, and ion diffusion in rabbit ventricular cells. We found that action potential duration and Ca2+ peaks are highly sensitive to a 10% increase in L-type Ca2+ current; moderately influenced by 10% increases in the Na+-Ca2+ exchanger, Na+-K+ pump, rapid delayed and slow transient outward K+ currents, and Cl− background current; and insensitive to 10% increases in all other ionic currents and sarcoplasmic reticulum Ca2+ fluxes. Cell electrical activity is strongly affected by a 5% shift of L-type Ca2+ channels and the Na+-Ca2+ exchanger between the junctional and submembrane spaces, while redistribution of the Ca2+-activated Cl− channels has a modest effect. Small changes in the submembrane and cytosolic diffusion coefficients for Ca2+, but not in Na+ transfer, may alter myocyte contraction notably. Our studies highlight the need for more precise measurements and further extension and testing of the Shannon et al. model. Our results demonstrate the usefulness of sensitivity analysis for identifying specific knowledge gaps and controversies related to ventricular cell electrophysiology and Ca2+ signaling. PMID:24222910

  14. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset fits the polynomial basis exactly, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, under horizontal translation and scaling, the terms of the original polynomials become linear combinations of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. For a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms were selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the analytical solutions and the values approximated under discrete sampling are consistent. By computing a group of randomly generated coefficients, we contrasted the changes under different translation and scaling conditions: the larger the ratios, the larger the deviation of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation of less than 7% and an RMS deviation of less than 2%. The analytical expressions and the results computed under discrete sampling given in this paper for typical function bases under translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.
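
    The coefficient leakage under translation is easy to demonstrate numerically with numpy's Legendre module; the wavefront coefficients below are arbitrary illustrative values:

        import numpy as np
        from numpy.polynomial import legendre as L

        x = np.linspace(-1.0, 1.0, 201)
        coeffs = np.array([0.0, 0.3, -0.2, 0.5, 0.0, 0.1])  # arbitrary 1-D Legendre wavefront

        def refit_after_shift(coeffs, dx, degree=5):
            """Refit Legendre coefficients after a horizontal translation dx of the pupil."""
            w_shifted = L.legval(x + dx, coeffs)   # same wavefront, sampled on a shifted window
            return L.legfit(x, w_shifted, degree)

        for dx in (0.01, 0.04, 0.10):
            new = refit_after_shift(coeffs, dx)
            print(dx, np.round(new - coeffs, 4))   # leakage into other terms grows with dx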

  15. Flow immune photoacoustic sensor for real-time and fast sampling of trace gases

    NASA Astrophysics Data System (ADS)

    Petersen, Jan C.; Balslev-Harder, David; Pelevic, Nikola; Brusch, Anders; Persijn, Stefan; Lassen, Mikael

    2018-02-01

    A photoacoustic (PA) sensor for fast and real-time gas sensing is demonstrated. The PA cell has been designed for flow-noise immunity using computational fluid dynamics (CFD) analysis. PA measurements were conducted at different flow rates by exciting molecular C-H stretch vibrational bands of hexane (C6H14) in clean air at 2950 cm^-1 (3.38 μm) with a custom-made mid-infrared interband cascade laser (ICL). The PA sensor will help solve a major problem in a number of industries using compressed air by detecting oil contaminants in high-purity compressed air. We observe a (1σ, standard deviation) sensitivity of 0.4 ± 0.1 ppb (nmol/mol) for hexane in clean air at flow rates up to 2 L/min, corresponding to a normalized noise equivalent absorption (NNEA) coefficient of 2.5×10^-9 W cm^-1 Hz^-1/2, thus demonstrating high sensitivity and fast, real-time gas analysis. The PA sensor is not limited to molecules with C-H stretching modes, but can be tailored to measure any trace gas by simply changing the excitation wavelength (i.e., the laser source), making it useful for many different applications where fast and sensitive trace gas measurements are needed.

  16. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  17. Numerical and experimental evaluations of the flow past nested chevrons

    NASA Technical Reports Server (NTRS)

    Foss, J. F.; Foss, J. K.; Spalart, P. R.

    1989-01-01

    An effort is made to contribute to the development of CFD by relating the successful use of vortex dynamics in the computation of the pressure drop past a planar array of chevron-shaped obstructions. An ensemble of results was used to compute the loss coefficient k, stimulating an experimental program for the assessment of the measured loss coefficient for the same geometry. The most provocative result of this study has been the representation of kinetic energy production in terms of vorticity source terms.

  18. Calculation of laminar heating rates on three-dimensional configurations using the axisymmetric analogue

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II

    1980-01-01

    A theoretical method was developed for computing approximate laminar heating rates on three dimensional configurations at angle of attack. The method is based on the axisymmetric analogue which is used to reduce the three dimensional boundary layer equations along surface streamlines to an equivalent axisymmetric form by using the metric coefficient which describes streamline divergence (or convergence). The method was coupled with a three dimensional inviscid flow field program for computing surface streamline paths, metric coefficients, and boundary layer edge conditions.

  19. Evaluation of Rock Joint Coefficients

    NASA Astrophysics Data System (ADS)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    A computer method for the evaluation of rock joint coefficients is described and several applications are presented. The method is based on two absolute numerical indicators formed by means of Fourier replicas of rock joint profiles. The first indicator quantifies the vertical depth of the profiles and the second characterizes their waviness. The absolute indicators replace the formerly used relative indicators, which in some cases showed artificial behavior. This contribution focuses on practical computations testing the functionality of the newly introduced indicators.

  20. Passive Nosetip Technology (PANT) Program. Volume 17. Computer User’s Manual: Erosion Shape (EROS) Computer Code

    DTIC Science & Technology

    1974-12-01

    as a series of sections, each representing one pressure and each preceding the corresponding pressure group of the surface thermochemistry deck...groups together make up the surface thermochemistry deck. Within each pressure group the transfer coefficient values will be ordered. Within each transfer...values in each pressure group may not exceed 5 but may be only 1. If no kinetics effects are to be considered, a transfer coefficient of zero is acceptable

  1. Numerical approaches to combustion modeling. Progress in Astronautics and Aeronautics. Vol. 135

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oran, E.S.; Boris, J.P.

    1991-01-01

    Various papers on numerical approaches to combustion modeling are presented. The topics addressed include: ab initio quantum chemistry for combustion; rate coefficient calculations for combustion modeling; numerical modeling of combustion of complex hydrocarbons; combustion kinetics and sensitivity analysis computations; reduction of chemical reaction models; length scales in laminar and turbulent flames; numerical modeling of laminar diffusion flames; laminar flames in premixed gases; spectral simulations of turbulent reacting flows; vortex simulation of reacting shear flow; combustion modeling using PDF methods. Also considered are: supersonic reacting internal flow fields; studies of detonation initiation, propagation, and quenching; numerical modeling of heterogeneous detonations; deflagration-to-detonation transition in reactive granular materials; toward a microscopic theory of detonations in energetic crystals; overview of spray modeling; liquid drop behavior in dense and dilute clusters; spray combustion in idealized configurations: parallel drop streams; comparisons of deterministic and stochastic computations of drop collisions in dense sprays; ignition and flame spread across solid fuels; numerical study of pulse combustor dynamics; mathematical modeling of enclosure fires; nuclear systems.

  2. HEALER: homomorphic computation of ExAct Logistic rEgRession for secure rare disease variants analysis in GWAS

    PubMed Central

    Wang, Shuang; Zhang, Yuchen; Dai, Wenrui; Lauter, Kristin; Kim, Miran; Tang, Yuzhe; Xiong, Hongkai; Jiang, Xiaoqian

    2016-01-01

    Motivation: Genome-wide association studies (GWAS) have been widely used in discovering the association between genotypes and phenotypes. Human genome data contain valuable but highly sensitive information. Unprotected disclosure of such information might put individuals' privacy at risk. It is important to protect human genome data. Exact logistic regression is a bias-reduction method based on a penalized likelihood to discover rare variants that are associated with disease susceptibility. We propose the HEALER framework to facilitate secure rare variants analysis with a small sample size. Results: We target the algorithm design, aiming to reduce the computational and storage costs of learning a homomorphic exact logistic regression model (i.e., evaluating P-values of coefficients), where the circuit depth is proportional to the logarithmic scale of the data size. We evaluate the algorithm performance using rare Kawasaki Disease datasets. Availability and implementation: Download HEALER at http://research.ucsd-dbmi.org/HEALER/ Contact: shw070@ucsd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26446135

  3. Recursive formulas for the partial fraction expansion of a rational function with multiple poles.

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.

    1973-01-01

    The coefficients in the partial fraction expansion considered are given by Heaviside's formula. Evaluating the coefficients involves differentiating a quotient of two polynomials. A simplified approach for the evaluation of the coefficients is discussed. Leibniz's rule is applied and a recurrence formula is derived. A coefficient can also be determined from a system of simultaneous equations. Practical methods for performing the computational operations involved in both approaches are considered.
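
    To make the starting point concrete: for a rational function F(s) with a pole p of multiplicity m, Heaviside's formula gives the coefficient of 1/(s-p)^(m-j) as the j-th derivative of (s-p)^m F(s) at s = p, divided by j!. The sketch below (a hedged Python/SymPy illustration on a made-up example, not the paper's recurrence) evaluates exactly that quantity and cross-checks it against a full partial fraction expansion.

      import sympy as sp

      s = sp.symbols('s')
      F = (3*s + 1) / ((s - 2)**3 * (s + 1))    # example rational function
      p, m = 2, 3                               # pole under study and its multiplicity

      Q = sp.simplify((s - p)**m * F)           # pole removed
      coeffs = [sp.diff(Q, s, j).subs(s, p) / sp.factorial(j) for j in range(m)]
      print(coeffs)                             # c_j multiplies 1/(s - p)**(m - j)

      print(sp.apart(F, s))                     # cross-check via full expansion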

  4. Symbolic computation of the Birkhoff normal form in the problem of stability of the triangular libration points

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. I.

    2008-05-01

    The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.

  5. Development and Validation of a New Fallout Transport Method Using Variable Spectral Winds

    NASA Astrophysics Data System (ADS)

    Hopkins, Arthur Thomas

    A new method has been developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds, to the ground. The line connecting the particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud. Further validation was performed by comparing computed and actual trajectories of a high explosive dust cloud (DIRECT COURSE). Using an error propagation formula, it was determined that uncertainties in spectral wind components produce less than four percent of the total dose rate variance. In summary, this research demonstrated the feasibility of using spectral coefficients for fallout transport calculations, developed a two-step smearing model to treat variable winds, and showed that uncertainties in spectral winds do not contribute significantly to the error in computed dose rate.

  6. Evaluation of the normal-to-diseased apparent diffusion coefficient ratio as an indicator of prostate cancer aggressiveness.

    PubMed

    Lebovici, Andrei; Sfrangeu, Silviu A; Feier, Diana; Caraiani, Cosmin; Lucan, Ciprian; Suciu, Mihai; Elec, Florin; Iacob, Gheorghita; Buruian, Mircea

    2014-05-10

    We tested the feasibility of a simple method for assessment of prostate cancer (PCa) aggressiveness using diffusion-weighted magnetic resonance imaging (MRI) to calculate apparent diffusion coefficient (ADC) ratios between prostate cancer and healthy prostatic tissue. The requirement for institutional review board approval was waived. A set of 20 standardized core transperineal saturation biopsy specimens served as the reference standard for placement of regions of interest on ADC maps in tumorous and normal prostatic tissue of 22 men with PCa (median Gleason score: 7; range, 6-9). A total of 128 positive sectors were included for evaluation. Two diagnostic ratios were computed between tumor ADCs and normal sector ADCs: the ADC peripheral ratio (the ratio between tumor ADC and normal peripheral zone tissue, ADC-PR), and the ADC central ratio (the ratio between tumor ADC and normal central zone tissue, ADC-CR). The performance of the two ratios in detecting high-risk tumor foci (Gleason 8 and 9) was assessed using the area under the receiver operating characteristic curve (AUC). Both ADC ratios presented significantly lower values in high-risk tumors (0.48 ± 0.13 for ADC-CR and 0.40 ± 0.09 for ADC-PR) compared with low-risk tumors (0.66 ± 0.17 for ADC-CR and 0.54 ± 0.09 for ADC-PR) (p < 0.001) and had better diagnostic performance (ADC-CR AUC = 0.77, sensitivity = 82.2%, specificity = 66.7% and ADC-PR AUC = 0.90, sensitivity = 93.7%, specificity = 80%) than stand-alone tumor ADCs (AUC of 0.75, sensitivity = 72.7%, specificity = 70.6%) for identifying high-risk lesions. The ADC ratio as an intrapatient-normalized diagnostic tool may be better in detecting high-grade lesions compared with analysis based on tumor ADCs alone, and may reduce the rate of biopsies.
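
    A hedged sketch of the ratio computation and its ROC screening, using synthetic ADC values (in 10^-3 mm^2/s) rather than the study's data:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      tumor_adc  = np.array([0.65, 0.70, 0.95, 1.05, 0.60, 1.10])
      normal_pz  = np.array([1.60, 1.55, 1.65, 1.70, 1.50, 1.62])  # peripheral zone ADCs
      high_grade = np.array([1, 1, 0, 0, 1, 0])                    # Gleason 8-9 foci

      adc_pr = tumor_adc / normal_pz        # intrapatient-normalized ratio (ADC-PR)
      # Lower ratios indicate high-risk foci, so score with the negated ratio.
      print("AUC =", roc_auc_score(high_grade, -adc_pr))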

  7. Reports of investigations on: Derivation of an infinite-dilution activity coefficient model and application to two-component vapor/liquid equilibria data: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roper, V.P.; Kobayashi, R.

    1988-02-01

    Infinite-dilution fugacity coefficients were obtained for the system fluorene/phenanthrene at thirteen temperatures by fitting total pressure across the entire mole fraction range by a computer routine. A thermodynamically consistent routine, which allowed for both positive and negative pressure deviations from the ideal values, was used to correlate data over the full mole fraction range from 0 to 1. The four-suffix Margules activity coefficient model without modification essentially served this purpose since total pressures and total pressure derivatives with respect to mole fraction were negligible compared to pressure measurement precision. The water/ethanol system and binary systems comprised of aniline, chlorobenzene, acetonitrile, and other polar compounds were fit for total pressure across the entire mole fraction range for binary vapor-liquid equilibria (VLE) using the rigorous, thermodynamically consistent Gibbs-Duhem relation derived by Ibl and Dodge. Data correlation was performed using a computer least squares procedure. Infinite-dilution fugacity coefficients were obtained using a modified Margules activity coefficient model.

  8. Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-O wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by a time history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.

  9. A fast collocation method for a variable-coefficient nonlocal diffusion model

    NASA Astrophysics Data System (ADS)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3), required by a commonly used direct solver, to O(N log N) per iteration, and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
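
    The authors' scheme itself is not reproduced here, but the generic structure-exploiting idea behind such O(N log N) costs can be illustrated: a Toeplitz (translation-invariant) block can be applied to a vector by embedding it in a circulant matrix and using the FFT, instead of the O(N^2) dense product. A minimal sketch, with the variable-coefficient handling omitted:

      import numpy as np

      def toeplitz_matvec(first_col, first_row, x):
          # y = T x for a Toeplitz T given by its first column and first row,
          # via circulant embedding and FFT-based periodic convolution.
          n = len(x)
          c = np.concatenate([first_col, [0.0], first_row[:0:-1]])  # size-2n circulant
          y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
          return y[:n].real

      # Check against the dense O(N^2) product.
      rng = np.random.default_rng(0)
      n = 256
      col, row = rng.standard_normal(n), rng.standard_normal(n)
      row[0] = col[0]
      T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                    for i in range(n)])
      x = rng.standard_normal(n)
      assert np.allclose(T @ x, toeplitz_matvec(col, row, x))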

  10. Einstein coefficients and oscillator strengths for low lying state of CO molecules

    NASA Astrophysics Data System (ADS)

    Swer, S.; Syiemiong, A.; Ram, M.; Jha, A. K.; Saxena, A.

    2018-04-01

    Einstein coefficients and oscillator strengths for low-lying states of the CO molecule have been calculated using Le Roy's LEVEL program and the MOLCAS ab initio code. The potential energy functions for these states were computed as Morse potentials from the spectroscopic constants, and the electronic transition dipole moments of the transitions were taken from a recent ab initio study. Both were used in the LEVEL program to produce transition dipole matrix elements for a large number of bands. The Einstein coefficients were also used to compute the radiative lifetimes of several vibrational levels, and the calculated values are compared with other theoretical results and experimental values.

  11. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    More than 11 years of Radiometric Calibration Coefficients (RCC) derived from onboard and vicarious calibrations are compared, together with a cross-comparison to the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of RCC degradation is identified through FTA. The test-site dependency of vicarious calibration is quite obvious, because the vicarious-calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results from cross calibration with MODIS support the finding that vicarious calibration is significantly sensitive to surface reflectance measurements.

  12. A new method for gravity field recovery based on frequency analysis of spherical harmonics

    NASA Astrophysics Data System (ADS)

    Cai, Lin; Zhou, Zebing

    2017-04-01

    Existing methods for gravity field recovery are mostly based on the space-wise and time-wise approaches, whose core processes are constructing the observation equations and solving them by the least-squares method; least squares, it should be noted, yields only an approximation. On the other hand, for 1-D data (time series) analysis we can directly and precisely obtain the harmonic coefficients by computing the Fast Fourier Transform (FFT). The question of whether we can likewise directly and precisely obtain the spherical harmonic coefficients by computing the 2-D FFT of satellite gravity mission measurements is therefore of great significance, since this may lead to a new understanding of the signal components of the gravity field and allow it to be determined quickly by taking advantage of the FFT. As in the 1-D case, the 2-D FFT of satellite measurements can be computed rapidly. If we can determine the relationship between spherical harmonics and 2-D Fourier frequencies, and the transfer function from measurements to spherical harmonic coefficients, the question above can be answered. The objective of this research project is thus to establish a new method based on frequency analysis of spherical harmonics, which directly computes the spherical harmonic coefficients of the gravity field and differs from recovery by least squares. In 1-D FFT there is a one-to-one correspondence between the frequency spectrum and the time series, and the 2-D FFT has a similar relationship. Any degree or order (higher than one) of a spherical harmonic function involves multiple frequencies, and these frequencies may be aliased; fortunately, the elements and ratios of these frequencies can be determined, so the coefficients of the spherical harmonic function can be computed from the 2-D FFT. This relationship can be written as equations and is equivalent to a matrix, which is fixed and can be derived in advance. The relationship has now been determined. Some preliminary results, which compute only low-degree spherical harmonics, indicate that the difference between the input (EGM2008) and the output (recovered coefficients) is smaller than 5E-17, while the machine precision of the computing software (Matlab) is 2.2204E-16.
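
    The 1-D starting point of the argument is easy to demonstrate: for a band-limited series sampled on a uniform grid, the FFT returns the harmonic coefficients exactly, with no least-squares step. A small sketch:

      import numpy as np

      N = 64
      t = np.arange(N) * 2*np.pi / N
      signal = 1.5 + 2.0*np.cos(3*t) + 0.5*np.sin(7*t)

      c = np.fft.fft(signal) / N                # normalized Fourier coefficients
      print(round(c[0].real, 12))               # 1.5, the mean term
      print(round(2*c[3].real, 12))             # 2.0, the cos(3t) amplitude
      print(round(-2*c[7].imag, 12))            # 0.5, the sin(7t) amplitude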

  13. Evaluation of Fourier transform coefficients for the diagnosis of rheumatoid arthritis from diffuse optical tomography images

    NASA Astrophysics Data System (ADS)

    Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.

    2013-03-01

    We apply the Fourier Transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operating characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers than absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, versus an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
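
    The screening statistic is simple to state: the Youden index is J = sensitivity + specificity - 1, i.e., the maximum of TPR - FPR over thresholds. A hedged sketch with made-up scores and labels (not the study's features):

      import numpy as np
      from sklearn.metrics import roc_curve

      labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
      scores = np.array([0.1, 0.3, 0.35, 0.8, 0.4, 0.7, 0.75, 0.9, 0.95])

      fpr, tpr, thresholds = roc_curve(labels, scores)
      j = tpr - fpr                              # Youden index at each threshold
      best = int(np.argmax(j))
      print(f"Youden index {j[best]:.2f} at threshold {thresholds[best]:.2f}")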

  14. Computational flow predictions for hypersonic drag devices

    NASA Technical Reports Server (NTRS)

    Tokarcik, Susan; Venkatapathy, Ethiraj; Candler, Graham; Palmer, Grant

    1991-01-01

    The effectiveness of two types of hypersonic decelerators are computationally examined: mechanically deployable flares and inflatable ballutes. CFD is used to predict the flowfield around a solid rocket motor (SRM) with a deployed decelerator. The computations are performed with an ideal gas solver using an effective specific heat ratio of 1.15. The surface pressure coefficients, the drag, and the extent of the compression corner separation zone predicted by the ideal gas solver compare well with those predicted by the nonequilibrium solver. The ideal gas solver is computationally inexpensive and is shown to be well suited for preliminary design studies. The computed solutions are used to determine the size and shape of the decelerator that are required to achieve a drag coefficient of 5 in order to assure that the SRM will splash down in the Pacific Ocean. Heat transfer rates to the SRM and the decelerators are predicted to estimate the amount of thermal protection required.

  15. Evaluation of alternative approaches for measuring n-octanol/water partition coefficients for methodologically challenging chemicals (MCCs)

    EPA Science Inventory

    Measurements of n-octanol/water partition coefficients (KOW) for highly hydrophobic chemicals, i.e., greater than 10^8, are extremely difficult and are rarely made, in part because the vanishingly small concentrations in the water phase require extraordinary analytical sensitivity...

  16. Association between power law coefficients of the anatomical noise power spectrum and lesion detectability in breast imaging modalities

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Abbey, Craig K.; Boone, John M.

    2013-03-01

    Previous research has demonstrated that a parameter extracted from a power-function fit to the anatomical noise power spectrum, β, may be predictive of breast mass lesion detectability in x-ray based medical images of the breast. In this investigation, the value of β was compared with a number of other more widely used parameters, in order to determine the relationship between β and these other parameters. This study made use of breast CT data sets acquired on two breast CT systems developed in our laboratory. A total of 185 breast data sets in 183 women were used, and only the unaffected breast was used (where no lesion was suspected). The anatomical noise power spectrum, computed from two-dimensional regions of interest (ROIs), was fit to a power function (NPS(f) = α f^(-β)), and the exponent parameter (β) was determined using log/log linear regression. Breast density for each of the volume data sets was characterized in previous work. The breast CT data sets analyzed in this study were part of a previous study which evaluated receiver operating characteristic (ROC) curve performance using simulated spherical lesions and a pre-whitened matched filter computer observer. This ROC information was used to compute the detectability index as well as the sensitivity at 95% specificity. The fractal dimension was computed from the same ROIs used for the assessment of β. The value of β was compared to breast density, detectability index, sensitivity, and fractal dimension, and the slopes of these relationships were tested for statistically significant departure from zero. A statistically significant non-zero slope was considered a positive association in this investigation. All comparisons between β and breast density, detectability index, sensitivity at 95% specificity, and fractal dimension demonstrated statistically significant associations, with p < 0.001 in all cases. The value of β was also found to be associated with patient age and breast diameter, parameters both related to breast density. In all of these associations, lower values of β were associated with increased breast cancer detection performance. Specifically, lower values of β were associated with lower breast density, higher detectability index, higher sensitivity, and lower fractal dimension values. While causality was not and probably cannot be demonstrated, the strong, statistically significant association between the β metric and the other more widely used parameters suggests that β may be considered a surrogate measure for breast cancer detection performance. These findings are specific to breast parenchymal patterns and mass lesions only.
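
    As a concrete illustration of the fitting step only (synthetic data, not breast CT measurements), the exponent β can be extracted by linear regression in log/log coordinates:

      import numpy as np

      f = np.linspace(0.05, 1.0, 50)                 # spatial frequency samples
      true_alpha, true_beta = 2.0e-3, 2.8
      rng = np.random.default_rng(1)
      nps = true_alpha * f**(-true_beta) * rng.lognormal(0.0, 0.05, f.size)

      slope, intercept = np.polyfit(np.log(f), np.log(nps), 1)
      beta, alpha = -slope, np.exp(intercept)        # NPS(f) = alpha * f**(-beta)
      print(f"beta = {beta:.2f}, alpha = {alpha:.2e}")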

  17. An Inviscid Computational Study of the Space Shuttle Orbiter and Several Damaged Configurations

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Merski, N. Ronald (Technical Monitor)

    2004-01-01

    Inviscid aerodynamic characteristics of the Space Shuttle Orbiter were computed in support of the Columbia Accident Investigation. The unstructured grid software FELISA was used, and computations were done using freestream conditions corresponding to those in the NASA Langley 20-Inch Mach 6 CF4 tunnel test section. The angle of attack was held constant at 40 degrees. The baseline (undamaged) configuration and a large number of damaged configurations of the Orbiter were studied. Most of the computations were done on a half model. However, one set of computations was done using the full model to study the effect of sideslip. The differences in the aerodynamic coefficients for the damaged and the baseline configurations were computed. Simultaneously with the computations reported here, tests were being done on a scale model of the Orbiter in the 20-Inch Mach 6 CF4 tunnel to measure the deltas. The present computations complemented the CF4 tunnel test, and provided aerodynamic coefficients of the Orbiter as well as its components. Further, they also provided details of the flow field.

  18. The pressure sensitivity of wrinkled B-doped nanocrystalline diamond membranes

    PubMed Central

    Drijkoningen, S.; Janssens, S. D.; Pobedinskas, P.; Koizumi, S.; Van Bael, M. K.; Haenen, K.

    2016-01-01

    Nanocrystalline diamond (NCD) membranes are promising candidates for use as sensitive pressure sensors. NCD membranes are able to withstand harsh conditions and are easily fabricated on glass. In this study the sensitivity of heavily boron doped NCD (B:NCD) pressure sensors is evaluated with respect to different types of supporting glass substrates, doping levels and membrane sizes. Higher pressure sensing sensitivities are obtained for membranes on Corning Eagle 2000 glass, which have a better match in thermal expansion coefficient with diamond compared to those on Schott AF45 glass. In addition, it is shown that larger and more heavily doped membranes are more sensitive. After fabrication of the membranes, the stress in the B:NCD films is released by the emergence of wrinkles. A better match between the thermal expansion coefficient of the NCD layer and the underlying substrate results in less stress and a smaller amount of wrinkles as confirmed by Raman spectroscopy and 3D surface imaging. PMID:27767048

  19. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
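
    For orientation, the quantity being computed can be shown with a plain recursive kinship evaluation (a hedged sketch on a toy pedigree, not the paper's compact path-encoding scheme): the inbreeding coefficient of an individual equals the kinship coefficient of its parents.

      from functools import lru_cache

      # pedigree[individual] = (sire, dam); founders map to (None, None)
      pedigree = {
          'A': (None, None), 'B': (None, None),
          'C': ('A', 'B'), 'D': ('A', 'B'),
          'X': ('C', 'D'),                       # offspring of a full-sib mating
      }

      @lru_cache(maxsize=None)
      def depth(x):
          if x is None:
              return -1
          s, d = pedigree[x]
          return 1 + max(depth(s), depth(d))

      @lru_cache(maxsize=None)
      def kinship(a, b):
          if a is None or b is None:
              return 0.0
          if a == b:
              s, d = pedigree[a]
              return 0.5 * (1.0 + kinship(s, d))
          if depth(a) < depth(b):                # recurse on the younger individual
              a, b = b, a                        # so we never step past an ancestor
          s, d = pedigree[a]
          return 0.5 * (kinship(s, b) + kinship(d, b))

      def inbreeding(x):
          return kinship(*pedigree[x])

      print(inbreeding('X'))                     # 0.25 for full-sib offspring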

  20. DCE-MRI-Derived Volume Transfer Constant (Ktrans) and DWI Apparent Diffusion Coefficient as Predictive Markers of Short- and Long-Term Efficacy of Chemoradiotherapy in Patients With Esophageal Cancer.

    PubMed

    Ye, Zhi-Min; Dai, Shu-Jun; Yan, Feng-Qin; Wang, Lei; Fang, Jun; Fu, Zhen-Fu; Wang, Yue-Zhen

    2018-01-01

    This study aimed to evaluate both the short- and long-term efficacies of chemoradiotherapy in the treatment of esophageal cancer, through the use of the dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI)-derived volume transfer constant (Ktrans) and the diffusion-weighted imaging (DWI)-derived apparent diffusion coefficient (ADC). Patients with esophageal cancer were assigned to sensitive and resistant groups based on their response to chemoradiotherapy. DCE-MRI and DWI were used to measure the volume transfer constant and apparent diffusion coefficient, while computed tomography was used to calculate the tumor size reduction rate. Pearson correlation analyses were conducted to analyze the correlation between the volume transfer constant, the apparent diffusion coefficient, and the tumor size reduction rate. A receiver operating characteristic curve was constructed to analyze the short-term predictive value of the volume transfer constant and apparent diffusion coefficient, while a Kaplan-Meier curve was employed for survival analysis. A Cox proportional hazards model was used for the risk factors for the prognosis of patients with esophageal cancer. Our results indicated reduced levels of the volume transfer constant, while increased levels were observed in ADCmin, ADCmean, and ADCmax following chemoradiotherapy. A negative correlation was determined between ADCmin, ADCmean, and ADCmax and the tumor size reduction rate prior to chemoradiotherapy, whereas a positive correlation was uncovered post-chemoradiotherapy. The volume transfer constant was positively correlated with the tumor size reduction rate both before and after chemoradiotherapy. The 5-year survival rate of patients with esophageal cancer having high ADCmin, ADCmean, ADCmax, and volume transfer constant before chemoradiotherapy was greater than that of patients with respectively lower values. According to the Cox proportional hazards model, ADCmean, clinical stage, degree of differentiation, and tumor stage were all confirmed as independent risk factors for the prognosis of patients with esophageal cancer. The findings of this study provide evidence suggesting the volume transfer constant and apparent diffusion coefficient as tools for evaluating both the short- and long-term efficacies of chemoradiotherapy in esophageal cancer treatment.

  1. A Comparative Study of the Hypoxia PET Tracers [18F]HX4, [18F]FAZA, and [18F]FMISO in a Preclinical Tumor Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeters, Sarah G.J.A., E-mail: sarah.peeters@maastrichtuniversity.nl; Zegers, Catharina M.L.; Lieuwes, Natasja G.

    Purpose: Several individual clinical and preclinical studies have shown the possibility of evaluating tumor hypoxia by using noninvasive positron emission tomography (PET). The current study compared 3 hypoxia PET tracers frequently used in the clinic, [18F]FMISO, [18F]FAZA, and [18F]HX4, in a preclinical tumor model. Tracer uptake was evaluated for the optimal time point for imaging, tumor-to-blood ratios (TBR), spatial reproducibility, and sensitivity to oxygen modification. Methods and Materials: PET/computed tomography (CT) images of rhabdomyosarcoma R1-bearing WAG/Rij rats were acquired at multiple time points post injection (p.i.) with one of the hypoxia tracers. TBR values were calculated, and reproducibility was investigated by voxel-to-voxel analysis, represented as correlation coefficients (R) or the Dice similarity coefficient of the high-uptake volume. Tumor oxygen modifications were induced by exposure to either carbogen/nicotinamide treatment or 7% oxygen breathing. Results: TBR was stabilized and maximal at 2 hours p.i. for [18F]FAZA (4.0 ± 0.5) and at 3 hours p.i. for [18F]HX4 (7.2 ± 0.7), whereas [18F]FMISO showed a constantly increasing TBR (9.0 ± 0.8 at 6 hours p.i.). High spatial reproducibility was observed by voxel-to-voxel comparisons and Dice similarity coefficient calculations on the 30% highest uptake volume for both [18F]FMISO (R = 0.86; Dice coefficient = 0.76) and [18F]HX4 (R = 0.76; Dice coefficient = 0.70), whereas [18F]FAZA was less reproducible (R = 0.52; Dice coefficient = 0.49). Modifying the hypoxic fraction resulted in enhanced mean standardized uptake values for both [18F]HX4 and [18F]FAZA upon 7% oxygen breathing. Only [18F]FMISO uptake was found to be reversible upon exposure to nicotinamide and carbogen. Conclusions: This study indicates that each tracer has its own strengths and, depending on the question to be answered, a different tracer can be put forward.

  2. Uncertainty Analysis for a Jet Flap Airfoil

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Cruz, Josue

    2006-01-01

    An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental/computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in the independent variables, given just the input data points from the selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
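
    A hedged sketch of the LSD computation itself, using the textbook balanced-design formula (the MSE, sample size, and error degrees of freedom below are illustrative assumptions, not values from the study):

      import math
      from scipy import stats

      mse, n_per_level, df_error, alpha = 4.0e-6, 8, 21, 0.05

      t_crit = stats.t.ppf(1 - alpha/2, df_error)          # two-sided critical t
      lsd = t_crit * math.sqrt(2 * mse / n_per_level)
      print(f"LSD = {lsd:.5f}")  # mean differences below this are unresolvable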

  3. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.

  4. Effect of Metal Artifacts on Detection of Vertical Root Fractures Using Two Cone Beam Computed Tomography Systems.

    PubMed

    Safi, Yaser; Aghdasi, Mohammad Mehdi; Ezoddini-Ardakani, Fatemeh; Beiraghi, Samira; Vasegh, Zahra

    2015-01-01

    Vertical root fracture (VRF) is common in endodontically treated teeth. Conventional and digital radiographies have limitations for the detection of VRFs. Cone-beam computed tomography (CBCT) offers greater detection accuracy for VRFs in comparison with conventional radiography. This study compared the effects of metal artifacts on the detection of VRFs using two CBCT systems. Eighty extracted premolars were selected and sectioned at the level of the cementoenamel junction (CEJ). After preparation, root canals were filled with gutta-percha. Subsequently, two thirds of the root fillings were removed for post space preparation and a custom-made post was cemented into each canal. The teeth were randomly divided into two groups (n=40). In the test group, root fracture was created with an Instron universal testing machine. The control teeth remained intact. CBCT scans of all teeth were obtained with either a New Tom VGI or a Soredex Scanora 3D system. Three observers analyzed the images for detection of VRF. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for VRF detection and the percentage of probable cases were calculated for each imaging system and compared using non-parametric tests, considering the non-normal distribution of the data. The inter-observer reproducibility was calculated using the weighted kappa coefficient. There were no statistically significant differences in sensitivity, specificity, PPV and NPV between the two CBCT systems. The effect of metal artifacts on VRF detection was not significantly different between the two CBCT systems.

  5. Osteoporosis prediction from the mandible using cone-beam computed tomography

    PubMed Central

    Al Haffar, Iyad; Khattab, Razan

    2014-01-01

    Purpose This study aimed to evaluate the use of dental cone-beam computed tomography (CBCT) in the diagnosis of osteoporosis among menopausal and postmenopausal women by using only a CBCT viewer program. Materials and Methods Thirty-eight menopausal and postmenopausal women who underwent dual-energy X-ray absorptiometry (DXA) examination for the hip and lumbar vertebrae were scanned using CBCT (field of view: 13 cm×15 cm; voxel size: 0.25 mm). Slices from the body of the mandible as well as the ramus were selected, and CBCT-derived variables, such as radiographic density (RD), were calculated as gray values. Pearson's correlation, one-way analysis of variance (ANOVA), and accuracy (sensitivity and specificity) evaluation based on linear and logistic regression were performed to choose the variable that best correlated with the lumbar and femoral neck T-scores. Results RD of the whole bone area of the mandible was the variable that best correlated with and predicted both the femoral neck and the lumbar vertebrae T-scores; the Pearson's correlation coefficients were 0.5/0.6 (p value=0.037/0.009). The sensitivity, specificity, and accuracy based on the logistic regression were 50%, 88.9%, and 78.4%, respectively, for the femoral neck, and 46.2%, 91.3%, and 75%, respectively, for the lumbar vertebrae. Conclusion Lumbar vertebrae and femoral neck osteoporosis can be predicted with high accuracy from the RD value of the body of the mandible by using a CBCT viewer program. PMID:25473633

  6. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history.

    PubMed

    Kurth, Ann E; Martin, Diane P; Golden, Matthew R; Weiss, Noel S; Heagerty, Patrick J; Spielberg, Freya; Handsfield, H Hunter; Holmes, King K

    2004-12-01

    The objective of this study was to compare reporting between audio computer-assisted self-interview (ACASI) and clinician-administered sexual histories. The goal of this study was to explore the usefulness of ACASI in sexually transmitted disease (STD) clinics. The authors conducted a cross-sectional study of ACASI followed by a clinician history (CH) among 609 patients (52% male, 59% white) in an urban, public STD clinic. We assessed completeness of data, item prevalence, and report concordance for sexual history and patient characteristic variables classified as socially neutral (n=5), sensitive (n=11), or rewarded (n=4). Women more often reported same-sex behavior (19.6% vs. 11.5%), oral sex (67.3% vs. 50.0%), transactional sex (20.7% vs. 9.8%), and amphetamine use (4.9% vs. 0.7%) by ACASI than during the CH, but were less likely to report STD symptoms (55.4% vs. 63.7%; all McNemar chi-squared P values <0.003). Men's reporting was similar between interviews, except for ever having had sex with another man (36.9% ACASI vs. 28.7% CH, P <0.001). Reporting agreement as measured by kappas and intraclass correlation coefficients was only moderate for socially sensitive and rewarded variables but was substantial or almost perfect for socially neutral variables. ACASI data tended to be more complete. ACASI was acceptable to 89% of participants. ACASI sexual histories may help to identify persons at risk for STDs.

  7. Computational flow predictions for hypersonic drag devices

    NASA Technical Reports Server (NTRS)

    Tokarcik, Susan A.; Venkatapathy, Ethiraj

    1993-01-01

    The effectiveness of two types of hypersonic decelerators is examined: mechanically deployable flares and inflatable ballutes. Computational fluid dynamics (CFD) is used to predict the flowfield around a solid rocket motor (SRM) with a deployed decelerator. The computations are performed with an ideal gas solver using an effective specific heat ratio of 1.15. The results from the ideal gas solver are compared to computational results from a thermochemical nonequilibrium solver. The surface pressure coefficient, the drag, and the extent of the compression corner separation zone predicted by the ideal gas solver compare well with those predicted by the nonequilibrium solver. The ideal gas solver is computationally inexpensive and is shown to be well suited for preliminary design studies. The computed solutions are used to determine the size and shape of the decelerator that are required to achieve a drag coefficient of 5. Heat transfer rates to the SRM and the decelerators are predicted to estimate the amount of thermal protection required.

  8. The Measurement of Aerosol Optical Properties using Continuous Wave Cavity Ring-Down Techniques

    NASA Technical Reports Server (NTRS)

    Strawa, Anthony W.; Castaneda, Rene; Owano, Thomas; Baer, Douglas S.; Paldus, Barbara A.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    Large uncertainties in the effects that aerosols have on climate require improved in situ measurements of extinction coefficient and single-scattering albedo. This paper describes the use of continuous wave cavity ring-down (CW-CRD) technology to address this problem. The innovations in this instrument are the use of CW-CRD to measure aerosol extinction coefficient, the simultaneous measurement of scattering coefficient, and small size suitable for a wide range of aircraft applications. Our prototype instrument measures extinction and scattering coefficient at 690 nm and extinction coefficient at 1550 nm. The instrument itself is small (60 x 48 x 15 cm) and relatively insensitive to vibrations. The prototype instrument has been tested in our lab and used in the field. While improvements in performance are needed, the prototype has been shown to make accurate and sensitive measurements of extinction and scattering coefficients. Combining these two parameters, one can obtain the single-scattering albedo and absorption coefficient, both important aerosol properties. The use of two wavelengths also allows us to obtain a quantitative idea of the size of the aerosol through the Angstrom exponent. Minimum sensitivity of the prototype instrument is 1.5×10^-6 m^-1 (1.5/Mm). Validation of the measurement of extinction coefficient has been accomplished by comparing the measurement of calibration spheres with Mie calculations. This instrument and its successors have potential to help reduce uncertainty currently associated with aerosol optical properties and their spatial and temporal variation. Possible applications include studies of visibility, climate forcing by aerosol, and the validation of aerosol retrieval schemes from satellite data.
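
    The two-wavelength size estimate mentioned above reduces to one line of arithmetic; a sketch with illustrative extinction values (not instrument data):

      import math

      sigma_690, sigma_1550 = 2.4e-5, 0.9e-5   # extinction coefficients [1/m]
      angstrom = -math.log(sigma_690 / sigma_1550) / math.log(690.0 / 1550.0)
      print(f"Angstrom exponent = {angstrom:.2f}")   # larger values: smaller particles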

  9. The Measurement of Aerosol Optical Properties Using Continuous Wave Cavity Ring-Down Techniques

    NASA Technical Reports Server (NTRS)

    Strawa, A. W.; Owano, T.; Castaneda, R.; Baer, D. S.; Paldus, B. A.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    Large uncertainties in the effects that aerosols have on climate require improved in-situ measurements of extinction coefficient and single-scattering albedo. This abstract describes the use of continuous wave cavity ring-down (CW-CRD) technology to address this problem. The innovations in this instrument are the use of CW-CRD to measure aerosol extinction coefficient, the simultaneous measurement of scattering coefficient, and small size suitable for a wide range of aircraft applications. Our prototype instrument measures extinction and scattering coefficient at 690 nm and extinction coefficient at 1550 nm. The instrument itself is small (60 x 48 x 15 cm) and relatively insensitive to vibrations. The prototype instrument has been tested in our lab and used in the field. While improvements in performance are needed, the prototype has been shown to make accurate and sensitive measurements of extinction and scattering coefficients. Combining these two parameters, one can obtain the single-scattering albedo and absorption coefficient, both important aerosol properties. The use of two wavelengths also allows us to obtain a quantitative idea of the size of the aerosol through the Angstrom exponent. Minimum sensitivity of the prototype instrument is 1.5×10^-6 m^-1 (1.5/Mm). Validation of the measurement of extinction coefficient has been accomplished by comparing the measurement of calibration spheres with Mie calculations. This instrument and its successors have potential to help reduce uncertainty currently associated with aerosol optical properties and their spatial and temporal variation. Possible applications include studies of visibility, climate forcing by aerosol, and the validation of aerosol retrieval schemes from satellite data.

  10. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    PubMed

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate but accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel, with the user only providing the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, and sensitivity analyses can be performed (the influence of each parameter on the output waveform).
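
    A minimal sketch of the three-element pipeline under stated assumptions (a fixed db4 wavelet, a fixed decomposition level, synthetic waveforms, and coefficient selection by variance; the paper instead selects the wavelet and coefficients by genetic-algorithm optimization):

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_runs, n_samples = 200, 256
      params = rng.uniform(-1, 1, (n_runs, 2))          # parameter combinations
      t = np.linspace(0, 1, n_samples)
      waves = np.sin(2*np.pi*(3 + params[:, :1])*t) * np.exp(-params[:, 1:]*t)

      # 1) wavelet characterization of every simulated waveform
      decomp = [pywt.wavedec(w, 'db4', level=4) for w in waves]
      coeffs = np.array([np.concatenate(d) for d in decomp])

      # 2) keep the most informative coefficients (variance here, GA in the paper)
      keep = np.argsort(coeffs.var(axis=0))[-20:]

      # 3) ANN maps parameter combinations to the retained coefficients
      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
      model.fit(params, coeffs[:, keep])

      # Generate a not-yet-simulated waveform: predict the retained coefficients,
      # zero the rest, and invert the wavelet transform.
      full = np.zeros(coeffs.shape[1])
      full[keep] = model.predict(np.array([[0.2, -0.5]]))[0]
      splits = np.cumsum([len(c) for c in decomp[0]])[:-1]
      waveform = pywt.waverec(np.split(full, splits), 'db4')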

  11. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms

    PubMed Central

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate but accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel, with the user only providing the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, and sensitivity analyses can be performed (the influence of each parameter on the output waveform). PMID:26745370

  12. Thermal Coefficient of Linear Expansion Modified by Dendritic Segregation in Nickel-Iron Alloys

    NASA Astrophysics Data System (ADS)

    Ogorodnikova, O. M.; Maksimova, E. V.

    2018-05-01

    The paper presents investigations of the thermal properties of Fe-Ni and Fe-Ni-Co casting alloys as affected by the heterogeneous distribution of their chemical elements. It is shown that nickel dendritic segregation has a negative effect on the properties of the studied invars. A mathematical model is proposed to explore the influence of nickel dendritic segregation on the thermal coefficient of linear expansion (TCLE) of the alloy. A computer simulation of the TCLE of Fe-Ni-Co superinvars is performed with regard to the heterogeneous distribution of their chemical elements over the whole volume. The ProLigSol computer software application was developed for processing the data array and the results of the computer simulation.

  13. Basal measures of insulin sensitivity and insulin secretion and simplified glucose tolerance tests in dogs.

    PubMed

    Verkest, K R; Fleeman, L M; Rand, J S; Morton, J M

    2010-10-01

    There is a need for simple, inexpensive measures of glucose tolerance, insulin sensitivity, and insulin secretion in dogs. The aim of this study was to estimate the closeness of correlation between fasting and dynamic measures of insulin sensitivity and insulin secretion, the precision of fasting measures, and the agreement between results of standard and simplified glucose tolerance tests in dogs. A retrospective descriptive study of 6 dogs with naturally occurring obesity and 6 lean dogs was conducted. Data from frequently sampled intravenous glucose tolerance tests (FSIGTTs) in the 6 obese and 6 lean client-owned dogs were used to calculate HOMA, QUICKI, and fasting glucose and insulin concentrations. Fasting measures of insulin sensitivity and secretion were compared with MINMOD analysis of FSIGTTs using Pearson correlation coefficients, and they were evaluated for precision by the discriminant ratio. Simplified sampling protocols were compared with standard FSIGTTs using Lin's concordance correlation coefficients, limits of agreement, and Pearson correlation coefficients. All fasting measures except fasting plasma glucose concentration were moderately correlated with MINMOD-estimated insulin sensitivity (|r| = 0.62-0.80; P < 0.03), and those that combined fasting insulin and glucose were moderately closely correlated with MINMOD-estimated insulin secretion (r = 0.60-0.79; P < 0.04). HOMA calculated using the nonlinear formulae had the closest estimated correlation (r = 0.77 and 0.74) and the best discrimination for insulin sensitivity and insulin secretion (discriminant ratio 4.4 and 3.4, respectively). Simplified sampling protocols with half as many samples collected over 3 h had close agreement with the full sampling protocol. Fasting measures and simplified intravenous glucose tolerance tests reflect insulin sensitivity and insulin secretion derived from frequently sampled glucose tolerance tests with MINMOD analysis in dogs.
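
    For reference, the fasting surrogate indices have closed forms; the sketch below uses the common human formulations with illustrative values (the study evaluated nonlinear variants adapted to dogs):

      import math

      glucose_mmol = 5.2      # fasting plasma glucose [mmol/L] (illustrative)
      insulin_uU   = 12.0     # fasting plasma insulin [uU/mL] (illustrative)

      homa_ir = glucose_mmol * insulin_uU / 22.5
      glucose_mgdl = glucose_mmol * 18.0              # unit conversion for QUICKI
      quicki = 1.0 / (math.log10(insulin_uU) + math.log10(glucose_mgdl))
      print(f"HOMA-IR = {homa_ir:.2f}, QUICKI = {quicki:.3f}")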

  14. The influence of adhesive on fiber Bragg grating strain sensor

    NASA Astrophysics Data System (ADS)

    Chen, Jixuan; Gong, Huaping; Jin, Shangzhong; Li, Shuhua

    2009-08-01

    A fiber Bragg grating (FBG) sensor was fixed on a uniform-strength beam with three adhesives: modified acrylate, glass glue, and epoxy resin. The influence of the adhesive on the FBG strain sensor was investigated. The strain of the FBG sensor was varied by loading weights onto the uniform-strength beam. The wavelength shift of the FBG sensor fixed by each of the three adhesives was measured for different weights at temperatures of 0°C, 10°C, 20°C, 30°C, and 40°C. The linearity and sensitivity of the FBG sensor fixed by each adhesive, and their stability at different temperatures, were analyzed. The results show that the FBG sensor fixed by the modified acrylate has high linearity, with a linear correlation coefficient of 0.9996. It also has a high sensitivity of 0.251 nm/kg. The linearity and sensitivity of this FBG sensor are highly stable at different temperatures. The FBG sensor fixed by the glass glue also has high linearity, with a linear correlation coefficient of 0.9986, but it has a low sensitivity of only 0.041 nm/kg. The linearity and sensitivity of the FBG sensor fixed by the glass glue are also highly stable at different temperatures. When the FBG sensor is fixed by epoxy resin, the sensitivity and linearity are affected significantly by temperature: as the temperature changes from 0°C to 40°C, the sensitivity decreases from 0.302 nm/kg to 0.058 nm/kg, and the linear correlation coefficient decreases from 0.9999 to 0.9961.
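
    The reported figures per adhesive, a sensitivity in nm/kg and a linear correlation coefficient, are exactly what a straight-line fit of wavelength shift against load yields. A minimal sketch with made-up data points follows; the paper's measurements are not reproduced here.

    ```python
    # Extracting an FBG strain sensor's sensitivity (slope, nm/kg) and linear
    # correlation coefficient from load/wavelength-shift data. The data points
    # below are illustrative, not the paper's measurements.
    import numpy as np

    load_kg = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
    shift_nm = np.array([0.000, 0.124, 0.252, 0.377, 0.500, 0.629])

    slope, intercept = np.polyfit(load_kg, shift_nm, 1)
    r = np.corrcoef(load_kg, shift_nm)[0, 1]

    print(f"sensitivity = {slope:.3f} nm/kg")        # ~0.251
    print(f"linear correlation coefficient r = {r:.4f}")
    ```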

  15. Understanding the effects of diffusion and relaxation in magnetic resonance imaging using computational modeling

    NASA Astrophysics Data System (ADS)

    Russell, Greg

    The work described in this dissertation was motivated by a desire to better understand the cellular pathology of ischemic stroke. Two of the three bodies of research presented herein address issues directly related to the investigation of ischemic stroke through the use of diffusion-weighted magnetic resonance imaging (DWMRI) methods. The first topic concerns the development of a computationally efficient finite difference method, designed to evaluate the impact of microscopic tissue properties on the formation of the DWMRI signal. For the second body of work, the effect of changing the intrinsic diffusion coefficient of a restricted sample on clinical DWMRI experiments is explored. The final body of work, while motivated by the desire to understand stroke, addresses the issue of acquiring large amounts of MRI data well suited for quantitative analysis in reduced scan time. In theory, the method could be used to generate quantitative parametric maps, including those depicting information gleaned through the use of DWMRI methods. Chapter 1 provides an introduction to several topics. A description of the use of DWMRI methods in the study of ischemic stroke is covered. An introduction to the fundamental physical principles at work in MRI is also provided. In this section, the means by which magnetization is created in MRI experiments and how the MRI signal is induced, as well as the influence of spin-spin and spin-lattice relaxation, are discussed. Attention is also given to describing how MRI measurements can be sensitized to diffusion through the use of qualitative and quantitative descriptions of the process. Finally, the reader is given a brief introduction to the use of numerical methods for solving partial differential equations. In Chapters 2, 3 and 4, three related bodies of research are presented in the form of research papers. In Chapter 2, a novel computational method is described that reduces the computational resources required to simulate DWMRI experiments. In Chapter 3, a detailed study of how changes in the intrinsic intracellular diffusion coefficient may influence clinical DWMRI experiments is described. In Chapter 4, a novel, non-steady-state quantitative MRI method is described.

  16. Sensitivity analysis of bi-layered ceramic dental restorations.

    PubMed

    Zhang, Zhongpu; Zhou, Shiwei; Li, Qing; Li, Wei; Swain, Michael V

    2012-02-01

    The reliability and longevity of ceramic prostheses have become a major concern. Existing studies have focused on some critical issues from clinical perspectives, but more research is needed on fundamental science and fabrication issues to ensure the longevity and durability of ceramic prostheses. The aim of this paper was to explore, in a quantitative way, how sensitive the thermal and mechanical responses of bi-layered ceramic systems and crown models, in terms of changes in temperature and thermal residual stress, are to perturbations of the chosen design variables (e.g. layer thickness and heat transfer coefficient). In this study, three bi-layered ceramic models with different geometries are considered: (i) a simple bi-layered plate, (ii) a simple bi-layered triangle, and (iii) an axisymmetric bi-layered crown. The responses appear most sensitive to the layer thickness and the convective heat transfer coefficient (or cooling rate) for the porcelain-fused-on-zirconia substrate models. The resultant sensitivities indicate the critical importance of the heat transfer coefficient and the thickness ratio of core to veneer for the temperature distributions and residual stresses in each model. The findings provide a quantitative basis for assessing the effects of fabrication uncertainties and optimizing the design of ceramic prostheses. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  17. PAN AIR: A computer program for predicting subsonic or supersonic linear potential flows about arbitrary configurations using a higher order panel method. Volume 1: Theory document (version 3.0)

    NASA Technical Reports Server (NTRS)

    Epton, Michael A.; Magnus, Alfred E.

    1990-01-01

    An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the Panel Aerodynamics (PAN AIR) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformation, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments. Principal revisions to version 3.0 are the following: (1) appendices H and K more fully describe the Aerodynamic Influence Coefficient (AIC) construction; (2) appendix L now provides a complete description of the AIC solution process; (3) appendix P is new and discusses the theory for the new FDP module (which calculates streamlines and offbody points); and (4) numerous small corrections and revisions reflecting the MAG module rewrite.

  18. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  19. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
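
    A minimal bootstrap in the spirit of this record: resample cases with replacement, refit the regression, and read the standard error off the spread of refitted slopes. Written in Python rather than SAS, with synthetic data.

    ```python
    # Bootstrap standard error of a regression slope: resample (x, y) cases
    # with replacement, refit, and take the spread of the refitted slopes.
    # Python stands in for the SAS routine described; the data are synthetic.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=1.5, size=n)

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)     # resample cases with replacement
        boot.append(slope(x[idx], y[idx]))

    print(f"slope = {slope(x, y):.3f}")
    print(f"bootstrap SE = {np.std(boot, ddof=1):.3f}")
    ```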

  20. Active Control of Fan Noise: Feasibility Study. Volume 5; Numerical Computation of Acoustic Mode Reflection Coefficients for an Unflanged Cylindrical Duct

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.

    1996-01-01

    A computational method to predict modal reflection coefficients in cylindrical ducts has been developed based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher-order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated source noise, acoustic propagation, ANC actuator coupling, and control system algorithm simulation. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate, rapid-computation design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used as a preliminary to more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data to use for comparison are scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.

  1. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The correlation is termed partial because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationships of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of losses of crew life (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
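
    A compact PRCC implementation follows: rank-transform the inputs and the output, regress the other inputs out of both, and correlate the residuals. The three-input toy model is a stand-in for the IMM conditions and outputs, which are not reproduced here.

    ```python
    # Partial rank correlation coefficients (PRCC): rank-transform inputs and
    # output, then correlate the residuals after regressing out all the other
    # inputs. Illustrative data only.
    import numpy as np
    from scipy.stats import rankdata

    def prcc(X, y):
        """PRCC of each column of X against y."""
        Xr = np.column_stack([rankdata(col) for col in X.T])
        yr = rankdata(y)
        n, k = Xr.shape
        out = np.empty(k)
        for j in range(k):
            others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
            # Residuals of x_j and y after removing the other inputs' effects.
            rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
            ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
            out[j] = np.corrcoef(rx, ry)[0, 1]
        return out

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 3))
    y = np.exp(2 * X[:, 0]) - 5 * X[:, 1] + 0.1 * rng.normal(size=500)
    print(prcc(X, y))   # third input is inert, so its PRCC should be near 0
    ```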

  2. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of the logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
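
    The last step described here, computing a discharge at a selected exceedance probability from the three log-Pearson type III statistics, can be sketched with a frequency factor. The sketch below uses the Wilson-Hilferty approximation for the Pearson III frequency factor; the statistics are illustrative, not values from the report.

    ```python
    # Discharge at a chosen exceedance probability from the three log-Pearson
    # type III statistics (mean, standard deviation, skew of log10 discharge),
    # via the Wilson-Hilferty approximation to the Pearson III frequency factor.
    from scipy.stats import norm

    def lp3_quantile(mean_log, std_log, skew, exceedance):
        z = norm.ppf(1.0 - exceedance)       # standard normal deviate
        if abs(skew) < 1e-6:
            k = z
        else:
            k = (2.0 / skew) * ((1.0 + skew * z / 6.0 - skew**2 / 36.0) ** 3 - 1.0)
        return 10.0 ** (mean_log + k * std_log)

    # 2-percent exceedance probability, the probability quoted for standard errors.
    print(f"Q(2%) ~ {lp3_quantile(3.8, 0.25, 0.2, 0.02):.0f} cfs")
    ```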

  3. Earth's isostatic gravity anomaly field: Contributions to National Geodetic Satellite Program

    NASA Technical Reports Server (NTRS)

    Khan, M. A.

    1973-01-01

    On the assumption that the compensation for the topographic load is achieved in the manner of the Airy-Heiskanen hypothesis at a compensation depth of 30 kilometers, the spherical harmonic coefficients of the isostatic reduction potential U are computed. The degree power spectra of these coefficients are compared with the power spectra of the isostatic reduction coefficients given by Uotila. Results are presented in tabular form.

  4. Laser Measurement Of Convective-Heat-Transfer Coefficient

    NASA Technical Reports Server (NTRS)

    Porro, A. Robert; Hingst, Warren R.; Chriss, Randall M.; Seablom, Kirk D.; Keith, Theo G., Jr.

    1994-01-01

    The coefficient of convective heat transfer at a spot on the surface of a wind-tunnel model is computed from measurements acquired by a developmental laser-induced-heat-flux technique. The technique enables non-intrusive measurement of convective-heat-transfer coefficients at many points across the surfaces of models in complicated, three-dimensional, high-speed flows. The measurement spot is scanned across the surface of the model. The apparatus includes an argon-ion laser, an attenuator/beam splitter, an electronic shutter, an infrared camera, and supporting subsystems.

  5. Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Hoadley, Sherwood Tiffany

    1991-01-01

    The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data weighting options are presented. The first weighting normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. In the second weighting, each tabulated coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms, at the expense of less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.

  6. Parametric study of microwave-powered high-altitude airplane platforms designed for linear flight

    NASA Technical Reports Server (NTRS)

    Morris, C. E. K., Jr.

    1981-01-01

    The performance of a class of remotely piloted, microwave powered, high altitude airplane platforms is studied. The first part of each cycle of the flight profile consists of climb while the vehicle is tracked and powered by a microwave beam; this is followed by gliding flight back to a minimum altitude above a microwave station and initiation of another cycle. Parametric variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the energy transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.

  7. Generalized uncertainty principle and quantum gravity phenomenology

    NASA Astrophysics Data System (ADS)

    Bosso, Pasquale

    The fundamental physical description of Nature is based on two mutually incompatible theories: Quantum Mechanics and General Relativity. Their unification in a theory of Quantum Gravity (QG) remains one of the main challenges of theoretical physics. Quantum Gravity Phenomenology (QGP) studies QG effects in low-energy systems. The basis of one such phenomenological model is the Generalized Uncertainty Principle (GUP), which is a modified Heisenberg uncertainty relation and predicts a deformed canonical commutator. In this thesis, we compute Planck-scale corrections to angular momentum eigenvalues, the hydrogen atom spectrum, the Stern-Gerlach experiment, and the Clebsch-Gordan coefficients. We then rigorously analyze the GUP-perturbed harmonic oscillator and study new coherent and squeezed states. Furthermore, we introduce a scheme for increasing the sensitivity of optomechanical experiments for testing QG effects. Finally, we suggest future projects that may potentially test QG effects in the laboratory.

  8. Microwave Assisted Synthesis, Physicochemical, Photophysical, Single Crystal X-ray and DFT Studies of Novel Push-Pull Chromophores.

    PubMed

    Khan, Salman A; Asiri, Abdullah M; Basisi, Hadi Mussa; Arshad, Muhammad Nadeem; Sharma, Kamlesh

    2015-11-01

    Two push-pull chromophores were synthesized by Knoevenagel condensation under microwave irradiation. The structures of the synthesized chromophores were established by spectroscopic (FT-IR, (1)H NMR, (13)C NMR, EI-MS) and elemental analysis. The structures of the chromophores were further confirmed by X-ray crystallography. UV-Vis and fluorescence spectroscopy measurements showed that the chromophores have good absorption and fluorescence properties. Fluorescence polarity studies demonstrated that the chromophores were sensitive to the polarity of the microenvironment provided by different solvents. Physicochemical parameters, including singlet absorption, extinction coefficient, Stokes shift, oscillator strength, dipole moment and fluorescence quantum yield, were investigated in order to explore the analytical potential of the synthesized chromophores. In addition, the total energy, frontier molecular orbitals, hardness, electron affinity, ionization energy and electrostatic potential map were also studied computationally by using a density functional theoretical method.

  9. [Computer diagnosis of traumatic impact by hepatic lesion].

    PubMed

    Kimbar, V I; Sevankeev, V V

    2007-01-01

    A method of computer-assisted diagnosis of traumatic impact from liver damage (the HEPAR-test program) is described. The program is based on diagnostic coefficients calculated using Bayes' probability method with Wald's recognition procedure.

  10. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The field of atmospheric radiation has seen the development of more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to high performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, the coupling of three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions and the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE under the grey-medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.

  11. A new computer-based Farnsworth Munsell 100-hue test for evaluation of color vision.

    PubMed

    Ghose, Supriyo; Parmar, Twinkle; Dada, Tanuj; Vanathi, Murugesan; Sharma, Sourabh

    2014-08-01

    To evaluate a computer-based Farnsworth-Munsell (FM) 100-hue test and compare it with the manual FM 100-hue test in normal and congenitally color-deficient individuals. Fifty color-defective subjects and 200 normal subjects with a best-corrected visual acuity ≥ 6/12 were compared using a standard manual FM 100-hue test and a computer-based FM 100-hue test under standard operating conditions as recommended by the manufacturer, after initial trial testing. Parameters evaluated were total error scores (TES), type of defect and testing time. Pearson's correlation coefficient was used to determine the relationship between the test scores. Cohen's kappa was used to assess agreement of color defect classification between the two tests. A receiver operating characteristic curve was used to determine the optimal cut-off score for the computer-based FM 100-hue test. The mean time was 16 ± 1.5 (range 6-20) min for the manual FM 100-hue test and 7.4 ± 1.4 (range 5-13) min for the computer-based FM 100-hue test, thus reducing testing time to less than 50% of that of the manual test (p < 0.05). For grading color discrimination, Pearson's correlation coefficient for TES between the two tests was 0.91 (p < 0.001). For color defect classification, Cohen's agreement coefficient was 0.98 (p < 0.01). The computer-based FM 100-hue test is an effective and rapid method for detecting, classifying and grading color vision anomalies.

  12. A Non-Cut Cell Immersed Boundary Method for Use in Icing Simulations

    NASA Technical Reports Server (NTRS)

    Sarofeen, Christian M.; Noack, Ralph W.; Kreeger, Richard E.

    2013-01-01

    This paper describes a computational fluid dynamics method used for modeling changes in aircraft geometry due to icing. While an aircraft undergoes icing, the accumulated ice results in a geometric alteration of the aerodynamic surfaces. In computational simulations of icing, it is necessary that the corresponding geometric change be taken into consideration. The method used herein to represent the geometric change due to icing is a non-cut cell Immersed Boundary Method (IBM). Computational cells in a body-fitted grid of a clean aerodynamic geometry that lie inside a predicted ice formation are identified. An IBM is then used to change these cells from active computational cells to cells having the properties of viscous solid bodies. This method has been implemented in the NASA-developed node-centered, finite volume computational fluid dynamics code FUN3D. The presented capability is tested for two-dimensional airfoils, including a clean airfoil, an iced airfoil, and an airfoil in harmonic pitching motion about its quarter chord. For these simulations, velocity contours, pressure distributions, coefficients of lift, coefficients of drag, and coefficients of pitching moment about the airfoil's quarter chord are computed and compared against experimental results, a higher-order panel method code with viscous effects (XFOIL), and the results from FUN3D's original solution process. The results of the IBM simulations show that the accuracy of the IBM compares satisfactorily with the experimental results, the XFOIL results, and the results from FUN3D's original solution process.

  13. Friction in a Moving Car

    ERIC Educational Resources Information Center

    Goldberg, Fred M.

    1975-01-01

    Describes an out-of-doors, partially unstructured experiment to determine the coefficient of friction for a moving car. Presents the equation which relates the coefficient of friction to initial velocity, distance, and time and gives sample computed values as a function of initial speed and tire pressure. (GS)
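
    For a car skidding to rest under constant deceleration with locked wheels, the relation the article presents reduces to mu = v0^2/(2 g d) in terms of distance, or mu = v0/(g t) in terms of time. A small worked example with illustrative numbers:

    ```python
    # Coefficient of sliding friction from skid kinematics, assuming constant
    # deceleration with locked wheels: mu = v0**2 / (2*g*d) or mu = v0 / (g*t).
    # The numbers are illustrative.
    g = 9.81            # m/s^2
    v0 = 20.0           # initial speed, m/s (about 72 km/h)
    d = 40.0            # skid distance, m
    t = 4.0             # skid time, s (consistent with d = v0*t/2)

    print(f"mu from distance: {v0**2 / (2 * g * d):.2f}")   # ~0.51
    print(f"mu from time:     {v0 / (g * t):.2f}")          # ~0.51
    ```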

  14. Communication: rate coefficients from quasiclassical trajectory calculations from the reverse reaction: The Mu + H2 reaction re-visited.

    PubMed

    Homayoon, Zahra; Jambrina, Pablo G; Aoiz, F Javier; Bowman, Joel M

    2012-07-14

    In a previous paper [P. G. Jambrina et al., J. Chem. Phys. 135, 034310 (2011)] various calculations of the rate coefficient for the Mu + H(2) → MuH + H reaction were presented and compared to experiment. The widely used standard quasiclassical trajectory (QCT) method was shown to overestimate the rate coefficients by several orders of magnitude over the temperature range 200-1000 K. This was attributed to a major failure of that method to describe the correct threshold for the reaction, owing to the large difference in zero-point energies (ZPE) of the reactant H(2) and product MuH (∼0.32 eV). In this Communication we show that by performing standard QCT calculations for the reverse reaction and then applying detailed balance, the resulting rate coefficient is in very good agreement with the other computational results that respect the ZPE (as well as with the experiment), but which are more computationally demanding.

  15. Communication: Rate coefficients from quasiclassical trajectory calculations from the reverse reaction: The Mu + H2 reaction re-visited

    NASA Astrophysics Data System (ADS)

    Homayoon, Zahra; Jambrina, Pablo G.; Aoiz, F. Javier; Bowman, Joel M.

    2012-07-01

    In a previous paper [P. G. Jambrina et al., J. Chem. Phys. 135, 034310 (2011), 10.1063/1.3611400] various calculations of the rate coefficient for the Mu + H2 → MuH + H reaction were presented and compared to experiment. The widely used standard quasiclassical trajectory (QCT) method was shown to overestimate the rate coefficients by several orders of magnitude over the temperature range 200-1000 K. This was attributed to a major failure of that method to describe the correct threshold for the reaction, owing to the large difference in zero-point energies (ZPE) of the reactant H2 and product MuH (˜0.32 eV). In this Communication we show that by performing standard QCT calculations for the reverse reaction and then applying detailed balance, the resulting rate coefficient is in very good agreement with the other computational results that respect the ZPE (as well as with the experiment), but which are more computationally demanding.

  16. Aerodynamics of a linear oscillating cascade

    NASA Technical Reports Server (NTRS)

    Buffum, Daniel H.; Fleeter, Sanford

    1990-01-01

    The steady and unsteady aerodynamics of a linear oscillating cascade are investigated using experimental and computational methods. Experiments are performed to quantify the torsion-mode oscillating cascade aerodynamics of the NASA Lewis Transonic Oscillating Cascade for subsonic inlet flowfields using two methods: simultaneous oscillation of all the cascaded airfoils at various values of interblade phase angle, and the unsteady aerodynamic influence coefficient technique. Analysis of these data and correlation with classical linearized unsteady aerodynamic analysis predictions indicate that the wind tunnel walls enclosing the cascade have, in some cases, a detrimental effect on the cascade unsteady aerodynamics. An Euler code for oscillating cascade aerodynamics is modified to incorporate improved upstream and downstream boundary conditions and also the unsteady aerodynamic influence coefficient technique. The new boundary conditions are shown to improve the unsteady aerodynamic predictions of the code, and the computational unsteady aerodynamic influence coefficient technique is shown to be a viable alternative for calculation of oscillating cascade aerodynamics.

  17. Evaluation of Computational Fluid Dynamics and Coupled Fluid-Solid Modeling for a Direct Transfer Preswirl System.

    PubMed

    Javiya, Umesh; Chew, John; Hills, Nick; Dullenkopf, Klaus; Scanlon, Timothy

    2013-05-01

    The prediction of the preswirl cooling air delivery and disk metal temperature is important for the cooling system performance and for rotor disk thermal stress and life assessment. In this paper, standalone 3D steady and unsteady computational fluid dynamics (CFD) and coupled FE-CFD calculations are presented for prediction of these temperatures. CFD results are compared with previous measurements from a direct transfer preswirl test rig. The predicted cooling air temperatures agree well with the measurements, but the nozzle discharge coefficients are underpredicted. Results from the coupled FE-CFD analyses are compared directly with thermocouple temperature measurements and with heat transfer coefficients on the rotor disk previously obtained from a rotor disk heat conduction solution. Considering the modeling limitations, the coupled approach predicted the solid metal temperatures well. Heat transfer coefficients on the rotor disk from CFD show some effect of the temperature variations. Reasonable agreement is obtained with values deduced from the previous heat conduction solution.

  18. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.

  19. Transport coefficients of liquid CF4 and SF6 computed by molecular dynamics using polycenter Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1989-01-01

    For several liquid states of CF4 and SF6, the shear and bulk viscosities as well as the thermal conductivity were determined by equilibrium molecular dynamics (MD) calculations. Lennard-Jones four- and six-center pair potentials were applied, and the method of constraints was chosen for the MD. The computed Green-Kubo integrands show a steep time decay, and no particular long-time behavior occurs. The molecule-number dependence of the results is found to be small, and 3×10⁵ integration steps allow an accuracy of about 10% for the shear viscosity and the thermal conductivity coefficient. Comparison with experimental data shows fair agreement for CF4, while for SF6 the transport coefficients fall below the experimental ones by about 30%.
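
    The Green-Kubo route mentioned here obtains the shear viscosity as eta = V/(kB T) times the time integral of the autocorrelation of an off-diagonal pressure-tensor component. The sketch below applies that formula to a synthetic, exponentially correlated signal standing in for actual MD output.

    ```python
    # Green-Kubo shear viscosity: eta = V/(kB*T) * integral of <Pxy(0)Pxy(t)> dt.
    # A synthetic, exponentially correlated Pxy series stands in for MD output;
    # real input would be the off-diagonal pressure tensor from the simulation.
    import numpy as np

    rng = np.random.default_rng(7)
    n_steps, dt = 50_000, 1.0e-15            # steps, timestep in s
    tau = 2.0e-13                            # correlation time of the model signal
    phi = np.exp(-dt / tau)
    pxy = np.zeros(n_steps)                  # Pa
    for i in range(1, n_steps):              # AR(1) surrogate for MD Pxy data
        pxy[i] = phi * pxy[i - 1] + rng.normal(scale=1.0e5 * np.sqrt(1 - phi**2))

    def autocorr(x, nmax):
        x = x - x.mean()
        return np.array([np.mean(x[: len(x) - k] * x[k:]) for k in range(nmax)])

    kB, T, V = 1.380649e-23, 300.0, 1.0e-25  # J/K, K, m^3
    acf = autocorr(pxy, 1000)                # 1000 lags ~ 5 correlation times
    eta = V / (kB * T) * acf.sum() * dt      # rectangle-rule time integral
    print(f"shear viscosity of the model signal ~ {eta:.2e} Pa*s")
    ```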

  20. Consider a non-spherical elephant: computational fluid dynamics simulations of heat transfer coefficients and drag verified using wind tunnel experiments.

    PubMed

    Dudley, Peter N; Bonazza, Riccardo; Porter, Warren P

    2013-07-01

    Animal momentum and heat transfer analysis has historically used direct animal measurements or approximations to calculate drag and heat transfer coefficients. Research can now use modern 3D rendering and computational fluid dynamics software to simulate animal-fluid interactions. Key questions are the level of agreement between simulations and experiments and how superior they are to classical approximations. In this paper we compared experimental and simulated heat transfer and drag calculations on a scale model solid aluminum African elephant casting. We found good agreement between experimental and simulated data and large differences from classical approximations. We used the simulation results to calculate coefficients for heat transfer and drag of the elephant geometry. Copyright © 2013 Wiley Periodicals, Inc.

  1. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and new chi-square values are computed. Seventy percent of the data analyzed were either normal or approximately normal. The coefficient of skewness g₁ has a good correlation with the chi-square values. Events with |g₁| < 0.21 were normal to begin with, and those with 0.21 …
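
    A sketch of the Gram-Charlier correction this record refers to: the normal density is adjusted with the sample skewness g1 and excess g2 through probabilists' Hermite polynomials. Assumes NumPy/SciPy with synthetic data; the expansion is known to dip slightly negative in the far tails for strongly skewed samples.

    ```python
    # Gram-Charlier correction of the normal density using sample skewness g1
    # and excess g2, via probabilists' Hermite polynomials. Synthetic data.
    import numpy as np
    from scipy.stats import skew, kurtosis, norm

    def gram_charlier_pdf(x, g1, g2):
        he3 = x**3 - 3 * x                     # He3(x)
        he4 = x**4 - 6 * x**2 + 3              # He4(x)
        return norm.pdf(x) * (1.0 + g1 / 6.0 * he3 + g2 / 24.0 * he4)

    rng = np.random.default_rng(3)
    data = rng.gamma(shape=8.0, size=5000)                 # mildly skewed sample
    z = (data - data.mean()) / data.std(ddof=1)            # standardize first
    g1, g2 = skew(z), kurtosis(z)                          # kurtosis() is excess
    print(f"g1 = {g1:.3f}, g2 = {g2:.3f}")
    print(np.round(gram_charlier_pdf(np.linspace(-3, 3, 7), g1, g2), 4))
    ```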

  2. The computation of ICRP dose coefficients for intakes of radionuclides with PLEIADES: biokinetic aspects.

    PubMed

    Fell, T P

    2007-01-01

    The ICRP has published dose coefficients for the ingestion or inhalation of radionuclides in a series of reports covering intakes by workers and members of the public including children and pregnant or lactating women. The calculation of these coefficients conveniently divides into two distinct parts--the biokinetic and dosimetric. This paper gives a brief summary of the methods used to solve the biokinetic problem in the generation of dose coefficients on behalf of the ICRP, as implemented in the Health Protection Agency's internal dosimetry code PLEIADES.
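
    The biokinetic part of such a calculation is, at its core, a linear compartment model dA/dt = MA whose solution is a matrix exponential. The two-compartment sketch below is a generic illustration with made-up transfer rates, not the ICRP systemic models that PLEIADES implements.

    ```python
    # A first-order biokinetic model is a linear system dA/dt = M A over the
    # compartment activities A; its solution is a matrix exponential. The
    # chain and rates below are generic illustrations only.
    import numpy as np
    from scipy.linalg import expm

    lam = np.log(2) / (8.02 * 86400)          # decay constant of I-131, 1/s
    k12, k20 = 5.0e-5, 1.0e-6                 # blood -> organ, organ -> excretion

    M = np.array([[-(k12 + lam), 0.0],
                  [k12, -(k20 + lam)]])

    A0 = np.array([1.0, 0.0])                 # unit activity enters blood at t = 0
    for days in (1, 10, 100):
        A = expm(M * days * 86400.0) @ A0
        print(f"day {days:3d}: blood {A[0]:.3e} Bq, organ {A[1]:.3e} Bq")
    ```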

  3. Experimental Testing and Computational Fluid Dynamics Simulation of Maple Seeds and Performance Analysis as a Wind Turbine

    NASA Astrophysics Data System (ADS)

    Holden, Jacob R.

    Descending maple seeds generate lift to slow their fall and remain aloft in a blowing wind; have the wings of these seeds evolved to descend as slowly as possible? A unique energy balance equation, experimental data, and computational fluid dynamics simulations have all been developed to explore this question from a turbomachinery perspective. The computational fluid dynamics in this work is the first to be performed in the relative reference frame. Maple seed performance has been analyzed for the first time based on principles of wind turbine analysis. Application of the Betz Limit and one-dimensional momentum theory allowed for empirical and computational power and thrust coefficients to be computed for maple seeds. It has been determined that the investigated species of maple seeds perform near the Betz limit for power conversion and thrust coefficient. The power coefficient for a maple seed is found to be in the range of 48-54% and the thrust coefficient in the range of 66-84%. From Betz theory, the stream tube area expansion of the maple seed is necessary for power extraction. Further investigation of computational solutions and mechanical analysis find three key reasons for high maple seed performance. First, the area expansion is driven by maple seed lift generation changing the fluid momentum and requiring area to increase. Second, radial flow along the seed surface is promoted by a sustained leading edge vortex that centrifuges low momentum fluid outward. Finally, the area expansion is also driven by the spanwise area variation of the maple seed imparting a radial force on the flow. These mechanisms result in a highly effective device for the purpose of seed dispersal. However, the maple seed also provides insight into fundamental questions about how turbines can most effectively change the momentum of moving fluids in order to extract useful power or dissipate kinetic energy.
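
    The momentum-theory quantities used in this analysis follow from the axial induction factor a: Cp = 4a(1-a)^2 and Ct = 4a(1-a), with Cp peaking at the Betz limit of 16/27 when a = 1/3. A short numeric check:

    ```python
    # Actuator-disc momentum theory: power and thrust coefficients as functions
    # of the axial induction factor a, peaking at the Betz limit 16/27 ~ 0.593.
    import numpy as np

    a = np.linspace(0.0, 0.5, 51)
    cp = 4.0 * a * (1.0 - a) ** 2             # power coefficient
    ct = 4.0 * a * (1.0 - a)                  # thrust coefficient

    i = int(np.argmax(cp))
    print(f"Betz optimum: a = {a[i]:.2f}, Cp = {cp[i]:.3f}")   # 0.33, 0.593
    # The reported maple-seed Cp of 0.48-0.54 corresponds to a of roughly 0.18-0.23.
    ```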

  4. All-optical computation system for solving differential equations based on optical intensity differentiator.

    PubMed

    Tan, Sisi; Wu, Zhao; Lei, Lei; Hu, Shoujin; Dong, Jianji; Zhang, Xinliang

    2013-03-25

    We propose and experimentally demonstrate an all-optical differentiator-based computation system used for solving constant-coefficient first-order linear ordinary differential equations. It consists of an all-optical intensity differentiator and a wavelength converter, both based on a semiconductor optical amplifier (SOA) and an optical filter (OF). The equation is solved for various values of the constant-coefficient and two considered input waveforms, namely, super-Gaussian and Gaussian signals. An excellent agreement between the numerical simulation and the experimental results is obtained.

  5. Examination of the Relation between the Values of Adolescents and Virtual Sensitiveness

    ERIC Educational Resources Information Center

    Yilmaz, Hasan

    2013-01-01

    The aim of this study is to examine the relation between the values adolescents have and virtual sensitiveness. The study is carried out on 447 adolescents, 160 of whom are female, 287 males. The Humanistic Values Scale and Virtual Sensitiveness scale were used. Pearson Product Moment Coefficient and multiple regression analysis techniques were…

  6. Snapping Sharks, Maddening Mindreaders, and Interactive Images: Teaching Correlation.

    ERIC Educational Resources Information Center

    Mitchell, Mark L.

    Understanding correlation coefficients is difficult for students. A free computer program that helps introductory psychology students distinguish between positive and negative correlation, and which also teaches them to understand the differences between correlation coefficients of different size is described in this paper. The program is…

  7. A Simple Measurement of the Sliding Friction Coefficient

    ERIC Educational Resources Information Center

    Gratton, Luigi M.; Defrancesco, Silvia

    2006-01-01

    We present a simple computer-aided experiment for investigating Coulomb's law of sliding friction in a classroom. It provides a way of testing the possible dependence of the friction coefficient on various parameters, such as types of materials, normal force, apparent area of contact and sliding velocity.

  8. Validity of Rorschach Inkblot scores for discriminating psychopaths from non-psychopaths in forensic populations: a meta-analysis.

    PubMed

    Wood, James M; Lilienfeld, Scott O; Nezworski, M Teresa; Garb, Howard N; Allen, Keli Holloway; Wildermuth, Jessica L

    2010-06-01

    Gacono and Meloy (2009) have concluded that the Rorschach Inkblot Test is a sensitive instrument with which to discriminate psychopaths from nonpsychopaths. We examined the association of psychopathy with 37 Rorschach variables in a meta-analytic review of 173 validity coefficients derived from 22 studies comprising 780 forensic participants. All studies included the Hare Psychopathy Checklist or one of its versions (Hare, 1980, 1991, 2003) and Exner's (2003) Comprehensive System for the Rorschach. Mean validity coefficients of Rorschach variables in the meta-analysis ranged from -.113 to .239, with a median validity of .070 and a mean validity of .062. Psychopathy displayed a significant and medium-sized association with the number of Aggressive Potential responses (weighted mean validity coefficient = .232) and small but significant associations with the Sum of Texture responses, Cooperative Movement = 0, the number of Personal responses, and the Egocentricity Index (weighted mean validity coefficients = .097 to .159). The remaining 32 Rorschach variables were not significantly related to psychopathy. The present findings contradict the view that the Rorschach is a clinically sensitive instrument for discriminating psychopaths from nonpsychopaths.

  9. An in vitro human skin test for assessing sensitization potential.

    PubMed

    Ahmed, S S; Wang, X N; Fielding, M; Kerry, A; Dickinson, I; Munuswamy, R; Kimber, I; Dickinson, A M

    2016-05-01

    Sensitization to chemicals resulting in an allergy is an important health issue. The long-standing gold-standard method for identification and characterization of skin-sensitizing chemicals has been the mouse local lymph node assay (LLNA). However, for a number of reasons there has been an increasing imperative to develop alternative approaches to hazard identification that do not require the use of animals. Here we describe a human in-vitro skin explant test for identification of sensitization hazards and the assessment of relative skin-sensitizing potency. This method measures histological damage in human skin as a readout of the immune response induced by the test material. Using this approach we have measured responses to 44 chemicals including skin sensitizers, pre/pro-haptens, respiratory sensitizers, non-sensitizing chemicals (including skin irritants) and previously misclassified compounds. Based on comparisons with the LLNA, the skin explant test gave 95% specificity, 95% sensitivity and 95% concordance, with a correlation coefficient of 0.9. The same specificity and sensitivity were achieved in comparison with published human sensitization data, with a correlation coefficient of 0.91. The test also successfully identified nickel sulphate as a human skin sensitizer, which was misclassified as negative in the LLNA. In addition, sensitizers and non-sensitizers identified as positive or negative by the skin explant test induced high and low T cell proliferation and IFNγ production, respectively. Collectively, the data suggest the human in-vitro skin explant test could provide the basis for a novel approach for characterization of sensitizing activity, as a first step in the risk assessment process. Copyright © 2015 John Wiley & Sons, Ltd.
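
    The reported test metrics follow from a 2x2 agreement table against the LLNA. The counts below are back-calculated to reproduce the 95% figures for roughly 44 chemicals and are illustrative only, not the paper's actual tabulation.

    ```python
    # Sensitivity, specificity, and concordance from a 2x2 agreement table
    # against the LLNA reference. Counts are illustrative back-calculations.
    tp, fn = 21, 1        # true sensitizers called positive / missed
    tn, fp = 21, 1        # true non-sensitizers called negative / false alarms

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    concordance = (tp + tn) / (tp + fn + tn + fp)
    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
          f"concordance {concordance:.0%}")    # 95%, 95%, 95%
    ```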

  10. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hub, Martina; Thieke, Christian; Kessler, Marc L.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.

  11. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
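
    The core of the global technique described here, fitting a second-order response surface by least squares, can be sketched compactly. Two design variables stand in for the report's 23 random variables, and a noisy quadratic "truth" stands in for the buckling-analysis results.

    ```python
    # Second-order response surface fitted by least squares: a full quadratic
    # in the design variables approximates the analysis output.
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(-1, 1, size=(60, 2))                    # sampled designs
    y = (3 + 2 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] * x[:, 1]
         + 0.3 * x[:, 1] ** 2 + 0.05 * rng.normal(size=60))

    def quad_basis(x):
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones(len(x)), x1, x2, x1 * x2, x1**2, x2**2])

    beta, *_ = np.linalg.lstsq(quad_basis(x), y, rcond=None)
    print("RS coefficients:", np.round(beta, 2))  # ~[3, 2, -1, 0.5, 0, 0.3]
    ```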

  12. The Hermann-Hering grid illusion demonstrates disruption of lateral inhibition processing in diabetes mellitus.

    PubMed

    Davies, Nigel P; Morland, Antony B

    2002-02-01

    The Hermann-Hering grid illusion consists of dark illusory spots perceived at the intersections of horizontal and vertical white bars viewed against a dark background. The dark spots originate from lateral inhibition processing. This illusion was used to investigate the hypothesis that lateral inhibition may be disrupted in diabetes mellitus. A computer monitor based psychophysical test was developed to measure the threshold of perception of the illusion for different bar widths. The contrast threshold for illusion perception at seven bar widths (range 0.09 degrees to 0.60 degrees) was measured using a randomly interleaved double staircase. Convolution of Hermann-Hering grids with difference of Gaussian receptive fields was used to generate model sensitivity functions. The method of least squares was used to fit these to the experimental data. 14 diabetic patients and 12 control subjects of similar ages performed the test. The sensitivity to the illusion was significantly reduced in the diabetic group for bar widths 0.22 degrees, 0.28 degrees, and 0.35 degrees (p = 0.01). The mean centre:surround ratio for the controls was 1:9.1 (SD 1.6) with a mean correlation coefficient of R(2) = 0.80 (SD 0.16). In the diabetic group, two subjects were unable to perceive the illusion. The mean centre:surround ratio for the 12 remaining diabetic patients was 1:8.6 (SD 2.1). However, the correlation coefficients were poor with a mean of R(2) = 0.54 (SD 0.27), p = 0.04 in comparison with the control group. A difference of Gaussian receptive field model fits the experimental data well for the controls but does not fit the data obtained for the diabetics. This indicates dysfunction of the lateral inhibition processes in the post-receptoral pathway.
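
    The modeling step this record describes, convolving the grid with difference-of-Gaussian receptive fields, is easy to reproduce qualitatively. The sketch below uses a 1:9 centre:surround ratio, close to the control-group fit; the grid geometry and spatial scales are otherwise arbitrary choices.

    ```python
    # Difference-of-Gaussians receptive field applied to a Hermann-Hering grid:
    # the surround picks up more white at intersections, lowering the response
    # there, which is the computational account of the illusory dark spots.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    img = np.zeros((200, 200))
    for c in range(20, 200, 40):          # white bars, 6 px wide, 40 px apart
        img[:, c:c + 6] = 1.0
        img[c:c + 6, :] = 1.0

    centre, surround = 1.0, 9.0           # 1:9 ratio, close to the control fit
    dog = gaussian_filter(img, centre) - gaussian_filter(img, surround)

    street = dog[22, 40]                  # on a bar, between intersections
    cross = dog[22, 22]                   # at an intersection
    print(f"street {street:.3f} vs intersection {cross:.3f} (lower = darker)")
    ```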

  13. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    PubMed Central

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-01-01

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well. PMID:22482640

  14. Using Precept-Assist® to predict performance on the American Board of Family Medicine In-Training Examination.

    PubMed

    Post, Robert E; Jamena, Gemma P; Gamble, James D

    2014-09-01

    Precept-Assist® (PA) is a computer-based program developed by the Virtua Family Medicine Residency where residents receive a score on a Likert-type scale from an attending for each precept based on their knowledge base. The purpose of this study was to attempt to validate this program for precepting family medicine residents. This was a validation study. PA and American Board of Family Medicine (ABFM) In-Training Exam (ITE) scores for all residents from a community-based family medicine residency between the years 2002 and 2011 were included (n=216). Pearson correlation coefficients were calculated between PA scores for the second quarter of the academic year (October 1 to December 31) and scores on the ITE. An ROC curve was also created to determine sensitivity and specificity for various PA scores in predicting residents scoring 500 or above on the ITE. The PA mean (SD) score was 5.18 (0.84) and the ITE mean (SD) score was 425.1 (87.6). The Pearson correlation coefficient between PA and ITE scores was 0.55, which is a moderately positive correlation. The AUC of the ROC curve was 0.783 (95% CI 0.704-0.859). A PA score of 5.5 (between the level of a PGY-2 and PGY-3) was 72% sensitive and 77% specific for scoring 500 or above on the ITE with a positive LR of 3.12. There is a significant correlation between PA scores and ABFM In-Training Exam scores. PA is a valid screening tool that can be used as a predictor for future performance in Family Medicine In-Training exams.
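
    The reported positive likelihood ratio follows directly from the quoted sensitivity and specificity at the 5.5 cut-off, LR+ = sensitivity/(1 - specificity):

    ```python
    # Positive likelihood ratio implied by the reported cut-off:
    # LR+ = sensitivity / (1 - specificity).
    sens, spec = 0.72, 0.77
    print(f"LR+ = {sens / (1.0 - spec):.2f}")   # ~3.13, matching the reported 3.12
    ```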

  15. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of the wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller which drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller with weights computed by the PSO-based technique.
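
    The upper level of the two-level scheme can be sketched as a plain PSO loop over candidate weight vectors. The quadratic surrogate cost below is a hypothetical stand-in for actually running the NMPC-controlled turbine simulation, which is far too expensive to inline here.

    ```python
    # Minimal particle swarm optimization (PSO) for the upper level: each
    # particle is a candidate vector of NMPC weighting coefficients, scored
    # by a closed-loop cost. The quadratic cost is a hypothetical surrogate.
    import numpy as np

    rng = np.random.default_rng(11)

    def closed_loop_cost(w):
        """Hypothetical stand-in for simulating the NMPC-controlled turbine."""
        return np.sum((w - np.array([0.6, 0.3, 0.1])) ** 2)

    n_particles, dim, iters = 20, 3, 100
    x = rng.uniform(0, 1, (n_particles, dim))         # candidate weight vectors
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([closed_loop_cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]

    w_in, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([closed_loop_cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print("tuned weights:", np.round(gbest, 3))       # ~[0.6, 0.3, 0.1]
    ```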

  16. Validation of multi-detector computed tomography as a non-invasive method for measuring ovarian volume in macaques (Macaca fascicularis).

    PubMed

    Jones, Jeryl C; Appt, Susan E; Werre, Stephen R; Tan, Joshua C; Kaplan, Jay R

    2010-06-01

    The purpose of this study was to validate low radiation dose, contrast-enhanced, multi-detector computed tomography (MDCT) as a non-invasive method for measuring ovarian volume in macaques. Computed tomography scans of four known-volume phantoms and nine mature female cynomolgus macaques were acquired using a previously described, low radiation dose scanning protocol, intravenous contrast enhancement, and a 32-slice MDCT scanner. Immediately following MDCT, ovaries were surgically removed and the ovarian weights were measured. The ovarian volumes were determined using water displacement. A veterinary radiologist who was unaware of actual volumes measured ovarian CT volumes three times, using a laptop computer, pen display tablet, hand-traced regions of interest, and free image analysis software. A statistician selected and performed all tests comparing the actual and CT data. Ovaries were successfully located in all MDCT scans. The iliac arteries and veins, uterus, fallopian tubes, cervix, ureters, urinary bladder, rectum, and colon were also consistently visualized. Large antral follicles were detected in six ovaries. Phantom mean CT volume was 0.702 +/- 0.504 cc (SD) and mean actual volume was 0.743 +/- 0.526 cc (SD). Ovary mean CT volume was 0.258 +/- 0.159 cc (SD) and mean water displacement volume was 0.257 +/- 0.145 cc (SD). For phantoms, the mean coefficient of variation of the CT volumes was 2.5%. For ovaries, the least squares mean coefficient of variation of the CT volumes was 5.4%. Ovarian CT volume was significantly associated with actual ovarian volume (ICC 0.79, regression coefficient 0.5, P=0.0006) and with actual ovarian weight (ICC 0.62, regression coefficient 0.6, P=0.015). There was no association between CT volume accuracy and mean ovarian CT density (degree of intravenous contrast enhancement), and there was no proportional or fixed bias in the CT volume measurements. Findings from this study indicate that MDCT is a valid non-invasive technique for measuring ovarian volume in macaques.
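
    As an illustration of the repeatability statistic reported above, the coefficient of variation across the three repeated CT readings can be computed as follows (synthetic volumes, not the study data):

      import numpy as np

      # Three repeated CT volume readings [cc] for a few hypothetical ovaries.
      readings = np.array([
          [0.25, 0.26, 0.24],
          [0.40, 0.43, 0.41],
          [0.15, 0.16, 0.15],
      ])

      # Per-ovary coefficient of variation: SD of the repeats / their mean.
      cv = readings.std(axis=1, ddof=1) / readings.mean(axis=1) * 100.0
      print("per-ovary CV [%]:", np.round(cv, 1), " mean CV [%]:", round(cv.mean(), 1))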

  17. Consistent Large-Eddy Simulation of a Temporal Mixing Layer Laden with Evaporating Drops. Part 2; A Posteriori Modelling

    NASA Technical Reports Server (NTRS)

    Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette

    2005-01-01

    Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational drops than physical drops. It is found that both the constant-coefficient and the dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp the generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. The ability of the dynamic models to determine the model coefficient during LES is found to be essential, since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS-calibrated coefficients. With accurate SGS-flux models, namely scale-similarity and dynamic gradient, the FST model allows up to a 32-fold reduction in computational drops compared to the number of physical drops without degradation of accuracy; a 64-fold reduction leads to a slight decrease in accuracy.
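
    As a point of reference for the SGS-flux models compared above, the constant-coefficient Smagorinsky model computes an eddy viscosity nu_sgs = (Cs*Delta)^2 |S| from the resolved strain rate. A minimal 2-D sketch on a toy velocity field follows; Cs = 0.17 is a conventional, assumed value:

      import numpy as np

      n, L = 64, 2 * np.pi
      dx = L / n
      x = np.linspace(0, L, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      u = np.sin(X) * np.cos(Y)          # toy resolved velocity field
      v = -np.cos(X) * np.sin(Y)

      dudx, dudy = np.gradient(u, dx, dx)
      dvdx, dvdy = np.gradient(v, dx, dx)
      S11, S22 = dudx, dvdy
      S12 = 0.5 * (dudy + dvdx)
      Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))  # |S| = sqrt(2 Sij Sij)

      Cs, Delta = 0.17, dx               # assumed model constant, filter width
      nu_sgs = (Cs * Delta) ** 2 * Smag
      print(f"mean SGS eddy viscosity: {nu_sgs.mean():.3e}")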

  18. Wrong Signs in Regression Coefficients

    NASA Technical Reports Server (NTRS)

    McGee, Holly

    1999-01-01

    When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
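
    The multicollinearity cause is easy to reproduce: when two cost drivers are nearly collinear, their individual regression coefficients become unstable, and one can come out with the wrong sign even though both true effects are positive. A small synthetic demonstration:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 30
      x1 = rng.uniform(0, 10, n)
      x2 = x1 + rng.normal(0, 0.05, n)     # nearly collinear with x1
      y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 1.0, n)

      X = np.column_stack([np.ones(n), x1, x2])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted coefficients (intercept, x1, x2):", np.round(beta, 2))
      # With near-collinear x1 and x2 the individual coefficients are poorly
      # determined; one may land far from its true value (+2 and +1 here) or
      # even come out negative, i.e. with the "wrong sign".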

  19. Quantitative differentiation of breast lesions at 3T diffusion-weighted imaging (DWI) using the ratio of distributed diffusion coefficient (DDC).

    PubMed

    Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin

    2016-12-01

    To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue, derived from mono- and stretched-exponential models, for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity were obtained for lesions and for the glandular tissue of the contralateral breast. The apparent diffusion coefficient (ADC) and the distributed diffusion coefficient (DDC) were estimated by nonlinear fitting of the mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched-exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% for lesions. High correlation was found between the models for the diffusion coefficients (0.99-0.81) and the coefficient ratios (0.94). The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, and 0.90, respectively), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio yields 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched-exponential model fits the signal intensity measurements from both lesion and glandular tissue ROIs better. Although the DDC ratio estimated using this model shows higher diagnostic accuracy than the ADC ratio, lesion DDC, and lesion ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
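
    The two signal models being compared have simple closed forms: S(b) = S0 exp(-b*ADC) for the monoexponential model and S(b) = S0 exp(-(b*DDC)^alpha) for the stretched-exponential model. The sketch below fits both to a synthetic lesion signal; the b-values and the glandular tissue DDC are assumed for illustration:

      import numpy as np
      from scipy.optimize import curve_fit

      b = np.array([0.0, 200.0, 500.0, 800.0, 1000.0])   # b-values [s/mm^2]
      S = 1000.0 * np.exp(-(b * 1.1e-3) ** 0.8)          # synthetic lesion signal

      def mono(b, S0, adc):
          return S0 * np.exp(-b * adc)

      def stretched(b, S0, ddc, alpha):
          return S0 * np.exp(-(b * ddc) ** alpha)

      p_mono, _ = curve_fit(mono, b, S, p0=[1000.0, 1e-3])
      p_str, _ = curve_fit(stretched, b, S, p0=[1000.0, 1e-3, 0.9],
                           bounds=([0.0, 0.0, 0.1], [np.inf, 1e-2, 1.0]))
      adc, ddc = p_mono[1], p_str[1]

      ddc_glandular = 1.8e-3                             # assumed glandular tissue DDC
      print(f"ADC={adc:.2e}  DDC={ddc:.2e}  DDC ratio={ddc / ddc_glandular:.2f}")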

  20. Thermoelectric property measurements with computer controlled systems

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.
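
    The report gives only the outline of the curve-fitting idea, so the following is a loose sketch under assumed details: track the signal-to-noise ratio of the running mean of repeated Hall-voltage readings, fit its sqrt(N) growth, and predict how many samples are needed to reach a target SNR instead of acquiring a long fixed-length record:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(7)
      true_v, noise = 12.0e-6, 40.0e-6          # Hall voltage [V], noise sigma [V]
      samples = true_v + noise * rng.standard_normal(400)

      n = np.arange(10, len(samples) + 1)
      snr = np.array([samples[:k].mean() / samples[:k].std(ddof=1) * np.sqrt(k)
                      for k in n])              # SNR of the running mean

      # For white noise the SNR of the mean grows as a*sqrt(N); fit and extrapolate.
      growth = lambda N, a: a * np.sqrt(N)
      (a,), _ = curve_fit(growth, n, snr)
      target = 10.0                             # desired SNR
      print(f"predicted samples needed for SNR {target}: {int((target / a) ** 2)}")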
