NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.
Ingalls, Brian; Mincheva, Maya; Roussel, Marc R
2017-07-01
A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
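As a rough illustration of the approach summarized above, the sketch below estimates the sensitivities of the period and of an extremum of a delayed negative-feedback oscillator by finite differences. The model dx/dt = beta/(1 + (x(t-tau)/K)^n) - gamma*x, all parameter values, and the fixed-step Euler integration with a history buffer are illustrative assumptions; this is not the three-delay RNA-interference model of the paper. Because phase shifts make raw state sensitivities of a periodic orbit diverge, only the period and the peak value are differentiated here.

```python
# Minimal sketch (assumed model and parameters): finite-difference sensitivities of
# the period and of a peak value for a delayed negative-feedback oscillator.
import numpy as np

def simulate(beta=5.0, K=1.0, n=4.0, gamma=1.0, tau=3.0, dt=0.005, t_end=300.0):
    """Fixed-step Euler integration with a delay buffer (method of steps)."""
    lag = int(round(tau / dt))
    steps = int(t_end / dt)
    x = np.empty(steps + 1)
    x[:lag + 1] = 0.5                      # constant history on [-tau, 0]
    for i in range(lag, steps):
        x_del = x[i - lag]                 # delayed state x(t - tau)
        x[i + 1] = x[i] + dt * (beta / (1.0 + (x_del / K) ** n) - gamma * x[i])
    return x

def period_and_max(x, dt, discard_frac=0.5):
    """Period from successive interior maxima; amplitude from the last maximum."""
    y = x[int(len(x) * discard_frac):]     # drop the transient
    peaks = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1
    return np.mean(np.diff(peaks)) * dt, y[peaks[-1]]

dt = 0.005
h = 0.01                                    # relative perturbation of the delay tau
base_tau = 3.0
T0, A0 = period_and_max(simulate(tau=base_tau), dt)
T1, A1 = period_and_max(simulate(tau=base_tau * (1 + h)), dt)
dT_dtau = (T1 - T0) / (base_tau * h)        # sensitivity of the period
dA_dtau = (A1 - A0) / (base_tau * h)        # sensitivity of the peak (extremum)
print(f"period={T0:.3f}, dT/dtau={dT_dtau:.3f}, dmax/dtau={dA_dtau:.3f}")
```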
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
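A minimal sketch of one way to compute gradient-based (derivative-based) sensitivity measures under correlated parameter uncertainty follows. The toy Langmuir competitive adsorption expression, the log-normal parameter distribution, and the correlation value are assumptions for illustration; the paper's actual estimators and quantum-derived data are not reproduced.

```python
# Minimal sketch (assumed model and distributions): derivative-based sensitivity
# measures nu_i = E[(dy/dtheta_i)^2] for a toy Langmuir competitive adsorption
# model with correlated log-normal parameters.
import numpy as np

rng = np.random.default_rng(0)
p_A, p_B = 1.0, 1.0                        # fixed partial pressures (assumed)

def coverage_A(logK):
    K_A, K_B = np.exp(logK[..., 0]), np.exp(logK[..., 1])
    return K_A * p_A / (1.0 + K_A * p_A + K_B * p_B)

def grad_coverage_A(logK, h=1e-5):
    """Central finite-difference gradient w.r.t. (log K_A, log K_B)."""
    g = np.empty_like(logK)
    for i in range(logK.shape[-1]):
        e = np.zeros(logK.shape[-1]); e[i] = h
        g[..., i] = (coverage_A(logK + e) - coverage_A(logK - e)) / (2 * h)
    return g

def dgsm(rho, n=20000):
    """Derivative-based global sensitivity measures under correlation rho."""
    mean = np.array([0.0, 0.0])            # assumed means of log K_A, log K_B
    cov = np.array([[0.25, rho * 0.25],    # assumed variances and correlation
                    [rho * 0.25, 0.25]])
    samples = rng.multivariate_normal(mean, cov, size=n)
    return (grad_coverage_A(samples) ** 2).mean(axis=0)

for rho in (0.0, 0.8):
    nu = dgsm(rho)
    print(f"rho={rho}: nu_KA={nu[0]:.4f}, nu_KB={nu[1]:.4f}")
```

Comparing the two printed lines shows how the ranking of the two adsorption constants can shift once their correlation is accounted for, which is the point made in the abstract.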
Parametrization study of the land multiparameter VTI elastic waveform inversion
NASA Astrophysics Data System (ADS)
He, W.; Plessix, R.-É.; Singh, S.
2018-06-01
Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is only sensitive in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths, which make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should not only be based on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
NASA Technical Reports Server (NTRS)
Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.
1975-01-01
The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.
Mindfulness, Empathy, and Intercultural Sensitivity amongst Undergraduate Students
ERIC Educational Resources Information Center
Menardo, Dayne Arvin
2017-01-01
This study examined the relationships amongst mindfulness, empathy, and intercultural sensitivity. Non-parametric analyses were conducted through Spearman correlations and Hayes's PROCESS bootstrapping to examine the relationship between mindfulness and intercultural sensitivity, and whether empathy mediates the relationship between mindfulness and…
Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.
2016-01-01
We proposed a sensitivity enhancement method for the interference-based signal detection approach and applied it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and a parametric balanced detector (BD). The parametric BD was realized by combining the signal and the phase-conjugated idler band newly generated through the FOPA, specifically by superimposing these two bands at a photodetector. The sensitivity enhancements by the FOPA and the parametric BD in SS-OCT were demonstrated experimentally. The results show that SS-OCT with FOPA and SS-OCT with parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, when compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify and elaborate on this sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes using the conventional SS-OCT setup, SS-OCT with FOPA, and SS-OCT with parametric BD at different illumination power levels. All these results proved that using the FOPA and the parametric BD can improve the sensitivity significantly in SS-OCT systems. PMID:27446655
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
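The sketch below illustrates the central idea in a much simplified form: the inherent randomness of a stochastic simulator (here reduced to the seed of a Gillespie birth-death simulation) is treated as an additional input factor alongside the kinetic parameters, and first-order Sobol indices are estimated with a standard pick-freeze estimator. The rates, ranges, final time, and the reduction of the noise source to a single seed are assumptions; this is not the random-time-change construction of the paper.

```python
# Minimal sketch (assumed rates, ranges, and estimator): first-order Sobol indices
# for a stochastic birth-death simulator, treating the inherent randomness as an
# additional input factor next to the two kinetic parameters.
import numpy as np

def birth_death(k_birth, k_death, seed, x0=20, t_end=5.0):
    """Gillespie SSA for X -> X+1 (rate k_birth) and X -> X-1 (rate k_death*X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a_birth, a_death = k_birth, k_death * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() < a_birth / a0 else -1

def model(u):
    """Map a row of factors in [0,1]^3 to the simulator output."""
    k_birth = 5.0 + 10.0 * u[0]           # parametric factor 1
    k_death = 0.2 + 0.8 * u[1]            # parametric factor 2
    seed = int(u[2] * 2**31)              # "inherent" noise factor (the noise stream)
    return birth_death(k_birth, k_death, seed)

rng = np.random.default_rng(1)
N, d = 2000, 3
A, B = rng.random((N, d)), rng.random((N, d))
yA = np.array([model(a) for a in A])
yB = np.array([model(b) for b in B])
var = np.var(np.concatenate([yA, yB]))
for i, name in enumerate(["k_birth", "k_death", "inherent"]):
    ABi = A.copy(); ABi[:, i] = B[:, i]   # "freeze" all factors except factor i
    yABi = np.array([model(row) for row in ABi])
    S_i = np.mean(yB * (yABi - yA)) / var # Saltelli-type first-order estimator
    print(f"S_{name} = {S_i:.3f}")
```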
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
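A minimal sketch of the screening step, the Morris method of elementary effects, is given below on a placeholder nonlinear function standing in for an LCA model; the test function, factor ranges, and trajectory count are assumptions. The factors retained as influential would then be passed to the more expensive contribution-to-variance step described above.

```python
# Minimal sketch (assumed test function and settings): Morris elementary-effects
# screening with r one-at-a-time trajectories on the unit hypercube.
import numpy as np

def model(x):
    # placeholder nonlinear model with one dominant and one interacting factor
    return 5.0 * x[0] + x[1] ** 2 + 0.5 * x[0] * x[2] + 0.1 * x[3]

def elementary_effects(model, k=4, r=50, levels=8, seed=0):
    """Return mu* (mean absolute effect) and sigma (spread) for each factor."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, levels // 2, size=k) / (levels - 1)  # base point
        y = model(x)
        for i in rng.permutation(k):          # move one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            ee[t, i] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

mu_star, sigma = elementary_effects(model)
for i in range(4):
    print(f"x{i}: mu*={mu_star[i]:.2f}  sigma={sigma[i]:.2f}")
```

Factors with both small mu* and small sigma can be fixed at nominal values before the variance-based step, which is what makes the two-step procedure computationally attractive.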
Solid state SPS microwave generation and transmission study. Volume 1: Phase 2
NASA Technical Reports Server (NTRS)
Maynard, O. E.
1980-01-01
The solid state sandwich concept for Solar Power Station (SPS) was investigated. The design effort concentrated on the spacetenna, but did include some system analysis for parametric comparison reasons. The study specifically included definition and math modeling of basic solid state microwave devices, an initial conceptual subsystems and system design, sidelobe control and system selection, an assessment of selected system concept and parametric solid state microwave power transmission system data relevant to the SPS concept. Although device efficiency was not a goal, the sensitivities to design of this efficiency were parametrically treated. Sidelobe control consisted of various single step tapers, multistep tapers, and Gaussian tapers. A preliminary assessment of a hybrid concept using tubes and solid state is also included. There is a considerable amount of thermal analysis provided with emphasis on sensitivities to waste heat radiator form factor, emissivity, absorptivity, amplifier efficiency, material and junction temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin in this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 parameters significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters that can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
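The sketch below shows a generic One-Factor-At-A-Time screening with a normalized sensitivity index (relative output change divided by relative parameter change). The toy gross-margin function and baseline values are placeholders, not the Ait Ben Yacoub farm model, and the index definition is one common convention rather than necessarily the one used in the study.

```python
# Minimal sketch (assumed response function and baseline values): one-factor-at-a-time
# (OAT) screening with a normalized sensitivity index SI_i.
def gross_margin(p):
    """Toy agro-economic response: yield depends on water availability and prices."""
    water_use = min(p["water_supply"] + p["precipitation"], p["crop_water_demand"])
    crop_yield = p["max_yield"] * (water_use / p["crop_water_demand"]) ** p["yield_response"]
    return crop_yield * p["crop_price"] - p["production_cost"]

baseline = {"water_supply": 500.0, "precipitation": 200.0, "crop_water_demand": 800.0,
            "max_yield": 6.0, "yield_response": 1.2, "crop_price": 250.0,
            "production_cost": 600.0}

y0 = gross_margin(baseline)
rel_step = 0.10                               # +10 % perturbation of each factor in turn
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * (1 + rel_step)})
    si = ((gross_margin(perturbed) - y0) / y0) / rel_step
    print(f"{name:18s} SI = {si:+.2f}")
```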
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
Solid state SPS microwave generation and transmission study. Volume 2, phase 2: Appendices
NASA Technical Reports Server (NTRS)
Maynard, O. E.
1980-01-01
The solid state sandwich concept for SPS was further defined. The design effort concentrated on the spacetenna, but did include some system analysis for parametric comparison reasons. Basic solid state microwave devices were defined and modeled. An initial conceptual subsystems and system design was performed as well as sidelobe control and system selection. The selected system concept and parametric solid state microwave power transmission system data were assessed relevant to the SPS concept. Although device efficiency was not a goal, the sensitivities to design of this efficiency were parametrically treated. Sidelobe control consisted of various single step tapers, multistep tapers and Gaussian tapers. A hybrid concept using tubes and solid state was evaluated. Thermal analyses are included with emphasis on sensitivities to waste heat radiator form factor, emissivity, absorptivity, amplifier efficiency, material and junction temperature.
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
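A much simplified sketch of such a process sensitivity index is given below: each process (recharge, geology) is represented by two competing models with their own random parameters, and the index is the fraction of output variance explained by the lumped factor (model choice plus parameters) of that process, estimated by nested Monte Carlo. The candidate models, prior weights, parameter ranges, and the toy output function are assumptions for demonstration only, not the study's groundwater model.

```python
# Minimal sketch (assumed process models, weights, ranges, and output): a variance-based
# process sensitivity index that lumps model choice and parameter uncertainty per process.
import numpy as np

rng = np.random.default_rng(0)
PRECIP = 1000.0                                     # mm/yr, assumed forcing

def sample_recharge():
    """Process 1: two competing precipitation-to-recharge models."""
    if rng.random() < 0.5:                          # model R1: linear fraction
        return rng.uniform(0.05, 0.20) * PRECIP
    thresh = rng.uniform(300.0, 600.0)              # model R2: threshold model
    return 0.3 * max(PRECIP - thresh, 0.0)

def sample_conductivity():
    """Process 2: two competing hydraulic-conductivity parameterizations."""
    if rng.random() < 0.5:                          # model G1: homogeneous K
        return 10 ** rng.uniform(-1.0, 1.0)
    k1, k2 = 10 ** rng.uniform(-2.0, 0.0), 10 ** rng.uniform(0.0, 2.0)
    return 2.0 / (1.0 / k1 + 1.0 / k2)              # model G2: layered (harmonic mean)

def output(recharge, K):
    """Toy output: a travel-time-like metric for the reactive plume."""
    return recharge / K

def process_index(sample_focal, sample_other, focal_is_recharge, n_outer=500, n_inner=200):
    cond_means, all_y = np.empty(n_outer), []
    for i in range(n_outer):
        focal = sample_focal()                      # freeze the focal process (model + params)
        ys = []
        for _ in range(n_inner):
            other = sample_other()
            r, K = (focal, other) if focal_is_recharge else (other, focal)
            ys.append(output(r, K))
        cond_means[i] = np.mean(ys)
        all_y.extend(ys)
    return np.var(cond_means) / np.var(all_y)       # Var(E[y | process]) / Var(y)

print("PS_recharge =", round(process_index(sample_recharge, sample_conductivity, True), 3))
print("PS_geology  =", round(process_index(sample_conductivity, sample_recharge, False), 3))
```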
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
Examined is the sensitivity of separation extent, wall pressure and heating to variation of the primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and axisymmetric hypersonic shock-wave/turbulent-boundary-layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free-interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments, extensively used in a prior related uncertainty analysis, provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations regarding future turbulence modeling approaches.
Kumemura, Momoko; Odake, Tamao; Korenaga, Takashi
2005-06-01
A laser-induced fluorescence microscopic system based on optical parametric oscillation has been constructed as a tunable detector for microchip analysis. The detection limit for sulforhodamine B (Ex. 520 nm, Em. 570 nm) was 0.2 μmol, which was approximately eight orders of magnitude better than with a conventional fluorophotometer. The system was applied to the determination of fluorescence-labeled DNA (Ex. 494 nm, Em. 519 nm) in a microchannel, and the detection limit reached the single-molecule level. These results showed the feasibility of this system as a highly sensitive and tunable fluorescence detector for microchip analysis.
NASA Astrophysics Data System (ADS)
Dolev, A.; Bucher, I.
2018-04-01
Mechanical or electromechanical amplifiers can exploit the high-Q and low noise features of mechanical resonance, in particular when parametric excitation is employed. Multi-frequency parametric excitation introduces tunability and is able to project weak input signals on a selected resonance. The present paper addresses multi degree of freedom mechanical amplifiers or resonators whose analysis and features require treatment of the spatial as well as temporal behavior. In some cases, virtual electronic coupling can alter the given topology of the resonator to better amplify specific inputs. An analytical development is followed by a numerical and experimental sensitivity and performance verifications, illustrating the advantages and disadvantages of such topologies.
Murphy, J R; Wasserman, S S; Baqar, S; Schlesinger, L; Ferreccio, C; Lindberg, A A; Levine, M M
1989-01-01
Experiments were performed in Baltimore, Maryland and in Santiago, Chile, to determine the level of Salmonella typhi antigen-driven in vitro lymphocyte replication response which signifies specific acquired immunity to this bacterium and to determine the best method of data analysis and form of data presentation. Lymphocyte replication was measured as incorporation of 3H-thymidine into desoxyribonucleic acid. Data (ct/min/culture) were analyzed in raw form and following log transformation, by non-parametric and parametric statistical procedures. A preference was developed for log-transformed data and discriminant analysis. Discriminant analysis of log-transformed data revealed that 3H-thymidine incorporation rates greater than 3,433 for particulate S. typhi Ty2 antigen-stimulated cultures signified acquired immunity at a sensitivity and specificity of 82.7%; for soluble S. typhi O polysaccharide antigen-stimulated cultures, ct/min/culture values of greater than 1,237 signified immunity (sensitivity and specificity 70.5%). PMID:2702777
Aerodynamic parameter studies and sensitivity analysis for rotor blades in axial flight
NASA Technical Reports Server (NTRS)
Chiu, Y. Danny; Peters, David A.
1991-01-01
The analytical capability is offered for aerodynamic parametric studies and sensitivity analyses of rotary wings in axial flight by using a 3-D undistorted wake model in curved lifting line theory. The governing equations are solved by both the Multhopp interpolation technique and the vortex lattice method. The singularity from the bound vortices is eliminated through Hadamard's finite-part concept. Good numerical agreement between both analytical methods and finite difference methods is found. Parametric studies were made to assess the effects of several shape variables on aerodynamic loads. It is found, e.g., that a rotor blade with out-of-plane and in-plane curvature can theoretically increase lift in the inboard and outboard regions, respectively, without introducing additional induced drag.
NASA Astrophysics Data System (ADS)
Voss, Paul L.; Köprülü, Kahraman G.; Kumar, Prem
2006-04-01
We present a quantum theory of nondegenerate phase-sensitive parametric amplification in a χ(3) nonlinear medium. The nonzero response time of the Kerr (χ(3)) nonlinearity determines the quantum-limited noise figure of χ(3) parametric amplification, as well as the limit on quadrature squeezing. This nonzero response time of the nonlinearity requires coupling of the parametric process to a molecular vibration phonon bath, causing the addition of excess noise through spontaneous Raman scattering. We present analytical expressions for the quantum-limited noise figure of frequency nondegenerate and frequency degenerate χ(3) parametric amplifiers operated as phase-sensitive amplifiers. We also present results for frequency nondegenerate quadrature squeezing. We show that our nondegenerate squeezing theory agrees with the degenerate squeezing theory of Boivin and Shapiro as degeneracy is approached. We have also included the effect of linear loss on the phase-sensitive process.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
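As a rough illustration of the multi-parametric sensitivity analysis (MPSA) idea listed among the implemented methods, the sketch below samples parameters of a toy kinetic model, splits the runs into acceptable and unacceptable by an error threshold, and ranks parameters by the Kolmogorov-Smirnov distance between the two groups. The toy model, reference parameters, ranges, and median threshold are assumptions; this is not SBML-SAT's own code.

```python
# Minimal sketch (assumed model, ranges, and threshold): multi-parametric sensitivity
# analysis (MPSA) via acceptable/unacceptable classification and KS statistics.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)

def simulate(k_syn, k_deg, k_m):
    """Toy protein level: constant synthesis, saturable degradation (explicit Euler)."""
    x = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        x[i] = x[i - 1] + dt * (k_syn - k_deg * x[i - 1] / (k_m + x[i - 1]))
    return x

ref = simulate(1.0, 2.0, 0.5)                     # synthetic "observed" data
names = ["k_syn", "k_deg", "k_m"]
lo, hi = np.array([0.2, 0.5, 0.1]), np.array([2.0, 4.0, 2.0])

n = 5000
params = lo + (hi - lo) * rng.random((n, len(names)))
sse = np.array([np.sum((simulate(*p) - ref) ** 2) for p in params])

acceptable = sse <= np.median(sse)                # threshold: median error
for j, name in enumerate(names):
    ks = ks_2samp(params[acceptable, j], params[~acceptable, j]).statistic
    print(f"{name}: KS = {ks:.3f}")               # larger KS -> more influential
```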
Sensitivity analysis of the space shuttle to ascent wind profiles
NASA Technical Reports Server (NTRS)
Smith, O. E.; Austin, L. D., Jr.
1982-01-01
A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.
Parametric Covariance Model for Horizon-Based Optical Navigation
NASA Technical Reports Server (NTRS)
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
NASA Astrophysics Data System (ADS)
Harshan, Suraj
The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical sub-urban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance of the outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2018-01-01
A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations, and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait and the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.
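A minimal sketch of the kind of simulation underlying such a study follows: Euler-Maruyama integration of a Kaldor-type income/capital system with additive noise on income and parametric (multiplicative) noise on the savings rate. The sigmoidal investment function, the linear savings term, and all coefficients are generic textbook-style assumptions, not necessarily the forms or values used in the paper.

```python
# Minimal sketch (assumed functional forms and coefficients): Euler-Maruyama for a
# Kaldor-type model with additive noise on income and parametric noise on savings.
import numpy as np

rng = np.random.default_rng(0)

def investment(Y, K, a=1.0, b=0.15):
    # assumed S-shaped investment in income minus a linear capital term
    return a / (1.0 + np.exp(-1.5 * (Y - 1.0))) - b * K

def simulate(alpha=3.0, s=0.25, delta=0.2, eps_add=0.0, eps_par=0.0,
             dt=0.005, t_end=200.0, Y0=1.0, K0=1.0):
    n = int(t_end / dt)
    Y, K = np.empty(n), np.empty(n)
    Y[0], K[0] = Y0, K0
    sqdt = np.sqrt(dt)
    for i in range(n - 1):
        xi_add, xi_par = rng.standard_normal(2)
        I = investment(Y[i], K[i])
        # additive noise on income; parametric noise enters through the savings rate
        Y[i + 1] = (Y[i] + dt * alpha * (I - s * Y[i])
                    + eps_add * sqdt * xi_add
                    - eps_par * alpha * Y[i] * sqdt * xi_par)
        K[i + 1] = K[i] + dt * (I - delta * K[i])
    return Y, K

# compare the spread of income trajectories without noise, with additive noise,
# and with parametric noise (second half of each run only)
for eps_add, eps_par in [(0.0, 0.0), (0.02, 0.0), (0.0, 0.05)]:
    Y, _ = simulate(eps_add=eps_add, eps_par=eps_par)
    tail = Y[len(Y) // 2:]
    print(f"eps_add={eps_add}, eps_par={eps_par}: income range "
          f"[{tail.min():.2f}, {tail.max():.2f}]")
```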
Dynamic analysis of Apollo-Salyut/Soyuz docking
NASA Technical Reports Server (NTRS)
Schliesing, J. A.
1972-01-01
The use of a docking-system computer program in analyzing the dynamic environment produced by two impacting spacecraft and the attitude control systems is discussed. Performance studies were conducted to determine the mechanism load and capture sensitivity to parametric changes in the initial impact conditions. As indicated by the studies, capture latching is most sensitive to vehicle angular-alinement errors and is least sensitive to lateral-miss error. As proved by load-sensitivity studies, peak loads acting on the Apollo spacecraft are considerably lower than the Apollo design-limit loads.
Sensitivity analysis of hydrodynamic stability operators
NASA Technical Reports Server (NTRS)
Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.
1992-01-01
The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
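The sketch below illustrates the epsilon-pseudospectrum idea on a small, strongly non-normal matrix standing in for a discretized stability operator: the resolvent is probed through the smallest singular value of zI - A on a grid, and points where that value drops below epsilon belong to the epsilon-pseudospectrum, i.e. an order-epsilon perturbation of A can place an eigenvalue there. The 2x2 example and the grid are assumptions for illustration.

```python
# Minimal sketch (assumed example matrix): epsilon-pseudospectra of a non-normal
# matrix via the smallest singular value of (zI - A) on a grid.
import numpy as np

A = np.array([[-0.1, 100.0],
              [ 0.0,  -0.2]])          # strongly non-normal: large off-diagonal coupling

print("eigenvalues:", np.linalg.eigvals(A))

xs = np.linspace(-5.0, 5.0, 201)
ys = np.linspace(-5.0, 5.0, 201)
sig_min = np.empty((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = complex(x, y)
        # smallest singular value of the shifted matrix = 1 / resolvent norm
        sig_min[i, j] = np.linalg.svd(z * np.eye(2) - A, compute_uv=False)[-1]

for eps in (1e-1, 1e-2):
    frac = (sig_min < eps).mean()
    print(f"eps={eps:g}: {100 * frac:.1f}% of the plotted window lies in the "
          f"eps-pseudospectrum")
```

Even though both eigenvalues are stable and well inside the left half-plane, the pseudospectral regions are large, which is exactly the sensitivity (and associated transient growth) attributed above to non-normality.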
Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?
Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A
2010-03-01
Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine these data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kgbw Gd-DTPA). ROIs were drawn manually within any lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by 2 observers in consensus on an ordinal scale. Receiver operating characteristics (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than for TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared to TSIC; in particular, the sensitivity was increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.
Breast-Lesion Characterization using Textural Features of Quantitative Ultrasound Parametric Maps.
Sadeghi-Naini, Ali; Suraweera, Harini; Tran, William Tyler; Hadizad, Farnoosh; Bruni, Giancarlo; Rastegar, Rashin Fallah; Curpen, Belinda; Czarnota, Gregory J
2017-10-20
This study evaluated, for the first time, the efficacy of quantitative ultrasound (QUS) spectral parametric maps in conjunction with texture-analysis techniques to differentiate benign from malignant breast lesions non-invasively. Ultrasound B-mode images and radiofrequency data were acquired from 78 patients with suspicious breast lesions. QUS spectral-analysis techniques were performed on the radiofrequency data to generate parametric maps of mid-band fit, spectral slope, spectral intercept, spacing among scatterers, average scatterer diameter, and average acoustic concentration. Texture-analysis techniques were applied to determine imaging biomarkers consisting of mean, contrast, correlation, energy and homogeneity features of the parametric maps. These biomarkers were utilized to classify benign versus malignant lesions with leave-one-patient-out cross-validation. Results were compared to histopathology findings from biopsy specimens and radiology reports on MR images to evaluate the accuracy of the technique. Among the biomarkers investigated, one mean-value parameter and 14 textural features demonstrated statistically significant differences (p < 0.05) between the two lesion types. A hybrid biomarker developed using a stepwise feature selection method could classify the lesions with a sensitivity of 96%, a specificity of 84%, and an AUC of 0.97. Findings from this study pave the way towards adapting novel QUS-based frameworks for breast cancer screening and rapid diagnosis in clinic.
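A minimal sketch of the texture-feature step is given below, assuming scikit-image (0.19 or later) is available: a gray-level co-occurrence matrix (GLCM) is computed from a quantized parametric map, and the contrast, correlation, energy, and homogeneity features are extracted. The random array stands in for a real QUS parametric map, and the quantization and GLCM settings are illustrative choices, not those of the study.

```python
# Minimal sketch (placeholder data, assumed GLCM settings): GLCM texture features
# of a parametric map using scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
parametric_map = rng.normal(size=(64, 64))          # placeholder for a QUS mid-band-fit map

# quantize the map to a small number of gray levels before computing the GLCM
levels = 32
lo, hi = parametric_map.min(), parametric_map.max()
quantized = np.round((parametric_map - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                    levels=levels, symmetric=True, normed=True)

features = {"mean": parametric_map.mean()}
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    features[prop] = graycoprops(glcm, prop).mean() # average over distances and angles

print(features)
```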
Modeling, Modal Properties, and Mesh Stiffness Variation Instabilities of Planetary Gears
NASA Technical Reports Server (NTRS)
Parker, Robert G.; Lin, Jian; Krantz, Timothy L. (Technical Monitor)
2001-01-01
Planetary gear noise and vibration are primary concerns in their applications in helicopters, automobiles, aircraft engines, heavy machinery and marine vehicles. Dynamic analysis is essential to noise and vibration reduction. This work analytically investigates some critical issues and advances the understanding of planetary gear dynamics. A lumped-parameter model is built for the dynamic analysis of general planetary gears. The unique properties of the natural frequency spectra and vibration modes are rigorously characterized. These special structures apply for general planetary gears with cyclic symmetry and, in the practically important case, for systems with diametrically opposed planets. The special vibration properties are useful for subsequent research. Taking advantage of the derived modal properties, the natural frequency and vibration mode sensitivities to design parameters are investigated. The key parameters include mesh stiffnesses, support/bearing stiffnesses, component masses, moments of inertia, and operating speed. The eigen-sensitivities are expressed in simple, closed-form formulae associated with modal strain and kinetic energies. As disorders (e.g., mesh stiffness variation, manufacturing and assembly errors) disturb the cyclic symmetry of planetary gears, their effects on the free vibration properties are quantitatively examined. Well-defined veering rules are derived to identify dramatic changes of natural frequencies and vibration modes under parameter variations. The knowledge of free vibration properties, eigen-sensitivities, and veering rules provides important information to effectively tune the natural frequencies and optimize structural design to minimize noise and vibration. Parametric instabilities excited by mesh stiffness variations are analytically studied for multi-mesh gear systems. The discrepancies of previous studies on parametric instability of two-stage gear chains are clarified using perturbation and numerical methods. The operating conditions causing parametric instabilities are expressed in closed form suitable for design guidance. Using the well-defined modal properties of planetary gears, the effects of mesh parameters on parametric instability are analytically identified. Simple formulae are obtained to suppress particular instabilities by adjusting contact ratios and mesh phasing.
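The closed-form eigen-sensitivity idea can be illustrated on a generic lumped-parameter model: for the generalized eigenproblem K*phi = lambda*M*phi with mass-normalized modes, the sensitivity of an eigenvalue to a stiffness parameter is phi^T (dK/dk) phi, which the sketch below checks against a finite difference on a 3-DOF spring-mass chain. The chain and its numerical values are a stand-in, not a planetary-gear model.

```python
# Minimal sketch (assumed 3-DOF chain): closed-form eigenvalue sensitivity to a
# stiffness parameter, d(lambda)/dk = phi^T (dK/dk) phi, vs. a finite difference.
import numpy as np
from scipy.linalg import eigh

def assemble(k_mesh):
    """3-DOF chain: grounded springs plus one 'mesh' spring between DOFs 1 and 2."""
    M = np.diag([1.0, 2.0, 1.5])
    K = np.array([[50.0 + k_mesh, -k_mesh,         0.0],
                  [-k_mesh,        k_mesh + 80.0, -80.0],
                  [0.0,           -80.0,           80.0 + 30.0]])
    return M, K

k0 = 40.0
M, K = assemble(k0)
lam, Phi = eigh(K, M)                  # generalized eigenproblem; Phi is M-orthonormal

dK = assemble(k0 + 1.0)[1] - K         # dK/dk_mesh (K is linear in k_mesh)
closed_form = np.array([Phi[:, i] @ dK @ Phi[:, i] for i in range(3)])

h = 1e-4
lam_p = eigh(assemble(k0 + h)[1], M, eigvals_only=True)
finite_diff = (lam_p - lam) / h

for i in range(3):
    print(f"mode {i}: closed-form {closed_form[i]:+.4f}, finite-diff {finite_diff[i]:+.4f}")
```

The closed-form expression is the modal-strain-energy form referred to above; an analogous expression with the kinetic energy (a -lambda * phi^T dM phi term) handles mass and inertia parameters.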
Hwang, Eunjoo; Hu, Jingwen; Chen, Cong; Klein, Katelyn F; Miller, Carl S; Reed, Matthew P; Rupp, Jonathan D; Hallman, Jason J
2016-11-01
Occupant stature and body shape may have significant effects on injury risks in motor vehicle crashes, but the current finite element (FE) human body models (HBMs) only represent occupants with a few sizes and shapes. Our recent studies have demonstrated that, by using a mesh morphing method, parametric FE HBMs can be rapidly developed for representing a diverse population. However, the biofidelity of those models across a wide range of human attributes has not been established. Therefore, the objectives of this study are 1) to evaluate the accuracy of HBMs considering subject-specific geometry information, and 2) to apply the parametric HBMs in a sensitivity analysis for identifying the specific parameters affecting body responses in side impact conditions. Four side-impact tests with two male post-mortem human subjects (PMHSs) were selected to evaluate the accuracy of the geometry and impact responses of the morphed HBMs. For each PMHS test, three HBMs were simulated to compare with the test results: the original Total Human Model for Safety (THUMS) v4.01 (O-THUMS), a parametric THUMS (P-THUMS), and a subject-specific THUMS (S-THUMS). The P-THUMS geometry was predicted from only age, sex, stature, and BMI using our statistical geometry models of skeleton and body shape, while the S-THUMS geometry was based on each PMHS's CT data. The simulation results showed a preliminary trend that the correlations between the P-THUMS-predicted impact responses and the four PMHS tests (mean-CORA: 0.84, 0.78, 0.69, 0.70) were better than those between the O-THUMS and the normalized PMHS responses (mean-CORA: 0.74, 0.72, 0.55, 0.63), while they were similar to the correlations between the S-THUMS and the PMHS tests (mean-CORA: 0.85, 0.85, 0.67, 0.72). The sensitivity analysis using the P-THUMS showed that, in side impact conditions, the HBM skeleton and body shape geometries as well as the body posture were more important in modeling the occupant impact responses than the bone and soft tissue material properties and the padding stiffness with the given parameter ranges. More investigations are needed to further support these findings.
Shuttle cryogenic supply system optimization study. Volume 1: Management supply, sections 1 - 3
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of the cryogenic supply system for use on space shuttle vehicles was conducted. The major outputs of the analysis are: (1) evaluations of subsystem and integrated system concepts, (2) selection of representative designs, (3) parametric data and sensitivity studies, (4) evaluation of cryogenic cooling in environmental control subsystems, and (5) development of a mathematical model.
USDA-ARS?s Scientific Manuscript database
Hydrologic models are used to simulate the responses of agricultural systems to different inputs and management strategies to identify alternative management practices to cope with future climate and/or geophysical changes. The Agricultural Policy/Environmental eXtender (APEX) is a model develope...
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Different available models describe their behaviors according to materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for sensitivity analysis. A strong, frequency-dependent parameter hierarchy is shown. Sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with a rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including mechanical effects of the frame, and conclusions are drawn through numerical simulations.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.
Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N
2013-01-01
Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.
2015-01-01
Modern statistical methods for incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used to evaluate diagnostic tests or biomarkers in medical research, has seen increasing interest in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and of the assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
DOE Office of Scientific and Technical Information (OSTI.GOV)
Advani, S.H.; Lee, T.S.; Moon, H.
1992-10-01
The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) Application of the characteristic time concept and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts). (b) Incorporation of leak-off effects for the models investigated in (a). (c) Simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties. (d) Development of guidelines for hydraulic fracture design and optimization.
Controllability of Free-piston Stirling Engine/linear Alternator Driving a Dynamic Load
NASA Technical Reports Server (NTRS)
Kankam, M. David; Rauch, Jeffrey S.
1994-01-01
This paper presents the dynamic behavior of a Free-Piston Stirling Engine/linear alternator (FPSE/LA) driving a single-phase fractional horse-power induction motor. The controllability and dynamic stability of the system are discussed by means of sensitivity effects of variations in system parameters, engine controller, operating conditions, and mechanical loading on the induction motor. The approach used expands on a combined mechanical and thermodynamic formulation employed in a previous paper. The application of state-space technique and frequency domain analysis enhances understanding of the dynamic interactions. Engine-alternator parametric sensitivity studies, similar to those of the previous paper, are summarized. Detailed discussions are provided for parametric variations which relate to the engine controller and system operating conditions. The results suggest that the controllability of a FPSE-based power system is enhanced by proper operating conditions and built-in controls.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
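A minimal sketch (not the authors' implementation) of the two approaches, applied to a hypothetical Weibull time-to-event distribution: (1) non-parametric bootstrapping of the fitted parameters and (2) a multivariate Normal approximation around the point estimate. All data and names below are illustrative assumptions.

```python
# Illustrative only: propagate parameter uncertainty into a Weibull distribution
# that represents stochastic (patient-level) uncertainty.
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(0)
times = stats.weibull_min.rvs(c=1.4, scale=24.0, size=100, random_state=rng)  # fake IPD

def fit_weibull(sample):
    shape, _, scale = stats.weibull_min.fit(sample, floc=0)
    return np.array([shape, scale])

# Approach 1: bootstrap the correlated (shape, scale) pair.
boot = np.array([fit_weibull(rng.choice(times, size=times.size, replace=True))
                 for _ in range(500)])

# Approach 2: multivariate Normal around the point estimate, here using the
# bootstrap covariance as a stand-in for the asymptotic covariance of the MLE.
theta_hat = fit_weibull(times)
mvn = rng.multivariate_normal(theta_hat, np.cov(boot.T), size=500)

# Each parameter draw implies a different mean time-to-event; the spread of these
# means is the extra uncertainty the abstract refers to. With very small samples
# the MVN draws can hit infeasible (e.g. negative) parameter values.
for label, draws in [("bootstrap", boot), ("MVN", mvn)]:
    means = draws[:, 1] * gamma(1.0 + 1.0 / draws[:, 0])
    print(label, np.percentile(means, [2.5, 50, 97.5]))
```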
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data mechanism model assumption by comparing the datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
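A schematic sketch of the comparison step only: simulate data under a candidate MNAR mechanism for several values of a sensitivity parameter, then score each simulation by its nearest-neighbour distance to the observed values. This is an illustration of the idea, not the authors' full procedure; the MNAR model and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(0.5, 1.0, size=200)           # observed (non-missing) outcomes

def nn_distance(sim, obs, k=3):
    """Mean distance from each observed point to its k nearest simulated points."""
    d = np.abs(obs[:, None] - sim[None, :])          # pairwise 1-D distances
    return np.sort(d, axis=1)[:, :k].mean()

def simulate_under_mnar(delta, n=200):
    """Hypothetical MNAR model: large values are shifted by the sensitivity
    parameter delta before being 'observed'."""
    full = rng.normal(0.0, 1.0, size=n)
    return full + delta * (full > 0)

for delta in [0.0, 0.5, 1.0, 1.5]:
    score = np.mean([nn_distance(simulate_under_mnar(delta), observed)
                     for _ in range(50)])
    print(f"delta={delta:.1f}  mean NN distance={score:.3f}")
# Small distances flag plausible values of the sensitivity parameter; large
# distances argue against that assumed missing-data mechanism.
```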
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.
Wolf, Elizabeth Skubak; Anderson, David F
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
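For orientation only, the sketch below shows the kind of quantity being estimated, using a plain finite-difference estimator with common random numbers on a toy birth-death CTMC; the hybrid pathwise estimators described in the paper are more sophisticated and are not reproduced here.

```python
# Baseline illustration: estimate d E[X(T)] / d(birth rate) by central finite
# differences, with the same random seed driving both perturbed Gillespie paths.
import numpy as np

def ssa_final_count(birth, death, x0=10, T=5.0, rng=None):
    """Gillespie simulation of X -> X+1 (rate birth) and X -> X-1 (rate death*X)."""
    x, t = x0, 0.0
    while True:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)
        if t > T:
            return x
        if rng.random() < rates[0] / total:
            x += 1
        else:
            x -= 1

def fd_sensitivity(birth, death, h=0.05, n_paths=2000, seed=0):
    ests = []
    for i in range(n_paths):
        up = ssa_final_count(birth + h, death, rng=np.random.default_rng(seed + i))
        dn = ssa_final_count(birth - h, death, rng=np.random.default_rng(seed + i))
        ests.append((up - dn) / (2 * h))
    return np.mean(ests), np.std(ests) / np.sqrt(n_paths)

print(fd_sensitivity(birth=2.0, death=0.1))  # sensitivity of E[X(T)] to the birth rate
```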
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged to be successful based on comparisons of the optimization results with parametric studies.
Wang, Zhaolu; Liu, Hongjun; Sun, Qibing; Huang, Nan; Li, Xuefeng
2014-12-15
A width-modulated silicon waveguide is proposed to realize non-degenerate phase sensitive optical parametric amplification. It is found that the relative phase at the input of the phase sensitive amplifier (PSA) θIn-PSA can be tuned by tailoring the width and length of the second segment of the width-modulated silicon waveguide, which will influence the gain in the parametric amplification process. The maximum gain of PSA is larger by 9 dB compared with the phase insensitive amplifier (PIA) gain, and the gain bandwidth of PSA is larger by 35 nm compared with the gain bandwidth of PIA. Our on-chip PSA can find important potential applications in highly integrated optical circuits for optical chip-to-chip communication and computers.
Loring, David W; Larrabee, Glenn J
2006-06-01
The Halstead-Reitan Battery has been instrumental in the development of neuropsychological practice in the United States. Although Reitan administered both the Wechsler-Bellevue Intelligence Scale and Halstead's test battery when evaluating Halstead's theory of biologic intelligence, the relative sensitivity of each test battery to brain damage continues to be an area of controversy. Because Reitan did not perform direct parametric analysis to contrast group performances, we reanalyze Reitan's original validation data from both Halstead (Reitan, 1955) and Wechsler batteries (Reitan, 1959a) and calculate effect sizes and probability levels using traditional parametric approaches. Eight of the 10 tests comprising Halstead's original Impairment Index, as well as the Impairment Index itself, statistically differentiated patients with unequivocal brain damage from controls. In addition, 13 of 14 Wechsler measures including Full-Scale IQ also differed statistically between groups (Brain Damage Full-Scale IQ = 96.2; Control Group Full Scale IQ = 112.6). We suggest that differences in the statistical properties of each battery (e.g., raw scores vs. standardized scores) likely contribute to classification characteristics including test sensitivity and specificity.
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
Duarte, João Valente; Faustino, Ricardo; Lobo, Mercês; Cunha, Gil; Nunes, César; Ferreira, Carlos; Januário, Cristina; Castelo-Branco, Miguel
2016-10-01
Machado-Joseph Disease, inherited type 3 spinocerebellar ataxia (SCA3), is the most common form of spinocerebellar ataxia worldwide. Neuroimaging and neuropathology have consistently demonstrated cerebellar alterations. Here we aimed to discover whole-brain functional biomarkers based on parametric performance-level-dependent signals. We assessed 13 patients with early SCA3 and 14 healthy participants. We used a combined parametric behavioral/functional neuroimaging design to investigate disease fingerprints as a function of performance level, coupled with structural MRI and voxel-based morphometry. Functional magnetic resonance imaging (fMRI) was designed to parametrically analyze behavior and neural responses to audio-paced bilateral thumb movements at temporal frequencies of 1, 3, and 5 Hz. Our performance-level-based design probing neuronal correlates of motor coordination enabled the discovery that neural activation and behavior show a critical loss of parametric modulation specifically in SCA3, associated with frequency-dependent cortical/subcortical activation/deactivation patterns. Cerebellar/cortical rate-dependent dissociation patterns could clearly differentiate between groups irrespective of grey matter loss. Our findings suggest functional reorganization of the motor network and indicate a possible role of fMRI as a tool to monitor disease progression in SCA3. Accordingly, fMRI patterns proved to be potential biomarkers in early SCA3, as tested by receiver operating characteristic analysis of both behavior and neural activation at different frequencies. Discrimination analysis based on the BOLD signal in response to the applied parametric finger-tapping task frequently reached >80% sensitivity and specificity in single regions of interest. Functional fingerprints based on cerebellar and cortical BOLD performance-dependent signal modulation can thus be combined as diagnostic and/or therapeutic targets in hereditary ataxia. Hum Brain Mapp 37:3656-3668, 2016. © 2016 Wiley Periodicals, Inc.
Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution
NASA Astrophysics Data System (ADS)
Kazei, Vladimir; Alkhalifah, Tariq
2018-05-01
Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
NASA Astrophysics Data System (ADS)
Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.
2010-12-01
A nanofield-effect transistor (nano-FET) is coupled to a massive piezoelectricity-based electromechanical resonator integrated with a parametric amplifier. The mechanical parametric amplifier can enhance the resonator's displacement and the resulting electrical signal is further amplified by the nano-FET. This hybrid amplification scheme yields an increase in the mechanical displacement signal by 70 dB, resulting in a force sensitivity of 200 aN Hz^-1/2 at 3 K. The mechanical parametric amplifier can also squeeze the displacement noise in one oscillation phase by 5 dB, enabling a factor of 4 reduction in the thermomechanical noise force level.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
A PARAMETRIC STUDY OF BCS RF SURFACE IMPEDANCE WITH MAGNETIC FIELD USING THE XIAO CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reece, Charles E.; Xiao, Binping
2013-09-01
A recent analysis of field-dependent BCS rf surface impedance based on moving Cooper pairs has been presented [1]. Using this analysis coded in Mathematica™, survey calculations have been completed which examine the sensitivities of this surface impedance to variation of the BCS material parameters and temperature. The results present a refined description of the "best theoretical" performance available to potential applications with corresponding materials.
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data and can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, not all such variables are known or measurable, and this remaining variation becomes interesting to consider. This unknown and unobservable risk factor in the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study, and that the models are quite sensitive to the choice of the two different priors.
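A minimal sketch of the piecewise exponential building block: a hazard that is constant within pre-specified intervals and the survival function it implies. The Bayesian frailty model in the paper adds covariates, frailty terms and MCMC on top of this; none of that is shown here, and the cut points and hazard values are arbitrary.

```python
import numpy as np

cut_points = np.array([0.0, 10.0, 25.0, 50.0])      # interval boundaries (time units)
hazards    = np.array([0.02, 0.05, 0.01])           # constant hazard in each interval

def survival(t, cuts=cut_points, lam=hazards):
    """S(t) = exp(-cumulative hazard) with a piecewise-constant hazard."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    H = np.zeros_like(t)
    for k, (a, b) in enumerate(zip(cuts[:-1], cuts[1:])):
        H += lam[k] * np.clip(t - a, 0.0, b - a)     # time spent in interval k
    H += lam[-1] * np.clip(t - cuts[-1], 0.0, None)  # extend last hazard beyond the grid
    return np.exp(-H)

print(survival([5.0, 20.0, 40.0, 80.0]))
# A multiplicative frailty z would simply scale the hazards: exp(-z * H).
```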
NASA Astrophysics Data System (ADS)
Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh
2018-07-01
The purpose of this paper is to obtain the design parameter-landing response relations needed to quickly design the configuration of the landing gear of a planetary lander. To achieve this, parametric studies of the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. From the design of experiments (DOE) results of the landing model, the RS (response surface) functions of three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. In addition, two multi-objective optimization designs of the landing gear are carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, the strut friction, and the diameter of the bending beam. Moreover, good agreement between the simulation-model and RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the FE (finite element) method is an efficient way to obtain the design configuration of the landing gear.
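A hedged sketch of the response-surface step: fit a second-order polynomial surface to design-of-experiment results and use its coefficients as a crude sensitivity ranking. The factor names and data below are hypothetical stand-ins, not the paper's lander model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, names = 40, ["buffer_size", "strut_friction", "beam_diameter"]
X = rng.uniform(-1.0, 1.0, size=(n, 3))             # coded (normalized) DOE factors
# hypothetical "simulation" output, e.g. a peak landing response
y = 3.0 + 2.0 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=n)

# Quadratic design matrix: 1, x_i, and all x_i * x_j (including squares)
cols = [np.ones(n)] + [X[:, i] for i in range(3)]
cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, c in zip(names, coef[1:4]):
    print(f"{name:15s} linear effect {c:+.3f}")
# The R^2 of this fit plays the role of the "fitting accuracy" quoted in the abstract.
resid = y - A @ coef
print("R^2 =", 1 - resid.var() / y.var())
```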
Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.
Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan
2014-07-28
A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump-driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer with an output subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes the CE was developed and described. Stabilization relies on a digital phase-locked loop (DPLL), developed specifically to control the pump phases and guarantee stable 4MPS operation independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new continuous-operation DPLL has allowed fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement over 20 signal copies in excess of 10 dB.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
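The robust-regression ingredient can be illustrated with a generic Huber-weighted iteratively reweighted least squares fit at a single "voxel", contrasted with ordinary least squares; this is a stand-in for the robust GLM described in the paper, with entirely synthetic data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true = np.array([1.0, 0.8])
y = X @ beta_true + 0.3 * rng.normal(size=n)
y[:3] += 6.0                                             # a few outlier "subjects"

def huber_irls(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = r / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))          # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

print("OLS   :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber :", huber_irls(X, y))   # much closer to beta_true despite the outliers
```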
Variable selection for distribution-free models for longitudinal zero-inflated count responses.
Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M
2016-07-20
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
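The local, Fisher-information-based analysis that the paper argues is insufficient can be sketched for contrast: a finite-difference output Jacobian, the resulting information matrix, and its eigendecomposition exposing a "sloppy" parameter combination. This toy two-exponential model is only an illustration, not the paper's multiscale method.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 30)

def model(theta):
    k1, k2 = theta
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def jacobian(theta, h=1e-6):
    J = np.zeros((t.size, len(theta)))
    for i in range(len(theta)):
        up, dn = np.array(theta), np.array(theta)
        up[i] += h
        dn[i] -= h
        J[:, i] = (model(up) - model(dn)) / (2 * h)
    return J

theta0 = np.array([1.0, 1.05])             # nearly indistinguishable decay rates
J = jacobian(theta0)
fim = J.T @ J                              # Fisher information (unit noise variance)
eigvals, eigvecs = np.linalg.eigh(fim)
print("eigenvalues:", eigvals)             # orders of magnitude apart -> sloppiness
print("sloppy direction:", eigvecs[:, 0])  # combination of k1, k2 the data barely constrain
```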
Parametric amplification in a resonant sensing array
NASA Astrophysics Data System (ADS)
Yie, Zi; Miller, Nicholas J.; Shaw, Steven W.; Turner, Kimberly L.
2012-03-01
We demonstrate parametric amplification of a multidegree of freedom resonant mass sensing array via an applied base motion containing the appropriate frequency content and phases. Applying parametric forcing in this manner is simple and aligns naturally with the vibrational properties of the sensing structure. Using this technique, we observe an increase in the quality factors of the coupled array resonances, which provides an effective means of improving device sensitivity.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B
2014-11-01
The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.
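A sketch of a parametric mixture cure model, S(t) = pi + (1 - pi) * S_u(t) with a Weibull survival for the uncured fraction, fit by maximum likelihood to right-censored data. The data are synthetic and the setup is only an assumption for illustration; it is not the SEER analysis or the flexible models discussed in the abstract.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
n, pi_true = 400, 0.35
cured = rng.random(n) < pi_true
event_time = np.where(cured, np.inf,
                      stats.weibull_min.rvs(1.5, scale=4.0, size=n, random_state=rng))
censor_time = rng.uniform(0.0, 12.0, size=n)        # administrative censoring
t_obs = np.minimum(event_time, censor_time)
died = event_time <= censor_time

def neg_loglik(params):
    logit_pi, log_shape, log_scale = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    S_u = np.exp(-(t_obs / scale) ** shape)          # uncured survival
    f_u = stats.weibull_min.pdf(t_obs, shape, scale=scale)
    # events contribute (1-pi) f_u; censored observations contribute pi + (1-pi) S_u
    lik = np.where(died, (1 - pi) * f_u, pi + (1 - pi) * S_u)
    return -np.sum(np.log(lik + 1e-300))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
print("estimated cure fraction:", 1.0 / (1.0 + np.exp(-res.x[0])))
```

Follow-up length matters here exactly as the abstract notes: with short follow-up, the censored term dominates and pi becomes poorly identified.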
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.
Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br
2015-01-15
Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive parameters of the ADM1. • The results indicate that the ADM1 was able to predict the experimental results. • An organic loading rate above 35 kg/m³ day affects the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while the other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model, after parameter estimation, was able to predict the experimental results of effluent acetate, propionate, composites and biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m³ day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
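The Sobol' index idea can be sketched with a generic Monte Carlo estimator (pick-and-freeze / Saltelli-style sampling) on a toy three-parameter "model"; the paper computes these indices through a MARS surrogate rather than plain sampling, so this is only an illustration of what the indices measure.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # toy response: parameter 0 matters most, parameter 2 barely at all
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.1 * np.sin(2 * np.pi * x[:, 2])

d, N = 3, 20000
A = rng.random((N, d))
B = rng.random((N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # freeze column i from the second sample matrix
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y   # first-order Sobol estimator
    print(f"S_{i} ≈ {S_i:.3f}")
```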
Phase-sensitive fiber-based parametric all-optical switch.
Parra-Cetina, Josué; Kumpera, Aleš; Karlsson, Magnus; Andrekson, Peter A
2015-12-28
We experimentally demonstrate, for the first time, an all-optical switch in a phase-sensitive fiber optic parametric amplifier operated in saturation. We study the effect of phase variation of the signal and idler waves on the pump power depletion. By changing the phase of a 0.9 mW signal/idler pair wave by π/2 rad, a pump power extinction ratio of 30.4 dB is achieved. Static and dynamic characterizations are also performed and time domain results presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Yajuan
2010-04-01
The quark-lepton complementarity (QLC) is very suggestive in understanding possible relations between quark and lepton mixing matrices. We explore the QLC relations in all the possible angle-phase parametrizations and point out that they can approximately hold in five parametrizations. Furthermore, the vanishing of the smallest mixing angles in the Cabibbo-Kobayashi-Maskawa and Pontecorvo-Maki-Nakagawa-Sakata matrices can make sure that the QLC relations exactly hold in those five parametrizations. Finally, the sensitivity of the QLC relations to radiative corrections is also discussed.
Qian, Chunqi; Murphy-Boesch, Joseph; Dodd, Stephen; Koretsky, Alan
2012-09-01
A completely wireless detection coil with an integrated parametric amplifier has been constructed to provide local amplification and transmission of MR signals. The sample coil is one element of a parametric amplifier using a zero-bias diode that mixes the weak MR signal with a strong pump signal that is obtained from an inductively coupled external loop. The NMR sample coil develops current gain via reduction in the effective coil resistance. Higher gain can be obtained by adjusting the level of the pumping power closer to the oscillation threshold, but the gain is ultimately constrained by the bandwidth requirement of MRI experiments. A feasibility study here shows that on a NaCl/D₂O phantom, ²³Na signals with 20 dB of gain can be readily obtained with a concomitant bandwidth of 144 kHz. This gain is high enough that the integrated coil with parametric amplifier, which is coupled inductively to external loops, can provide sensitivity approaching that of direct wire connection. Copyright © 2012 Wiley Periodicals, Inc.
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Parametric Amplification For Detecting Weak Optical Signals
NASA Technical Reports Server (NTRS)
Hemmati, Hamid; Chen, Chien; Chakravarthi, Prakash
1996-01-01
Optical-communication receivers of proposed type implement high-sensitivity scheme of optical parametric amplification followed by direct detection for reception of extremely weak signals. Incorporates both optical parametric amplification and direct detection into optimized design enhancing effective signal-to-noise ratios during reception in photon-starved (photon-counting) regime. Eliminates need for complexity of heterodyne detection scheme and partly overcomes limitations imposed on older direct-detection schemes by noise generated in receivers and by limits on quantum efficiencies of photodetectors.
Modeling and stochastic analysis of dynamic mechanisms of the perception
NASA Astrophysics Data System (ADS)
Pisarchik, A.; Bashkirtseva, I.; Ryashko, L.
2017-10-01
Modern studies in physiology and cognitive neuroscience consider noise an important constructive factor of brain functionality. Under adequate noise, the brain can rapidly access different ordered states and provide decision-making by preventing deadlocks. Bistable dynamic models are often used to study the underlying mechanisms of visual perception. In the present paper, we consider a bistable energy model subject to both additive and parametric noise. Using the catastrophe theory formalism and the stochastic sensitivity functions technique, we analyze the response of the equilibria to noise and study noise-induced transitions between equilibria. We demonstrate and analyse the effect of hysteresis squeezing as the intensity of the noise is increased. Stochastic bifurcations connected with the suppression of oscillations by parametric noise are discussed.
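An Euler-Maruyama sketch of a generic overdamped double-well (a minimal bistable energy model) driven by both additive and parametric (state-multiplicative) noise shows the noise-induced switching between equilibria that such perception models rely on; the potential and noise levels below are arbitrary, not the authors' specific model.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 1e-3, 200.0
n = int(T / dt)
a, sigma_add, sigma_par = 1.0, 0.45, 0.2

x = np.empty(n)
x[0] = 1.0                                    # start in the right-hand well
for k in range(n - 1):
    drift = a * x[k] - x[k] ** 3              # -dU/dx for U(x) = -a x^2/2 + x^4/4
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    # additive noise plus parametric (multiplicative) noise
    x[k + 1] = x[k] + drift * dt + sigma_add * dW1 + sigma_par * x[k] * dW2

switches = np.count_nonzero(np.diff(np.sign(x)) != 0)
print("zero crossings (well-to-well transitions and recrossings):", switches)
```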
Semiautomated Workflow for Clinically Streamlined Glioma Parametric Response Mapping
Keith, Lauren; Ross, Brian D.; Galbán, Craig J.; Luker, Gary D.; Galbán, Stefanie; Zhao, Binsheng; Guo, Xiaotao; Chenevert, Thomas L.; Hoff, Benjamin A.
2017-01-01
Management of glioblastoma multiforme remains a challenging problem despite recent advances in targeted therapies. Timely assessment of therapeutic agents is hindered by the lack of standard quantitative imaging protocols for determining targeted response. Clinical response assessment for brain tumors is determined by volumetric changes assessed at 10 weeks post-treatment initiation. Further, current clinical criteria fail to use advanced quantitative imaging approaches, such as diffusion and perfusion magnetic resonance imaging. Development of the parametric response mapping (PRM) applied to diffusion-weighted magnetic resonance imaging has provided a sensitive and early biomarker of successful cytotoxic therapy in brain tumors while maintaining a spatial context within the tumor. Although PRM provides an earlier readout than volumetry and sometimes greater sensitivity compared with traditional whole-tumor diffusion statistics, it is not routinely used for patient management; an automated and standardized software for performing the analysis and for the generation of a clinical report document is required for this. We present a semiautomated and seamless workflow for image coregistration, segmentation, and PRM classification of glioblastoma multiforme diffusion-weighted magnetic resonance imaging scans. The software solution can be integrated using local hardware or performed remotely in the cloud while providing connectivity to existing picture archive and communication systems. This is an important step toward implementing PRM analysis of solid tumors in routine clinical practice. PMID:28286871
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Wang, Zhiqiang
This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on an analytical modeling approach, which includes the equivalent input impedance, active / reactive power, and DC voltage gain. Based on the derivation, the impact of primary side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve are studied and analyzed. It is shown that the desired power can be achieved by just changing frequency or voltage depending on the design value of coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
Numerical parametric studies of spray combustion instability
NASA Technical Reports Server (NTRS)
Pindera, M. Z.
1993-01-01
A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas and liquid phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of liquid phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, considerations of liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type although more rigorous (multi-step) formulations may be needed for the differences to become apparent.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of the different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
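The derivative-based indices mentioned alongside Sobol can be illustrated with a minimal derivative-based global sensitivity measure (DGSM): the mean squared partial derivative of the output, averaged over the uncertain-parameter distribution and estimated by finite differences. The attenuation-index model itself is not reproduced; a toy exponential-attenuation function and loosely chosen lognormal parameter spreads stand in.

```python
import numpy as np

rng = np.random.default_rng(7)
names = ["dispersivity", "mass_transfer_k0", "velocity"]

def attenuation(theta):
    """Toy stand-in: attenuation grows with mass transfer, shrinks with velocity."""
    disp, k0, v = theta.T
    return 1.0 - np.exp(-k0 * 10.0 / v) * (1.0 + 0.1 * disp)

N, h = 5000, 1e-4
# lognormal parameter samples with spreads loosely echoing the CV range in the study
theta = np.exp(rng.normal(loc=[0.0, -2.0, 0.5], scale=[0.35, 1.0, 0.25], size=(N, 3)))

for i, name in enumerate(names):
    up, dn = theta.copy(), theta.copy()
    up[:, i] *= (1 + h)
    dn[:, i] *= (1 - h)
    dfdx = (attenuation(up) - attenuation(dn)) / (2 * h * theta[:, i])
    print(f"{name:17s} DGSM ≈ {np.mean(dfdx ** 2):.4g}")
```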
Systems design and analysis of the microwave radiometer spacecraft
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1981-01-01
Systems design and analysis data were generated for microwave radiometer spacecraft concept using the Large Advanced Space Systems (LASS) computer aided design and analysis program. Parametric analyses were conducted for perturbations off the nominal-orbital-altitude/antenna-reflector-size and for control/propulsion system options. Optimized spacecraft mass, structural element design, and on-orbit loading data are presented. Propulsion and rigid-body control systems sensitivities to current and advanced technology are established. Spacecraft-induced and environmental effects on antenna performance (surface accuracy, defocus, and boresight off-set) are quantified and structured material frequencies and modal shapes are defined.
Nitzan, Sarah H.; Zega, Valentina; Li, Mo; Ahn, Chae H.; Corigliano, Alberto; Kenny, Thomas W.; Horsley, David A.
2015-01-01
Parametric amplification, resulting from intentionally varying a parameter in a resonator at twice its resonant frequency, has been successfully employed to increase the sensitivity of many micro- and nano-scale sensors. Here, we introduce the concept of self-induced parametric amplification, which arises naturally from nonlinear elastic coupling between the degenerate vibration modes in a micromechanical disk-resonator, and is not externally applied. The device functions as a gyroscope wherein angular rotation is detected from Coriolis coupling of elastic vibration energy from a driven vibration mode into a second degenerate sensing mode. While nonlinear elasticity in silicon resonators is extremely weak, in this high quality-factor device, ppm-level nonlinear elastic effects result in an order-of-magnitude increase in the observed sensitivity to Coriolis force relative to linear theory. Perfect degeneracy of the primary and secondary vibration modes is achieved through electrostatic frequency tuning, which also enables the phase and frequency of the parametric coupling to be varied, and we show that the resulting phase and frequency dependence of the amplification follow the theory of parametric resonance. We expect that this phenomenon will be useful for both fundamental studies of dynamic systems with low dissipation and for increasing signal-to-noise ratio in practical applications such as gyroscopes. PMID:25762243
Nitzan, Sarah H; Zega, Valentina; Li, Mo; Ahn, Chae H; Corigliano, Alberto; Kenny, Thomas W; Horsley, David A
2015-03-12
Parametric amplification, resulting from intentionally varying a parameter in a resonator at twice its resonant frequency, has been successfully employed to increase the sensitivity of many micro- and nano-scale sensors. Here, we introduce the concept of self-induced parametric amplification, which arises naturally from nonlinear elastic coupling between the degenerate vibration modes in a micromechanical disk-resonator, and is not externally applied. The device functions as a gyroscope wherein angular rotation is detected from Coriolis coupling of elastic vibration energy from a driven vibration mode into a second degenerate sensing mode. While nonlinear elasticity in silicon resonators is extremely weak, in this high quality-factor device, ppm-level nonlinear elastic effects result in an order-of-magnitude increase in the observed sensitivity to Coriolis force relative to linear theory. Perfect degeneracy of the primary and secondary vibration modes is achieved through electrostatic frequency tuning, which also enables the phase and frequency of the parametric coupling to be varied, and we show that the resulting phase and frequency dependence of the amplification follow the theory of parametric resonance. We expect that this phenomenon will be useful for both fundamental studies of dynamic systems with low dissipation and for increasing signal-to-noise ratio in practical applications such as gyroscopes.
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation in real-world applications.
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z-score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z-score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
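A minimal sketch of the leave-one-out, parametric Z-score classification described above; the data array, the log10 transform, and the significance threshold are illustrative assumptions standing in for the Key Institute's LORETA current-source values and software.

import numpy as np
from scipy import stats

# Hypothetical data: 43 subjects x 2394 gray-matter voxels of current-source power
rng = np.random.default_rng(1)
data = np.log10(rng.lognormal(mean=0.0, sigma=0.5, size=(43, 2394)))  # log10 transform

alpha = 0.05
false_positive_rates = []
for s in range(data.shape[0]):
    rest = np.delete(data, s, axis=0)                 # leave one subject out
    z = (data[s] - rest.mean(axis=0)) / rest.std(axis=0, ddof=1)
    p = 2 * stats.norm.sf(np.abs(z))                  # two-tailed parametric test
    false_positive_rates.append(np.mean(p < alpha))   # all subjects are "normal" here

print("average misclassification rate:", np.mean(false_positive_rates))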
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are unable to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
Lausch, Anthony; Yeung, Timothy Pok-Chi; Chen, Jeff; Law, Elton; Wang, Yong; Urbini, Benedetta; Donelli, Filippo; Manco, Luigi; Fainardi, Enrico; Lee, Ting-Yim; Wong, Eugene
2017-11-01
Parametric response map (PRM) analysis of functional imaging has been shown to be an effective tool for early prediction of cancer treatment outcomes and may also be well-suited toward guiding personalized adaptive radiotherapy (RT) strategies such as sub-volume boosting. However, the PRM method was primarily designed for analysis of longitudinally acquired pairs of single-parameter image data. The purpose of this study was to demonstrate the feasibility of a generalized parametric response map analysis framework, which enables analysis of multi-parametric data while maintaining the key advantages of the original PRM method. MRI-derived apparent diffusion coefficient (ADC) and relative cerebral blood volume (rCBV) maps acquired at 1 and 3-months post-RT for 19 patients with high-grade glioma were used to demonstrate the algorithm. Images were first co-registered and then standardized using normal tissue image intensity values. Tumor voxels were then plotted in a four-dimensional Cartesian space with coordinate values equal to a voxel's image intensity in each of the image volumes and an origin defined as the multi-parametric mean of normal tissue image intensity values. Voxel positions were orthogonally projected onto a line defined by the origin and a pre-determined response vector. The voxels are subsequently classified as positive, negative or nil, according to whether projected positions along the response vector exceeded a threshold distance from the origin. The response vector was selected by identifying the direction in which the standard deviation of tumor image intensity values was maximally different between responding and non-responding patients within a training dataset. Voxel classifications were visualized via familiar three-class response maps and then the fraction of tumor voxels associated with each of the classes was investigated for predictive utility analogous to the original PRM method. Independent PRM and MPRM analyses of the contrast-enhancing lesion (CEL) and a 1 cm shell of surrounding peri-tumoral tissue were performed. Prediction using tumor volume metrics was also investigated. Leave-one-out cross validation (LOOCV) was used in combination with permutation testing to assess preliminary predictive efficacy and estimate statistically robust P-values. The predictive endpoint was overall survival (OS) greater than or equal to the median OS of 18.2 months. Single-parameter PRM and multi-parametric response maps (MPRMs) were generated for each patient and used to predict OS via the LOOCV. Tumor volume metrics (P ≥ 0.071 ± 0.01) and single-parameter PRM analyses (P ≥ 0.170 ± 0.01) were not found to be predictive of OS within this study. MPRM analysis of the peri-tumoral region but not the CEL was found to be predictive of OS with a classification sensitivity, specificity and accuracy of 80%, 100%, and 89%, respectively (P = 0.001 ± 0.01). The feasibility of a generalized MPRM analysis framework was demonstrated with improved prediction of overall survival compared to the original single-parameter method when applied to a glioblastoma dataset. The proposed algorithm takes the spatial heterogeneity in multi-parametric response into consideration and enables visualization. MPRM analysis of peri-tumoral regions was shown to have predictive potential supporting further investigation of a larger glioblastoma dataset. © 2017 American Association of Physicists in Medicine.
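The projection-and-threshold step of the multi-parametric response map described above can be sketched as follows; the voxel array, origin, response vector, and threshold are hypothetical stand-ins, not values from the study.

import numpy as np

def mprm_classify(voxels, origin, response_vector, threshold):
    # Classify voxels as +1, -1, or 0 by projecting their multi-parametric
    # intensity vectors onto a response vector, as described in the abstract.
    # voxels: (n_voxels, n_parameters) standardized image intensities
    # origin: multi-parametric mean of normal-tissue intensities
    # response_vector: direction along which "response" is measured
    # threshold: minimum projected distance from the origin to call a change
    v = response_vector / np.linalg.norm(response_vector)
    proj = (voxels - origin) @ v            # signed distance along the response axis
    labels = np.zeros(len(voxels), dtype=int)
    labels[proj > threshold] = 1            # "positive" response
    labels[proj < -threshold] = -1          # "negative" response
    return labels

# Hypothetical 2-parameter example (e.g., standardized ADC and rCBV changes)
voxels = np.array([[1.2, 0.4], [-0.8, -1.1], [0.1, 0.0]])
labels = mprm_classify(voxels, origin=np.zeros(2),
                       response_vector=np.array([1.0, -1.0]), threshold=0.5)
print(labels)  # the fraction of voxels in each class would then serve as the predictor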
NASA Astrophysics Data System (ADS)
Lovell, T. Alan; Schmidt, D. K.
1994-03-01
The class of hypersonic vehicle configurations with single stage-to-orbit (SSTO) capability reflect highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge; the vehicle's overall mission performance is a function of its subsystem efficiencies including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated, hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.
NASA Astrophysics Data System (ADS)
Romero, C.; McWilliam, M.; Macías-Pérez, J.-F.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; de Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.
2018-04-01
Context. In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions will provide insights into the thermodynamic state of the intracluster medium. Aims: We seek to recover the non-parametric pressure profiles of the high-redshift (z = 0.89) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted generalized Navarro-Frenk-White (gNFW) profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions to the cluster outskirts: 0.05 R500 < r < 1.1 R500. This is a wider range of spatial scales than is typically recovered by SZ instruments. Similar analyses will be possible with the new generation of SZ instruments such as NIKA2 and MUSTANG2.
Parametric sensitivity analysis of leachate transport simulations at landfills.
Bou-Zeid, E; El-Fadel, M
2004-01-01
This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-07-06
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in using different grouping strategies for uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measures for uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
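A schematic sketch of the hierarchical (scenario/model/parameter) variance decomposition that underlies the framework described above, using the law of total variance on a placeholder model; the simulate function, scenarios, and parameter ranges are assumptions for illustration only, and the Bayesian-network machinery of the paper is not reproduced.

import numpy as np

rng = np.random.default_rng(2)

def simulate(scenario, model, theta):
    # Placeholder for a groundwater-model prediction under a given
    # scenario, conceptual model, and parameter vector theta.
    return scenario * (1.0 + 0.5 * model) * theta[0] + 0.2 * theta[1]

scenarios = [0.8, 1.0, 1.2]          # e.g., alternative recharge scenarios
models = [0, 1]                      # e.g., alternative conceptual models
n_param = 500

# Nested sampling: outer loops over scenario and model, inner loop over parameters
y = np.empty((len(scenarios), len(models), n_param))
for i, s in enumerate(scenarios):
    for j, m in enumerate(models):
        theta = rng.uniform(0.5, 1.5, size=(n_param, 2))
        y[i, j] = [simulate(s, m, t) for t in theta]

total_var = y.var()
# Law of total variance: contribution of each layer in the hierarchy
scenario_var = y.mean(axis=(1, 2)).var()        # variance of scenario means
model_var = y.mean(axis=2).var(axis=1).mean()   # model variance, averaged over scenarios
param_var = y.var(axis=2).mean()                # parametric variance, averaged over both

print(scenario_var / total_var, model_var / total_var, param_var / total_var)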
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
Phase-sensitive, through-amplification with a double-pumped JPC
NASA Astrophysics Data System (ADS)
Sliwa, K. M.; Hatridge, M.; Frattini, N. E.; Narla, A.; Shankar, S.; Devoret, M. H.
The Josephson Parametric Converter (JPC) is now routinely used as a quantum-limited signal processing device for superconducting qubit experiments. The JPC consists of two modes, the signal and the idler, that are coupled by a ring of Josephson junctions that implements a non-degenerate, three-wave mixing process. This device is conventionally operated as either a phase-preserving parametric amplifier, or a coherent frequency converter, by pumping it at the sum or difference of the signal and idler frequencies, respectively. Here we present a novel double-pumping scheme based on theory by Metelmann and Clerk where a coherent conversion process and a gain process are simultaneously imposed between the signal and idler modes. The interference of these two processes results in a phase-sensitive amplifier with only forward gain, and which breaks the traditional gain-bandwidth limit of parametric amplification. We present results on phase-sensitive amplification with increased bandwidth, and on noise performance and dynamic range that are comparable to the traditional mode of operation. Work supported by ARO, AFOSR, NSF and YINQE.
A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series
NASA Astrophysics Data System (ADS)
Ghanbarzadeh, Mitra; Aminghafari, Mina
2016-02-01
Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters that are difficult to determine, and the results are very sensitive to their values. Moreover, since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA and then prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the fact that it takes account of the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs obtained from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use, but is unable to reflect spatial differences in rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and depends on the selected percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further indicates that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
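A small sketch contrasting the non-parametric percentile threshold with a parametric (generalized Pareto) alternative for a single station; the synthetic rainfall record, wet-day cutoff, percentiles, and return level are illustrative assumptions, not the Pearl River Basin data or the DFA procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical daily precipitation record for one station (mm/day)
rain = rng.gamma(shape=0.6, scale=8.0, size=365 * 30)
wet = rain[rain > 1.0]                          # wet-day series

# Non-parametric percentile threshold (e.g., the 95th percentile of wet days)
ept_percentile = np.percentile(wet, 95)

# Parametric alternative: fit a generalized Pareto distribution to excesses
# over a high base threshold and take a high quantile as the EPT
base = np.percentile(wet, 90)
excess = wet[wet > base] - base
c, loc, scale = stats.genpareto.fit(excess, floc=0.0)
ept_parametric = base + stats.genpareto.ppf(0.99, c, loc=loc, scale=scale)

print(ept_percentile, ept_parametric)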
Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A
2011-03-29
During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area-under-the-curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. Six hundred and five systematic biopsy cores were analyzed in 50 patients, of whom 20 had 56 positive cores. MRI identified 34 of 56 positive cores. Overall, the sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast-enhanced MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity, consistent with results obtained using radical prostatectomy specimens.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous parametric inferential statistical measures are calculated from…
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid-latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters through observational studies or by improving existing parameterizations in the sea ice model.
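The emulator step described above can be sketched with a Gaussian-process surrogate trained on a modest number of model runs; the sea_ice_model function, the three-parameter space, and the kernel choice are placeholder assumptions, and the subsequent Sobol computation on the emulator is omitted.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def sea_ice_model(x):
    # Placeholder for an expensive CICE run returning, e.g., sea ice extent;
    # the real study used about 400 full model runs over roughly 40 parameters.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

X_train = rng.uniform(0, 1, size=(400, 3))
y_train = sea_ice_model(X_train)

emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * 3),
                                    normalize_y=True)
emulator.fit(X_train, y_train)

# Cross-check emulator skill, then reuse it for the (cheap) Sobol sampling step
X_test = rng.uniform(0, 1, size=(100, 3))
print("emulator R^2:", emulator.score(X_test, sea_ice_model(X_test)))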
Parametric resonance in tunable superconducting cavities
NASA Astrophysics Data System (ADS)
Wustmann, Waltraut; Shumeiko, Vitaly
2013-05-01
We develop a theory of parametric resonance in tunable superconducting cavities. The nonlinearity introduced by the superconducting quantum interference device (SQUID) attached to the cavity and damping due to connection of the cavity to a transmission line are taken into consideration. We study in detail the nonlinear classical dynamics of the cavity field below and above the parametric threshold for the degenerate parametric resonance, featuring regimes of multistability and parametric radiation. We investigate the phase-sensitive amplification of external signals on resonance, as well as amplification of detuned signals, and relate the amplifier performance to that of linear parametric amplifiers. We also discuss applications of the device for dispersive qubit readout. Beyond the classical response of the cavity, we investigate small quantum fluctuations around the amplified classical signals. We evaluate the noise power spectrum both for the internal field in the cavity and the output field. Other quantum-statistical properties of the noise are addressed such as squeezing spectra, second-order coherence, and two-mode entanglement.
Neural network representation and learning of mappings and their derivatives
NASA Technical Reports Server (NTRS)
White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald
1991-01-01
Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
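A minimal sketch of the idea that a smooth network represents both a mapping and its derivative: a one-hidden-layer tanh network evaluated alongside its analytic input derivative, checked against a finite difference. The weights here are random placeholders rather than learned parameters.

import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)   # hidden layer
w2, b2 = rng.normal(size=16), 0.0                        # linear output layer

def net(x):
    # y(x) = w2 . tanh(W1 x + b1) + b2
    h = np.tanh(W1 @ x + b1)
    return w2 @ h + b2

def net_derivative(x):
    # dy/dx via the chain rule through tanh
    h = np.tanh(W1 @ x + b1)
    return w2 @ ((1.0 - h ** 2)[:, None] * W1)

x = np.array([0.3])
fd = (net(x + 1e-6) - net(x - 1e-6)) / 2e-6              # finite-difference check
print(net_derivative(x), fd)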
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
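A small sketch of the kind of parametric modelling discussed above: gamma and log-normal fits to a synthetic right-skewed cost sample, with a simple percentile bootstrap of the sample mean as a model-free comparator. The data, distributions, and bootstrap variant are illustrative assumptions; the paper's BC(a) bootstrap and MCMC checks are not reproduced.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Hypothetical right-skewed per-patient cost data
costs = rng.lognormal(mean=7.0, sigma=1.0, size=120)

# Fit candidate parametric models by maximum likelihood (location fixed at zero)
shape_g, _, scale_g = stats.gamma.fit(costs, floc=0.0)
s_ln, _, scale_ln = stats.lognorm.fit(costs, floc=0.0)

mean_gamma = shape_g * scale_g                      # gamma mean = k * theta
mean_lognorm = scale_ln * np.exp(s_ln ** 2 / 2.0)   # lognormal mean = exp(mu + sigma^2 / 2)

# Simple percentile bootstrap of the sample mean, for comparison
boot = [rng.choice(costs, size=costs.size, replace=True).mean() for _ in range(2000)]
print(mean_gamma, mean_lognorm, np.percentile(boot, [2.5, 97.5]))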
Tsamados, Michel; Feltham, Daniel; Petty, Alek; Schroeder, David; Flocco, Daniela
2015-10-13
We present a modelling study of processes controlling the summer melt of the Arctic sea ice cover. We perform a sensitivity study and focus our interest on the thermodynamics at the ice-atmosphere and ice-ocean interfaces. We use the Los Alamos community sea ice model CICE, and additionally implement and test three new parametrization schemes: (i) a prognostic mixed layer; (ii) a three equation boundary condition for the salt and heat flux at the ice-ocean interface; and (iii) a new lateral melt parametrization. Recent additions to the CICE model are also tested, including explicit melt ponds, a form drag parametrization and a halodynamic brine drainage scheme. The various sea ice parametrizations tested in this sensitivity study introduce a wide spread in the simulated sea ice characteristics. For each simulation, the total melt is decomposed into its surface, bottom and lateral melt components to assess the processes driving melt and how this varies regionally and temporally. Because this study quantifies the relative importance of several processes in driving the summer melt of sea ice, this work can serve as a guide for future research priorities. © 2015 The Author(s).
Crises, noise, and tipping in the Hassell population model
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2018-03-01
We consider the problem of analysing noise-induced tipping in population systems. To study this phenomenon, we use a Hassell-type system with an Allee effect as a conceptual model. The mathematical investigation of tipping is connected with the analysis of crisis bifurcations, both boundary and interior. In the parametric study of the abrupt changes in dynamics related to noise-induced extinction and the transition from order to chaos, the stochastic sensitivity function technique and confidence domains are used. The effectiveness of the suggested approach for detecting early warnings of critical stochastic transitions is demonstrated.
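A minimal sketch of the kind of conceptual model discussed above: a noisy Hassell-type map with a simple multiplicative Allee factor, iterated forward in time. The functional form, the Allee modification, the parameter values, and the noise model are illustrative assumptions, not the exact system analysed in the paper.

import numpy as np

def hassell_allee(n, lam=5.0, a=1.0, b=3.0, theta=0.2):
    # Hassell growth lam*n / (1 + a*n)**b with a multiplicative Allee
    # factor n / (n + theta); form and parameter values are assumptions.
    return lam * n / (1.0 + a * n) ** b * (n / (n + theta))

rng = np.random.default_rng(7)
n, sigma, steps = 1.0, 0.05, 500
trajectory = []
for _ in range(steps):
    n = max(hassell_allee(n) + sigma * rng.normal(), 0.0)  # additive noise, kept non-negative
    trajectory.append(n)

# A long run of near-zero values signals a noise-induced transition towards extinction
print(min(trajectory), trajectory[-10:])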
Optical Parametric Amplification of Single Photon: Statistical Properties and Quantum Interference
NASA Astrophysics Data System (ADS)
Xu, Xue-Xiang; Yuan, Hong-Chun
2014-05-01
By using the phase-space method, we theoretically investigate the quantum statistical properties and quantum interference of optical parametric amplification of a single photon. The statistical properties, such as the Wigner function (WF), average photon number, photon number distribution, and parity, are derived analytically for the fields of the two output ports. The results indicate that the fields in the output ports are multiphoton states rather than a single-photon state, due to the amplification of the optical parametric amplifier (OPA). In addition, the phase sensitivity is also examined using a parity-measurement detection scheme.
NASA Astrophysics Data System (ADS)
Bekkouche, Toufik; Bouguezel, Saad
2018-03-01
We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
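The chaotic scrambling stage mentioned above can be illustrated with a logistic map whose iterates, once sorted, define a key-dependent pixel permutation; this sketch covers only the scrambling, not the parametric discrete Fourier transform or the real-to-real conversion, and the map parameters x0 and r play the role of assumed secret keys.

import numpy as np

def logistic_permutation(n, x0=0.3456, r=3.99, burn_in=1000):
    # Iterate the logistic map and sort the resulting sequence to obtain
    # a key-dependent permutation of n pixel indices.
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)

image = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for a real image
perm = logistic_permutation(image.size)
scrambled = image.ravel()[perm].reshape(image.shape)

recovered = np.empty_like(scrambled.ravel())
recovered[perm] = scrambled.ravel()                # inverse permutation for decryption
print(np.allclose(recovered.reshape(image.shape), image))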
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes a Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.
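A sketch of one commonly used form of a modified Leslie-Gower system with a Crowley-Martin functional response and a constant-proportion prey refuge m, integrated with SciPy; the equations and parameter values are illustrative assumptions and may differ from the exact model of the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only; m is the fraction of prey in the refuge.
p = dict(r1=1.0, b1=0.1, c1=0.8, a1=0.5, a2=0.4, s=0.35, c2=0.5, k2=1.0, m=0.2)

def rhs(t, z, p):
    x, y = z
    xa = (1.0 - p["m"]) * x                       # prey available outside the refuge
    # Crowley-Martin functional response on the available prey
    pred = p["c1"] * xa * y / ((1 + p["a1"] * xa) * (1 + p["a2"] * y))
    dx = x * (p["r1"] - p["b1"] * x) - pred
    dy = y * (p["s"] - p["c2"] * y / (xa + p["k2"]))   # modified Leslie-Gower predator growth
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 400.0), [2.0, 1.0], args=(p,), rtol=1e-8)
x_end, y_end = sol.y[:, -1]
print(x_end, y_end)   # sustained oscillation vs. convergence hints at the Hopf regime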
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
Analysis of Multiple Cracks in an Infinite Functionally Graded Plate
NASA Technical Reports Server (NTRS)
Shbeeb, N. I.; Binienda, W. K.; Kreider, K. L.
1999-01-01
A general methodology was constructed to develop the fundamental solution for a crack embedded in an infinite non-homogeneous material in which the shear modulus varies exponentially with the y coordinate. The fundamental solution was used to generate a solution to fully interactive multiple crack problems for stress intensity factors and strain energy release rates. Parametric studies were conducted for two crack configurations. The model displayed sensitivity to crack distance, relative angular orientation, and to the coefficient of nonhomogeneity.
Analysis of combustion instability in liquid fuel rocket motors. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wong, K. W.
1979-01-01
The development of an analytical technique used in the solution of nonlinear velocity-sensitive combustion instability problems is presented. The Galerkin method was used and proved successful. The pressure wave forms exhibit a strong second harmonic distortion and a variety of behaviors are possible depending on the nature of the combustion process and the parametric values involved. A one dimensional model provides insight into the problem by allowing a comparison of Galerkin solutions with more exact finite difference computations.
The next detectors for gravitational wave astronomy
NASA Astrophysics Data System (ADS)
Blair, David; Ju, Li; Zhao, ChunNong; Wen, LinQing; Miao, HaiXing; Cai, RongGen; Gao, JiangRui; Lin, XueChun; Liu, Dong; Wu, Ling-An; Zhu, ZongHong; Hammond, Giles; Paik, Ho Jung; Fafone, Viviana; Rocchi, Alessio; Blair, Carl; Ma, YiQiu; Qin, JiaYi; Page, Michael
2015-12-01
This paper focuses on the next detectors for gravitational wave astronomy which will be required after the current ground-based detectors have completed their initial observations, and probably achieved the first direct detection of gravitational waves. The next detectors will need to have greater sensitivity, while also enabling the world array of detectors to have improved angular resolution to allow localisation of signal sources. Sect. 1 of this paper begins by reviewing proposals for the next ground-based detectors, and presents an analysis of the sensitivity of an 8 km arm-length detector, which is proposed as a safe and cost-effective means to attain a 4-fold improvement in sensitivity. The scientific benefits of creating a pair of such detectors in China and Australia are emphasised. Sect. 2 of this paper discusses the high-performance suspension systems for test masses that will be an essential component for future detectors, while Sect. 3 discusses solutions to the problem of Newtonian noise, which arises from fluctuations in gravity gradient forces acting on test masses. Such gravitational perturbations cannot be shielded, and set limits on low-frequency sensitivity unless measured and suppressed. Sects. 4 and 5 address critical operational technologies that will be ongoing issues in future detectors. Sect. 4 addresses the design of thermal compensation systems needed in all high optical power interferometers operating at room temperature. Parametric instability control is addressed in Sect. 5. Only recently proven to occur in Advanced LIGO, the parametric instability phenomenon brings both risks and opportunities for future detectors. The path to future enhancements of detectors will come from quantum measurement technologies. Sect. 6 focuses on the use of optomechanical devices for obtaining enhanced sensitivity, while Sect. 7 reviews a range of quantum measurement options.
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method's fold-change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold-change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing RNA-seq data analysis.
Optimal frequency-response sensitivity of compressible flow over roughness elements
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.
2017-04-01
Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case which shows that first-order increases in Reynolds number and roughness height act destabilising on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.
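A toy illustration of frequency-response (resolvent) analysis for a small linear state-space operator: at each frequency the largest singular value of the resolvent gives the optimal harmonic gain, and the associated singular vectors give the optimal forcing and response. The matrix below is an arbitrary stable placeholder, not the compressible boundary-layer operator of the study.

import numpy as np

rng = np.random.default_rng(8)
n = 50
# Placeholder stable linear operator: shifted, scaled random matrix
A = -np.eye(n) + (0.4 / np.sqrt(n)) * rng.normal(size=(n, n))

omegas = np.linspace(0.1, 5.0, 60)
gains = []
for omega in omegas:
    R = np.linalg.inv(1j * omega * np.eye(n) - A)   # resolvent operator
    u, s, vh = np.linalg.svd(R)
    gains.append(s[0])                              # optimal harmonic gain at this frequency
    # u[:, 0] is the optimal response shape, vh[0].conj() the optimal forcing shape

peak = omegas[int(np.argmax(gains))]
print("peak-gain frequency:", peak, "gain:", max(gains))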
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; ...
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
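A sketch of the ellipticity check that motivates the parametrization question above: the acoustic tensor Q_ik = n_j C_ijkl n_l is evaluated for directions generated on the faces of a cube and its determinant is monitored for a sign change. The isotropic moduli are placeholder values, and unlike the paper's construction the cube vectors are simply normalized here rather than used directly in a modified tensor.

import numpy as np

lam, mu = 1.0, 0.5                          # illustrative isotropic moduli
d = np.eye(3)
# Elasticity tensor C_ijkl for an isotropic material (a material near instability
# would instead use the tangent moduli from a constitutive update).
C = (lam * np.einsum("ij,kl->ijkl", d, d)
     + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

def acoustic_det(n, C):
    n = n / np.linalg.norm(n)
    Q = np.einsum("j,ijkl,l->ik", n, C, n)  # acoustic tensor Q_ik = n_j C_ijkl n_l
    return np.linalg.det(Q)

# Cartesian-style sweep: vectors on three faces of the cube [-1, 1]^3 cover
# half the direction sphere, which suffices since Q(n) = Q(-n).
grid = np.linspace(-1.0, 1.0, 21)
dets = []
for a in grid:
    for b in grid:
        for face in (np.array([a, b, 1.0]), np.array([a, 1.0, b]), np.array([1.0, a, b])):
            dets.append(acoustic_det(face, C))

print("min det(Q) over sampled directions:", min(dets))   # <= 0 would flag loss of ellipticity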
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines which work well for traditional missions, markets, and technologies. However, when new markets (space tourism), new constraints (environmental), or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration, and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations that meet combinations of mission and technology. This research undertaking contributes to the state-of-the-art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies, and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally the GLM is able to explain more of the parametric sensitivity of global precipitation than local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
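As background for the sampling strategy mentioned above, the short Python sketch below draws a stratified Latin hypercube design over a small set of parameters. It is a generic illustration under assumed placeholder bounds and ensemble size, not the actual CAM5 perturbed-parameter configuration.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Stratified Latin hypercube sample; bounds is (n_params, 2) of [low, high]."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    n_params = bounds.shape[0]
    # One uniform draw per equal-probability stratum, per parameter.
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the stratum order independently for each parameter.
    for j in range(n_params):
        u[:, j] = u[rng.permutation(n_samples), j]
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Hypothetical 1100-member design over three placeholder convection parameters.
design = latin_hypercube(1100, [[0.1, 1.0], [300.0, 1800.0], [1e-4, 1e-2]], rng=0)
print(design.shape)  # (1100, 3)
```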
Parametric Methods for Dynamic 11C-Phenytoin PET Studies.
Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A
2017-03-01
In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan duration. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
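Among the parametric methods compared above, Logan graphical analysis is simple enough to sketch directly. The following Python fragment estimates a regional VT from a tissue time-activity curve and an arterial plasma input; the linear-segment start time (t_star) and the toy curve handling are assumptions for illustration, not settings from the study.

```python
import numpy as np

def logan_vt(t, tissue, plasma, t_star=30.0):
    """Logan graphical estimate of the distribution volume V_T (slope of the plot).

    t: frame mid-times (min); tissue, plasma: decay-corrected activity curves.
    Only frames with t >= t_star (assumed linear region) enter the fit.
    """
    def cumtrapz(y, x):  # cumulative trapezoidal integral starting at zero
        return np.concatenate(([0.0], np.cumsum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)))

    int_plasma = cumtrapz(plasma, t)
    int_tissue = cumtrapz(tissue, t)
    m = t >= t_star
    x = int_plasma[m] / tissue[m]
    y = int_tissue[m] / tissue[m]
    slope, _intercept = np.polyfit(x, y, 1)
    return slope  # V_T for a reversible tracer
```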
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is presented for one parametric combination. The results show fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BPI & BPII) and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
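The fuzzy c-means pre-segmentation step referenced above follows a standard pair of update equations. Below is a minimal numpy sketch of conventional FCM (random initialisation, Euclidean distances, fuzziness exponent m = 2 assumed); it does not implement the modified neighbourhood-aware variant discussed in the abstract.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, rng=None):
    """Plain fuzzy c-means; returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per voxel
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / dist ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centres, U

# Toy usage: cluster 1000 synthetic time-activity features into 3 tissue classes.
X = np.random.default_rng(0).normal(size=(1000, 4))
centres, U = fuzzy_c_means(X, n_clusters=3, rng=1)
```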
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize the processes. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification in that identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis to identify important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
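To make the idea of evaluating the variance change when a process is fixed at its different conceptualizations concrete, here is a toy Monte Carlo sketch in Python. The two recharge surrogates, their prior weights, and the groundwater response below are invented placeholders, not the synthetic model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two hypothetical conceptualizations of the recharge process (toy surrogates).
recharge_models = [lambda p: 0.2 * p, lambda p: 0.05 * p ** 2]
model_weights = np.array([0.5, 0.5])              # prior model probabilities

def head_output(recharge, k):                     # toy groundwater response
    return recharge / k

p = rng.uniform(100.0, 300.0, n)                  # precipitation-like parameter
k = rng.lognormal(0.0, 0.5, n)                    # conductivity-like parameter

# Total output variance under both parametric and model uncertainty
# (equal model weights, so pooling the samples is a fair model average).
pooled = np.concatenate([head_output(f(p), k) for f in recharge_models])
var_total = pooled.var()

# Expected variance remaining once the recharge conceptualization is fixed.
var_fixed = sum(w * head_output(f(p), k).var()
                for w, f in zip(model_weights, recharge_models))

process_sensitivity = 1.0 - var_fixed / var_total
print(f"recharge-process sensitivity index ~ {process_sensitivity:.2f}")
```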
Asymmetry identification in rigid rotating bodies—Theory and experiment
NASA Astrophysics Data System (ADS)
Bucher, Izhak; Shomer, Ofer
2013-12-01
Asymmetry and anisotropy are important parameters in rotating devices that can cause instability or indicate a manufacturing defect or a developing fault. The present paper discusses an identification method capable of detecting minute levels of asymmetry by exploiting the unique dynamics of parametric excitation caused by asymmetry and rotation. The detection relies on rigid body dynamics without resorting to nonlinear vibration analysis, and the natural dynamics of elastically supported systems is exploited in order to increase the sensitivity to asymmetry. It is possible to isolate asymmetry from other rotation-induced phenomena like unbalance. An asymmetry detection machine which was built in the laboratory demonstrates the method alongside theoretical analysis.
Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911
Problems of the design of low-noise input devices. [parametric amplifiers
NASA Technical Reports Server (NTRS)
Manokhin, V. M.; Nemlikher, Y. A.; Strukov, I. A.; Sharfov, Y. A.
1974-01-01
An analysis is given of the requirements placed on the elements of parametric centimeter waveband amplifiers for achievement of minimal noise temperatures. A low-noise semiconductor parametric amplifier using germanium parametric diodes for a receiver operating in the 4 GHz band was developed and tested, confirming the possibility of satisfying all requirements.
Parametric study of laser photovoltaic energy converters
NASA Technical Reports Server (NTRS)
Walker, G. H.; Heinbockel, J. H.
1987-01-01
Photovoltaic converters are of interest for converting laser power to electrical power in a space-based laser power system. This paper describes a model for photovoltaic laser converters and the application of this model to a neodymium laser silicon photovoltaic converter system. A parametric study which defines the sensitivity of the photovoltaic parameters is described. An optimized silicon photovoltaic converter has an efficiency greater than 50 percent for 1000 W/sq cm of neodymium laser radiation.
NASA Astrophysics Data System (ADS)
Noroozian, Omid
2018-01-01
The current state of the art for some superconducting technologies will be reviewed in the context of a future single-dish submillimeter telescope called AtLAST. The technologies reviewed include: 1) Kinetic Inductance Detectors (KIDs), which have now been demonstrated in large-format kilo-pixel arrays with photon background-limited sensitivity suitable for large field of view cameras for wide-field imaging; 2) parametric amplifiers - specifically the Traveling-Wave Kinetic Inductance Parametric (TKIP) amplifier - which have enormous potential to increase sensitivity, bandwidth, and mapping speed of heterodyne receivers; and 3) on-chip spectrometers, which, combined with sensitive direct detectors such as KIDs or TESs, could be used as Multi-Object Spectrometers on the AtLAST focal plane and could provide low-medium resolution spectroscopy of 100 objects at a time in each field of view.
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae with addition of palm oil mill effluent (POME) as nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties and fermentation performance might change if it is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. Bioethanol fermentation was found to depend strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
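The Monte Carlo treatment of parametric uncertainty described above can be illustrated with a few lines of Python. The kinetic parameters, their assumed spreads, and the toy yield response below are placeholders for illustration; only the nominal 0.464 g/g yield is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Kinetic parameters treated as stochastic variables; means and spreads are invented.
mu_max = rng.normal(0.30, 0.03, n)      # 1/h, maximum specific growth rate
Y_xs   = rng.normal(0.10, 0.01, n)      # g biomass / g glucose
Y_ps   = rng.normal(0.464, 0.02, n)     # g ethanol / g glucose (nominal from abstract)

def yield_model(mu_max, Y_xs, Y_ps):
    # Toy response: product yield penalised when substrate is diverted to biomass.
    return Y_ps * (1.0 - 0.2 * Y_xs / (Y_xs + mu_max))

y = yield_model(mu_max, Y_xs, Y_ps)
print(np.percentile(y, [2.5, 50, 97.5]))  # uncertainty band on the predicted yield
```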
NASA Astrophysics Data System (ADS)
Tan, C.; Fang, W.
2018-04-01
Forest disturbance induced by tropical cyclones often has significant and profound effects on the structure and function of forest ecosystems. Detection and analysis of post-disaster forest disturbance based on remote sensing technology has been widely applied. At present, it is necessary to conduct further quantitative analysis of the magnitude of forest disturbance against typhoon intensity. In this study, taking the case of super typhoon Rammasun (201409), we analysed the sensitivity of four commonly used remote sensing indices and explored the relationship between remote sensing index and corresponding wind speeds based on pre- and post-event Landsat-8 OLI (Operational Land Imager) images and a parameterized wind field model. The results showed that NBR is the most sensitive index for the detection of forest disturbance induced by Typhoon Rammasun and that the variation of NBR has a significant linear relationship with the simulated 3-second gust wind speed.
Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1997-01-01
A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects such as the convective fluid flow of a low-Prandtl-number fluid, coupled with thermal conduction in the phase change material, containment tube and working fluid conduit were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the phase change material (PCM) and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that parametric changes in receiver gas inlet temperature and receiver heat input produce the greatest sensitivity in receiver gas exit temperature.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) feature thinner reaction zones and hence impose greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames received renewed interest due to these challenges. Several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here by using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity and the effect of numerics.
Parametric study of extended end-plate connection using finite element modeling
NASA Astrophysics Data System (ADS)
Mureşan, Ioana Cristina; Bâlc, Roxana
2017-07-01
End-plate connections with preloaded high-strength bolts represent a convenient, fast and accurate solution for beam-to-column joints. The behavior of framework joints built with this type of connection depends sensitively on the geometrical and material characteristics of the connected elements. This paper presents results of parametric analyses of the behavior of a bolted extended end-plate connection using the finite element modeling program Abaqus. This connection was experimentally tested in the Laboratory of the Faculty of Civil Engineering in Cluj-Napoca and the results are briefly reviewed in this paper. The numerical model of the studied connection was described in detail in [1] and provides data for this parametric study.
Wang, Ying; Feng, Chenglian; Liu, Yuedan; Zhao, Yujie; Li, Huixian; Zhao, Tianhui; Guo, Wenjing
2017-02-01
Transition metals in the fourth period of the periodic table of the elements are widespread in aquatic environments. They often occur at concentrations that cause adverse effects on aquatic life and human health. Generally, parametric models are mostly used to construct species sensitivity distributions (SSDs), which means that comparisons of water quality criteria (WQC) for elements in the same period or group of the periodic table might be inaccurate and the results biased. To address this inadequacy, non-parametric kernel density estimation (NPKDE) with its optimal bandwidths and testing methods was developed for establishing SSDs. The NPKDE showed better fit, more robustness and better prediction than conventional normal and logistic parametric density estimations for constructing SSDs and deriving acute HC5 and WQC for transition metals in the fourth period of the periodic table. The decreasing sequence of HC5 values for the transition metals in the fourth period was Ti > Mn > V > Ni > Zn > Cu > Fe > Co > Cr(VI), which was not proportional to atomic number in the periodic table, and for different metals the relatively sensitive species were also different. The results indicated that, besides physical and chemical properties, there are other factors affecting the toxicity mechanisms of transition metals. The proposed method enriches the methodological foundation for WQC. Meanwhile, it also provides a relatively innovative, accurate approach for the WQC derivation and risk assessment of same-group and same-period metals in aquatic environments to support the protection of aquatic organisms. Copyright © 2016 Elsevier Ltd. All rights reserved.
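A non-parametric SSD of the kind described above can be sketched with a Gaussian kernel density estimate. The toy toxicity values below are invented, and scipy's default (Scott's rule) bandwidth is used rather than the optimal bandwidths developed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical acute toxicity values (e.g. LC50 in ug/L) for a set of species.
tox = np.array([12.0, 30.0, 45.0, 80.0, 150.0, 220.0, 400.0, 900.0, 1500.0])
log_tox = np.log10(tox)

kde = gaussian_kde(log_tox)             # non-parametric SSD on the log scale

# HC5: concentration at which the cumulative SSD reaches 5%.
grid = np.linspace(log_tox.min() - 1.0, log_tox.max() + 1.0, 2000)
cdf = np.array([kde.integrate_box_1d(-np.inf, g) for g in grid])
hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]
print(f"HC5 ~ {hc5:.1f} ug/L")
```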
Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.
2011-01-01
Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367
NASA Technical Reports Server (NTRS)
Tarras, A.
1987-01-01
The problem of stabilization/pole placement under structural constraints of large scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning the fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem, which can be summarized as the use of the mode sensitivity concept to detect or avoid them, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Schmidt, K; Witte, H
1999-11-01
Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis capable of detecting interrelations between individual signal components has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as true physiological data. We employed standard algorithms available in MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.
Parametric manipulation of the conflict signal and control-state adaptation.
Forster, Sarah E; Carter, Cameron S; Cohen, Jonathan D; Cho, Raymond Y
2011-04-01
Mechanisms by which the brain monitors and modulates performance are an important focus of recent research. The conflict-monitoring hypothesis posits that the ACC detects conflict between competing response pathways which, in turn, signals for enhanced control. The N2, an ERP component that has been localized to ACC, has been observed after high conflict stimuli. As a candidate index of the conflict signal, the N2 would be expected to be sensitive to the degree of response conflict present, a factor that depends on both the features of external stimuli and the internal control state. In the present study, we sought to explore the relationship between N2 amplitude and these variables through use of a modified Eriksen flankers task in which target-distracter compatibility was parametrically varied. We hypothesized that greater target-distracter incompatibility would result in higher levels of response conflict, as indexed by both behavior and the N2 component. Consistent with this prediction, there were parametric degradations in behavioral performance and increases in N2 amplitudes with increasing incompatibility. Further, increasingly incompatible stimuli led to the predicted parametric increases in control on subsequent incompatible trials as evidenced by enhanced performance and reduced N2 amplitudes. These findings suggest that the N2 component and associated behavioral performance are finely sensitive to the degree of response conflict present and to the control adjustments that result from modulations in conflict.
Photo-assisted electron emission from illuminated monolayer graphene
NASA Astrophysics Data System (ADS)
Upadhyay Kahaly, M.; Misra, Shikha; Mishra, S. K.
2017-05-01
We establish a formalism to address co-existing and complementary thermionic and photoelectric emission from a monolayer graphene sheet illuminated via monochromatic laser radiation and operating at a finite temperature. Taking into account the two-dimensional Fermi-Dirac statistics applicable to a graphene sheet, the electron energy redistribution due to thermal agitation via laser irradiation, and Fowler's approach to electron emission, along with Born's approximation to evaluate the tunneling probability, the expressions for the photoelectric and thermionic emission flux have been derived. The cumulative emission flux is observed to be sensitive to the parametric tuning of the laser and material specifications. Based on the parametric analysis, the photoemission flux is noticed to dominate over its coexisting counterpart thermionic emission flux for smaller values of the material work function, surface temperature, and laser wavelength; the analytical estimates are in reasonably good agreement with the recent experimental observations [Massicotte et al., Nat. Commun. 7, 12174 (2016)]. The results evince the efficient utilization of a graphene layer as a photo-thermionic emitter.
First-Order Parametric Model of Reflectance Spectra for Dyed Fabrics
2016-02-19
This report describes a first-order parametric model of reflectance spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dyes considered contain spectral features that are of interest to the U.S. Navy. An appendix presents dielectric response functions for dyes obtained by inverse analysis. (Report descriptors: parametric modeling; inverse/direct analysis; distribution unlimited.)
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is an approach for EIT images of neural activity.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
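For readers unfamiliar with Sobol' variance decomposition, the sketch below estimates first- and total-order indices with the standard Saltelli/Jansen pick-and-freeze estimators on a toy surrogate. The surrogate function, its inputs and the sample size are assumptions for illustration; the paper derives its indices from the fitted regression models.

```python
import numpy as np

def sobol_indices(model, bounds, n=4096, rng=None):
    """Saltelli/Jansen estimates of first-order and total-order Sobol' indices."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    d = bounds.shape[0]
    scale = lambda u: bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
    A, B = scale(rng.random((n, d))), scale(rng.random((n, d)))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # swap in column i from B
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order index
    return S1, ST

# Toy mass-loss surrogate in blend ratio, temperature and heating rate (scaled 0-1).
surrogate = lambda X: 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]
S1, ST = sobol_indices(surrogate, [[0, 1], [0, 1], [0, 1]], rng=3)
print(S1.round(2), ST.round(2))
```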
Parametric versus Cox's model: an illustrative analysis of divorce in Canada.
Balakrishnan, T R; Rao, K V; Krotki, K J; Lapierre-adamcyk, E
1988-06-01
Recent demographic literature clearly recognizes the importance of survival models in the analysis of cross-sectional event histories. Of the various survival models, Cox's (1972) partially parametric model has been very popular due to its simplicity and readily available computer software for estimation, sometimes at the cost of precision and parsimony of the model. This paper focuses on parametric failure time models for event history analysis, such as the Weibull, lognormal, loglogistic, and exponential models. The authors also test the goodness of fit of these parametric models versus Cox's proportional hazards model, taking the Kaplan-Meier estimate as the baseline. As an illustration, the authors reanalyze the Canadian Fertility Survey data on first marriage dissolution with parametric models. Though these parametric model estimates were not very different from each other, there seemed to be a slightly better fit with the loglogistic. When 8 covariates were used in the analysis, it was found that the coefficients were similar in the models, and the overall conclusions about the relative risks would not have been different. The findings reveal that in marriage dissolution, the differences according to demographic and socioeconomic characteristics may be far more important than is generally found in many studies. Therefore, one should not treat the population as homogeneous in analyzing survival probabilities of marriages, other than for cursory analysis of overall trends.
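A minimal way to compare the parametric candidates named above is to fit each distribution by maximum likelihood and rank them by AIC, as in the Python sketch below. The durations are synthetic and, unlike real event-history data, contain no right-censoring, so this is only an illustration of the model-comparison step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic (uncensored) durations, in years, until marriage dissolution.
durations = stats.lognorm.rvs(s=0.8, scale=12.0, size=500, random_state=rng)

candidates = {
    "weibull":     stats.weibull_min,
    "lognormal":   stats.lognorm,
    "loglogistic": stats.fisk,          # scipy's name for the log-logistic
    "exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)          # location fixed at zero
    loglik = np.sum(dist.logpdf(durations, *params))
    k = len(params) - 1                           # free parameters (loc is fixed)
    print(f"{name:12s}  AIC = {2 * k - 2 * loglik:8.1f}")
```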
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
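The censored-data MLE step mentioned above can be written as a small likelihood optimisation: detected values contribute density terms and nondetects contribute cumulative-probability terms at their detection limits. The sketch below assumes a lognormal emission-factor distribution and uses invented measurement values.

```python
import numpy as np
from scipy import stats, optimize

# Invented emission-factor data; nondetects are represented by their detection limits.
detects = np.array([0.8, 1.3, 2.1, 0.6, 3.4])
det_limits = np.array([0.5, 0.5, 1.0])           # values censored below these limits

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                     # keeps sigma positive
    ll = stats.lognorm.logpdf(detects, s=sigma, scale=np.exp(mu)).sum()
    ll += stats.lognorm.logcdf(det_limits, s=sigma, scale=np.exp(mu)).sum()
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"fitted lognormal: geometric mean {np.exp(mu_hat):.2f}, sigma {sigma_hat:.2f}")
```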
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
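A one-dimensional flavour of the stochastic collocation idea is shown below: moments of a nonlinear response to a Gaussian input are computed from a Gauss-Hermite quadrature rule. The response function and its input statistics are invented placeholders, not the radiation-diffusion model of the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def sc_moments(model, mean, std, level=7):
    """Mean and variance of model(X), X ~ N(mean, std), by Gauss-Hermite collocation."""
    nodes, weights = hermegauss(level)            # probabilists' Hermite rule
    weights = weights / np.sqrt(2.0 * np.pi)      # normalise for a standard normal
    vals = model(mean + std * nodes)              # evaluate at the collocation points
    m1 = np.sum(weights * vals)
    m2 = np.sum(weights * vals ** 2)
    return m1, m2 - m1 ** 2

# Toy nonlinear energy-deposition response to an uncertain agent concentration.
response = lambda c: 1.0 - np.exp(-0.8 * c ** 2)
mean_e, var_e = sc_moments(response, mean=1.5, std=0.3)
print(mean_e, var_e)
```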
EEG Correlates of Fluctuation in Cognitive Performance in an Air Traffic Control Task
2014-11-01
The study used non-parametric statistical analysis to identify neurophysiological patterns due to the time-on-task effect, and found significant changes in EEG power. (Report descriptors: EEG, cognitive performance, power spectral analysis, non-parametric analysis; document available to the public through the Internet.)
Fan, Zhen; Dani, Melanie; Femminella, Grazia D; Wood, Melanie; Calsolaro, Valeria; Veronese, Mattia; Turkheimer, Federico; Gentleman, Steve; Brooks, David J; Hinz, Rainer; Edison, Paul
2018-07-01
Neuroinflammation and microglial activation play an important role in amnestic mild cognitive impairment (MCI) and Alzheimer's disease. In this study, we investigated the spatial distribution of neuroinflammation in MCI subjects, using spectral analysis (SA) to generate parametric maps and quantify 11C-PBR28 PET, and compared these with compartmental and other kinetic models of quantification. Thirteen MCI and nine healthy controls were enrolled in this study. Subjects underwent 11C-PBR28 PET scans with arterial cannulation. Spectral analysis with an arterial plasma input function was used to generate 11C-PBR28 parametric maps. These maps were then compared with regional 11C-PBR28 VT (volume of distribution) using a two-tissue compartment model and Logan graphic analysis. Amyloid load was also assessed with 18F-Flutemetamol PET. With SA, three component peaks were identified in addition to blood volume. The 11C-PBR28 impulse response function (IRF) at 90 min produced the lowest coefficient of variation. Single-subject analysis using this IRF demonstrated microglial activation in five out of seven amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake revealed a group-wise significant increase in neuroinflammation in amyloid-positive MCI subjects versus HC in multiple cortical association areas, and particularly in the temporal lobe. Interestingly, compartmental analysis detected group-wise increase in 11C-PBR28 binding in the thalamus of amyloid-positive MCI subjects, while Logan parametric maps did not perform well. This study demonstrates for the first time that spectral analysis can be used to generate parametric maps of 11C-PBR28 uptake, and is able to detect microglial activation in amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake allow voxel-wise single-subject analysis and could be used to evaluate microglial activation in individual subjects.
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the sensitive group is retained while the insensitive one is eliminated for a given scientific study. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a substantial improvement of 10% over Sobol'. Furthermore, the computational cost for calibration has been reduced to 1/6 of the original one. In the future, it is necessary to explore alternative SA methods emphasizing parameter interactions.
Parametric Modelling of As-Built Beam Framed Structure in Bim Environment
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.
2017-02-01
A complete documentation and conservation of a historic timber roof requires the integration of geometry modelling, management of attribute and dynamic information, and results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform, which makes it possible to integrate traditional geometry modelling, parametric element management and structural analysis together. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements are leaning and crossing beams. Since Autodesk Revit, as typical BIM software, provides a platform for parametric modelling and information management, an API plugin, able to automatically create the parametric beam elements and link them together with strict relationships, was developed. The plugin under development, which can obtain the parametric beam model from total station points and terrestrial laser scanning data via the Autodesk Revit API, is introduced in the paper. The results show the potential of automating parametric modelling through interactive API development in a BIM environment. It also integrates the separate data processing and different platforms into the uniform Revit software.
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters remain normally undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, which is a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (˜ 100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all of the three model outputs to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables whereas plant parameters are not only informative for latent heat but also for soil drainage because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
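The Elementary Effects screening mentioned above (here in its classic Morris form rather than the sequential variant used in the study) can be sketched as one-at-a-time moves along random trajectories, as below; the toy response function and trajectory count are assumptions for illustration.

```python
import numpy as np

def morris_screening(model, n_params, n_traj=50, delta=0.25, rng=None):
    """Elementary-effects screening on the unit hypercube.

    Returns mu_star (mean absolute effect) and sigma (spread) per parameter.
    """
    rng = np.random.default_rng(rng)
    effects = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)   # leave room for the +delta step
        f0 = model(x)
        for i in rng.permutation(n_params):           # one-at-a-time moves
            x_new = x.copy()
            x_new[i] += delta
            f1 = model(x_new)
            effects[t, i] = (f1 - f0) / delta
            x, f0 = x_new, f1                          # continue along the trajectory
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy land-surface-like response: one dominant, two interacting, one weak parameter.
response = lambda p: 2.0 * p[0] + p[1] * p[2] + 0.1 * p[3]
mu_star, sigma = morris_screening(response, n_params=4, rng=11)
print(mu_star.round(2), sigma.round(2))
```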
Influence of Finite Element Size in Residual Strength Prediction of Composite Structures
NASA Technical Reports Server (NTRS)
Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid
2012-01-01
The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Comparison of predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship, determined in the parametric study, was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes converged to that obtained with the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large scale structural components where failure is dominated by fiber failure in tension.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed and which are presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
Methods for comparative evaluation of propulsion system designs for supersonic aircraft
NASA Technical Reports Server (NTRS)
Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.
1976-01-01
The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.
Parametric study of modern airship productivity
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Flaig, K.
1980-01-01
A method for estimating the specific productivity of both hybrid and fully buoyant airships is developed. Various methods of estimating structural weight of deltoid hybrids are discussed and a derived weight estimating relationship is presented. Specific productivity is used as a figure of merit in a parametric study of fully buoyant ellipsoidal and deltoid hybrid semi-buoyant vehicles. The sensitivity of results as a function of assumptions is also determined. No airship configurations were found to have superior specific productivity to transport airplanes.
Laser-Based Remote Sensing of Explosives by a Differential Absorption and Scattering Method
NASA Astrophysics Data System (ADS)
Ayrapetyan, V. S.
2018-01-01
A multifunctional IR parametric laser system is developed and tested for remote detection and identification of atmospheric gases, including explosive and chemically aggressive substances. Calculations and experimental studies of remote determination of the spectroscopic parameters of the best known explosive substances TNT, RDX, and PETN are carried out. The feasibility of high sensitivity detection ( 1 ppm) of these substances with the aid of a multifunctional IR parametric light source by differential absorption and scattering is demonstrated.
Dorrer, C.; Consentino, A.; Cuffney, R.; ...
2017-10-18
Here, we describe a parametric-amplification–based front end for seeding high-energy Nd:glass laser systems. The front end delivers up to 200 mJ by parametric amplification in 2.5-ns flat-in-time pulses tunable over more than 15 nm. Spectral tunability over a range larger than what is typically achieved by laser media at similar energy levels is implemented to investigate cross-beam energy transfer in multibeam target experiments. The front-end operation is simulated to explain the amplified signal’s sensitivity to the input pump and signal. A large variety of amplified waveforms are generated by closed-loop pulse shaping. Various properties and limitations of this front end are discussed.
Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M
2015-05-01
The objectives are to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparameter MRIs in 24 patients before prostatectomy were acquired. Tumour outlines from whole-mount histology, T2-defined peripheral zone (PZ), and central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (Ktrans, Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). The area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90% specificity, sensitivity was 41% (MRSI voxel resolution) and 59% per lesion. At this specificity, an expert observer achieved 28% and 49% sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
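The LDA-plus-ROC workflow described above can be reproduced in outline with scikit-learn, as in the sketch below. The per-voxel feature values are synthetic and the model is evaluated on its own training data, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 400
# Synthetic per-voxel features: [T2, ADC, iAUGC, Ktrans, choline-to-citrate ratio]
X_tumour = rng.normal([80, 0.9, 14, 0.35, 1.2], [15, 0.2, 4, 0.10, 0.3], (n, 5))
X_benign = rng.normal([110, 1.5, 8, 0.20, 0.6], [20, 0.3, 3, 0.08, 0.2], (n, 5))
X = np.vstack([X_tumour, X_benign])
y = np.concatenate([np.ones(n), np.zeros(n)])

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)
print(f"AUC: {roc_auc_score(y, scores):.3f}")

# Sensitivity read off at roughly 90% specificity (false positive rate 0.10).
fpr, tpr, _ = roc_curve(y, scores)
print(f"sensitivity at 90% specificity: {tpr[np.searchsorted(fpr, 0.10)]:.2f}")
```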
A new simple form of quark mixing matrix
NASA Astrophysics Data System (ADS)
Qin, Nan; Ma, Bo-Qiang
2011-01-01
Although different parametrizations of quark mixing matrix are mathematically equivalent, the consequences of experimental analysis may be distinct. Based on the triminimal expansion of Kobayashi-Maskawa matrix around the unit matrix, we propose a new simple parametrization. Compared with the Wolfenstein parametrization, we find that the new form is not only consistent with the original one in the hierarchical structure, but also more convenient for numerical analysis and measurement of the CP-violating phase. By discussing the relation between our new form and the unitarity boomerang, we point out that along with the unitarity boomerang, this new parametrization is useful in hunting for new physics.
Study of aircraft in intraurban transportation systems, volume 1
NASA Technical Reports Server (NTRS)
Stout, E. G.; Kesling, P. H.; Matteson, H. C.; Sherwood, D. E.; Tuck, W. R., Jr.; Vaughn, L. A.
1971-01-01
An analysis of an effective short range, high density commuter transportation system for intraurban use is presented. The seven-county Detroit, Michigan, metropolitan area was chosen as the scenario for the analysis. The study consisted of an analysis and forecast of the Detroit market through 1985, a parametric analysis of appropriate short haul aircraft concepts and associated ground systems, and a preliminary overall economic analysis of a simplified total system designed to evaluate the candidate vehicles and select the most promising VTOL and STOL aircraft. Data are also included on the impact of advanced technology on the system, the sensitivity of mission performance to changes in aircraft characteristics and system operations, and identification of key problem areas that may be improved by additional research. The approach, logic, and computer models used are adaptable to other intraurban or interurban areas.
NASA Astrophysics Data System (ADS)
Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory
2018-02-01
We present a full probabilistic seismic hazard analysis (PSHA) sensitivity analysis for two sites in southern Israel - one in the near field of a major fault system and one farther away. The PSHA analysis is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty - modelling uncertainty and parametric uncertainty - are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard, by approximately 40 % in short return periods (e.g. 10 % in 50 years) and by as much as 150 % in long return periods (e.g. 10E-5). The analysis shows that this underestimation is most probably due to a combination of factors, including source definitions as well as the GMPE used for analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Winkle, W.; Christensen, S.W.; Kauffman, G.
1976-12-01
The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.
NASA Astrophysics Data System (ADS)
Mahieux, Arnaud; Goldstein, David B.; Varghese, Philip; Trafton, Laurence M.
2017-10-01
The vapor and particulate plumes arising from the southern polar regions of Enceladus are a key signature of what lies below the surface. Multiple Cassini instruments (INMS, CDA, CAPS, MAG, UVIS, VIMS, ISS) measured the gas-particle plume over the warm Tiger Stripe region and there have been several close flybys. Numerous observations also exist of the near-vent regions in the visible and the IR. The most likely source for these extensive geysers is a subsurface liquid reservoir of somewhat saline water and other volatiles boiling off through crevasse-like conduits into the vacuum of space. In this work, we use a DSMC code to simulate the plume as it exits a vent, considering axisymmetric conditions, in a vertical domain extending up to 10 km. Above 10 km altitude, the flow is collisionless and well modeled in a separate free molecular code. We perform a DSMC parametric and sensitivity study of the following vent parameters: vent diameter, outgassed flow density, water gas/water ice mass flow ratio, gas and ice speed, and ice grain diameter. We build parametric expressions of the plume characteristics at the 10 km upper boundary (number density, temperature, velocity) that will be used in a Bayesian inversion algorithm in order to constrain source conditions from fits to plume observations by various instruments on board the Cassini spacecraft, and to assess the parametric sensitivities.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
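A toy sketch of the non-parametric 1D bootstrap idea discussed above, applied to synthetic trajectories; it produces only a pointwise confidence band and does not implement the RFT correction, and all data and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D trajectories: 20 subjects x 101 time nodes.
t = np.linspace(0, 1, 101)
curves = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, size=(20, t.size))

# Non-parametric bootstrap of the mean trajectory: resample subjects with replacement.
n_boot = 2000
boot_means = np.empty((n_boot, t.size))
for b in range(n_boot):
    idx = rng.integers(0, curves.shape[0], curves.shape[0])
    boot_means[b] = curves[idx].mean(axis=0)

# Pointwise 95% confidence band; a field-wide band would additionally need a
# correction for multiple comparisons (e.g. RFT or a max-statistic bootstrap).
lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
print(lo[:3], hi[:3])
```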
The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter
2018-02-01
We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. The GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
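The variance-based GSA step can be illustrated on a cheap analytic stand-in for the surrogate; the sketch below estimates first-order Sobol indices with a pick-and-freeze (Saltelli-style) estimator. The surrogate function and parameter ranges are invented and do not represent the ELM polynomial-chaos surrogate.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(x):
    # Hypothetical stand-in for a fitted surrogate of one model output
    # as a function of three normalized parameters.
    return 4.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 2] + x[:, 0] * x[:, 1]

n, d = 50_000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA, fB = surrogate(A), surrogate(B)
var_total = np.var(np.r_[fA, fB])

# First-order Sobol index S_i via the pick-and-freeze estimator:
# S_i ~ mean(fB * (f(A with column i taken from B) - fA)) / Var(f)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (surrogate(ABi) - fA)) / var_total
    print(f"parameter {i}: S ~ {Si:.2f}")
```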
Liu, Jian; Torres, F A; Ma, Yubo; Zhao, C; Ju, L; Blair, D G; Chao, S; Roch-Jeune, I; Flaminio, R; Michel, C; Liu, K-Y
2014-02-10
Three-mode optoacoustic parametric amplifiers (OAPAs), in which a pair of photon modes are strongly coupled to an acoustic mode, provide a general platform for investigating self-cooling, parametric instability and very sensitive transducers. Their realization requires an optical cavity with tunable transverse modes and a high quality-factor mirror resonator. This paper presents the design of a table-top OAPA based on a near-self-imaging cavity design, using a silicon torsional microresonator. The design achieves a tuning coefficient for the optical mode spacing of 2.46 MHz/mm. This allows tuning of the mode spacing between amplification and self-cooling regimes of the OAPA device. Based on demonstrated resonator parameters (frequencies ∼400 kHz and quality factors ∼7.5×10^5), we predict that the OAPA can achieve parametric instability with 1.6 μW of input power and mode cooling by a factor of 1.9×10^4 with 30 mW of input power.
First Demonstration of Electrostatic Damping of Parametric Instability at Advanced LIGO
NASA Astrophysics Data System (ADS)
Blair, Carl; Gras, Slawek; Abbott, Richard; Aston, Stuart; Betzwieser, Joseph; Blair, David; DeRosa, Ryan; Evans, Matthew; Frolov, Valera; Fritschel, Peter; Grote, Hartmut; Hardwick, Terra; Liu, Jian; Lormand, Marc; Miller, John; Mullavey, Adam; O'Reilly, Brian; Zhao, Chunnong; Abbott, B. P.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, S. B.; Ananyeva, A.; Appert, S.; Arai, K.; Ballmer, S. W.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Bartos, I.; Batch, J. C.; Bell, A. S.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Bork, R.; Brooks, A. F.; Ciani, G.; Clara, F.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Danzmann, K.; Da Silva Costa, C. F.; Daw, E. J.; DeBra, D.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, T. M.; Factourovich, M.; Fair, H.; Fernández Galiana, A.; Fisher, R. P.; Fulda, P.; Fyffe, M.; Giaime, J. A.; Giardina, K. D.; Goetz, E.; Goetz, R.; Gray, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, E. D.; Hammond, G.; Hanks, J.; Hanson, J.; Harry, G. M.; Heintze, M. C.; Heptonstall, A. W.; Hough, J.; Izumi, K.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kissel, J. S.; Korth, W. Z.; Kuehn, G.; Landry, M.; Lantz, B.; Lockerbie, N. A.; Lundgren, A. P.; MacInnis, M.; Macleod, D. M.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martynov, D. V.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Mendell, G.; Merilh, E. L.; Meyers, P. M.; Mittleman, R.; Moreno, G.; Mueller, G.; Munch, J.; Nuttall, L. K.; Oberling, J.; Oppermann, P.; Oram, Richard J.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Pele, A.; Penn, S.; Phelps, M.; Pierro, V.; Pinto, I.; Principe, M.; Prokhorov, L. G.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Savage, R. L.; Schofield, R. M. S.; Sellers, D.; Shaddock, D. A.; Shaffer, T. J.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Sorazu, B.; Staley, A.; Strain, K. A.; Tanner, D. B.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Weßels, P.; Willke, B.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, Hang; Yu, Haocun; Zhang, L.; Zucker, M. E.; Zweizig, J.; LSC Instrument Authors
2017-04-01
Interferometric gravitational wave detectors operate with high optical power in their arms in order to achieve high shot-noise limited strain sensitivity. A significant limitation to increasing the optical power is the phenomenon of three-mode parametric instabilities, in which the laser field in the arm cavities is scattered into higher-order optical modes by acoustic modes of the cavity mirrors. The optical modes can further drive the acoustic modes via radiation pressure, potentially producing an exponential buildup. One proposed technique to stabilize parametric instability is active damping of acoustic modes. We report here the first demonstration of damping a parametrically unstable mode using active feedback forces on the cavity mirror. A 15 538 Hz mode that grew exponentially with a time constant of 182 sec was damped using electrostatic actuation, with a resulting decay time constant of 23 sec. An average control force of 0.03 nN was required to maintain the acoustic mode at its minimum amplitude.
Linkage mapping of beta 2 EEG waves via non-parametric regression.
Ghosh, Saurabh; Begleiter, Henri; Porjesz, Bernice; Chorlian, David B; Edenberg, Howard J; Foroud, Tatiana; Goate, Alison; Reich, Theodore
2003-04-01
Parametric linkage methods for analyzing quantitative trait loci are sensitive to violations in trait distributional assumptions. Non-parametric methods are relatively more robust. In this article, we modify the non-parametric regression procedure proposed by Ghosh and Majumder [2000: Am J Hum Genet 66:1046-1061] to map Beta 2 EEG waves using genome-wide data generated in the COGA project. Significant linkage findings are obtained on chromosomes 1, 4, 5, and 15 with findings at multiple regions on chromosomes 4 and 15. We analyze the data both with and without incorporating alcoholism as a covariate. We also test for epistatic interactions between regions of the genome exhibiting significant linkage with the EEG phenotypes and find evidence of epistatic interactions between a region each on chromosome 1 and chromosome 4 with one region on chromosome 15. While regressing out the effect of alcoholism does not affect the linkage findings, the epistatic interactions become statistically insignificant. Copyright 2003 Wiley-Liss, Inc.
Scientific guidelines for preservation of samples collected from Mars
NASA Technical Reports Server (NTRS)
Gooding, James L. (Editor)
1990-01-01
The maximum scientific value of Martian geologic and atmospheric samples is retained when the samples are preserved in the conditions that applied prior to their collection. Any sample degradation equates to loss of information. Based on detailed review of pertinent scientific literature, and advice from experts in planetary sample analysis, number values are recommended for key parameters in the environmental control of collected samples with respect to material contamination, temperature, head-space gas pressure, ionizing radiation, magnetic fields, and acceleration/shock. Parametric values recommended for the most sensitive geologic samples should also be adequate to preserve any biogenic compounds or exobiological relics.
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, there have arisen needs to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300 passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
COBRA ATD minefield detection model initial performance analysis
NASA Astrophysics Data System (ADS)
Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.
2000-08-01
A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
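To make the idea of a parametric mass model concrete, a schematic mass-estimating relationship of the kind described might look like the following; the functional form and all coefficients are placeholders for illustration, not the study's models.

```python
def aeroshell_mass_kg(diameter_m, peak_decel_g, heat_load_J_cm2,
                      c0=150.0, c1=25.0, c2=0.8, c3=0.05):
    """Hypothetical mass-estimating relationship: component mass grows with
    vehicle dimension, peak deceleration and total heat load. All coefficients
    are placeholders chosen for illustration only."""
    return (c0
            + c1 * diameter_m ** 2            # structure scales with area
            + c2 * peak_decel_g * diameter_m  # loads-driven structure
            + c3 * heat_load_J_cm2)           # TPS scales with heat load

print(aeroshell_mass_kg(diameter_m=10.0, peak_decel_g=8.0, heat_load_J_cm2=5000.0))
```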
Josephson Parametric Amplifier Based on a Cavity-Embedded Cooper Pair Transistor
NASA Astrophysics Data System (ADS)
Li, Juliang; Rimberg, A. J.
In this experiment a cavity-embedded Cooper-pair transistor (cCPT) is used as a Josephson parametric amplifier. The cCPT consists of a Cooper pair transistor placed at the voltage antinode of a 5.7 GHz shorted quarter-wave resonator so that the CPT provides a galvanic connection between the cavity's central conductor and ground plane, which forms a SQUID loop. Both the flux threading the loop and the gate charge can be modulated, and each can provide the parametric pumping. The reflected signal from the cCPT is further amplified by both SLUG and HEMT amplifiers for characterizing the parametric amplification. A first application of the parametric amplification is to improve the charge sensitivity of a single electron charge detector. This can be done either by pumping on a side band or by shifting the charge state of the cCPT near a bifurcation point. Stimulated emission has also been observed when the cCPT is pumped at twice the resonant frequency in the absence of an input signal. This could allow investigation of the dynamic Casimir effect as well as generation of non-classical photon states. Supported by Grants ARO W911NF-13-10377 and NSF DMR 1507400.
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to the stock price and its changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing-splines technique for predicting stock price. The MARS model is an adaptive non-parametric regression method that is well suited to problems with high dimensions and many variables; smoothing splines are likewise a non-parametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected four accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as the variables influencing the prediction of stock price with the MARS model. After fitting the semi-parametric splines technique, only four accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
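A minimal sketch of the smoothing-spline (semi-parametric) part of such an analysis on synthetic data, using SciPy's UnivariateSpline; the predictor, values, and smoothing factor are invented, and the MARS fit itself is not shown.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Synthetic stand-in: one accounting variable (e.g. earnings per share)
# versus stock price, with noise.
eps = np.sort(rng.uniform(0.5, 5.0, 120))
price = 10 + 6 * np.log(eps) + rng.normal(0, 0.8, eps.size)

# Smoothing spline: the smoothing factor s trades off fidelity and roughness.
spline = UnivariateSpline(eps, price, s=len(eps) * 0.6)
print(spline(np.array([1.0, 2.5, 4.0])))
```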
Safaei, Mohsen; Meneghini, R Michael; Anton, Steven R
2017-09-01
Total knee arthroplasty (TKA) is a common procedure in the United States; it has been estimated that about 4 million people are currently living with primary knee replacement in this country. Despite huge improvements in material properties, implant design, and surgical techniques, some implants fail a few years after surgery. A lack of information about in vivo kinetics of the knee prevents the establishment of a correlated intra- and postoperative loading pattern in knee implants. In this study, a conceptual design of an ultra high molecular weight (UHMW) knee bearing with embedded piezoelectric transducers is proposed, which is able to measure the reaction forces from knee motion as well as harvest energy to power embedded electronics. A simplified geometry consisting of a disk of UHMW with a single embedded piezoelectric ceramic is used in this work to study the general parametric trends of an instrumented knee bearing. A combined finite element and electromechanical modeling framework is employed to investigate the fatigue behavior of the instrumented bearing and the electromechanical performance of the embedded piezoelectric. The model is validated through experimental testing and utilized for further parametric studies. Parametric studies consist of the investigation of the effects of several dimensional and piezoelectric material parameters on the durability of the bearing and electrical output of the transducers. Among all the parameters, it is shown that adding large fillet radii results in noticeable improvement in the fatigue life of the bearing. Additionally, the design is highly sensitive to the depth of piezoelectric pocket. Finally, using PZT-5H piezoceramics, higher voltage and slightly enhanced fatigue life is achieved.
NASA Astrophysics Data System (ADS)
Safaei, Mohsen; Meneghini, R. Michael; Anton, Steven R.
2017-09-01
Total knee arthroplasty is a common procedure in the United States; it has been estimated that about 4 million people are currently living with primary knee replacement in this country. Despite huge improvements in material properties, implant design, and surgical techniques, some implants fail a few years after surgery. A lack of information about in vivo kinetics of the knee prevents the establishment of a correlated intra- and postoperative loading pattern in knee implants. In this study, a conceptual design of an ultra high molecular weight (UHMW) knee bearing with embedded piezoelectric transducers is proposed, which is able to measure the reaction forces from knee motion as well as harvest energy to power embedded electronics. A simplified geometry consisting of a disk of UHMW with a single embedded piezoelectric ceramic is used in this work to study the general parametric trends of an instrumented knee bearing. A combined finite element and electromechanical modeling framework is employed to investigate the fatigue behavior of the instrumented bearing and the electromechanical performance of the embedded piezoelectric. The model is validated through experimental testing and utilized for further parametric studies. Parametric studies consist of the investigation of the effects of several dimensional and piezoelectric material parameters on the durability of the bearing and electrical output of the transducers. Among all the parameters, it is shown that adding large fillet radii results in noticeable improvement in the fatigue life of the bearing. Additionally, the design is highly sensitive to the depth of piezoelectric pocket. Finally, using PZT-5H piezoceramics, higher voltage and slightly enhanced fatigue life is achieved.
Pharmacokinetics Application in Biophysics Experiments
NASA Astrophysics Data System (ADS)
Millet, Philippe; Lemoigne, Yves
Among the available computerised tomography devices, positron emission tomography (PET) has the advantage of being sensitive to pico-molar concentrations of radiotracers inside living matter. Devices adapted to small animal imaging are now commercially available and allow us to study the function rather than the structure of living tissues by in vivo analysis. PET methodology, from the physics of electron-positron annihilation to the biophysics involved in tracers, is treated by other authors in this book. The basics of coincidence detection, image reconstruction, spatial resolution and sensitivity are discussed in the paper by R. Ott. The use of compartment analysis combined with pharmacokinetics is described here to illustrate an application to neuroimaging and to show how parametric imaging can bring insight on the in vivo bio-distribution of a radioactive tracer with small animal PET scanners. After reporting on the use of an intracerebral β+ radiosensitive probe (βP), we describe a small animal PET experiment used to measure the density of 5-HT1A receptors in the rat brain.
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
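A plain Gillespie (SSA) simulation of the Schlögl model named above shows the kind of bistable trajectory whose stationary distribution must be sampled; the rate constants are common benchmark-style values used here only for illustration, and neither the parallel replica acceleration nor the sensitivity bounds are implemented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Schlögl model (B1 + 2X <-> 3X, B2 <-> X, with B1 and B2 buffered).
# Constants below are benchmark-style choices, treated here as illustrative.
c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
N1, N2 = 1e5, 2e5

def propensities(x):
    return np.array([
        c1 * N1 * x * (x - 1) / 2.0,
        c2 * x * (x - 1) * (x - 2) / 6.0,
        c3 * N2,
        c4 * x,
    ])

def gillespie(x0, t_end):
    x, t, samples = x0, 0.0, []
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)          # time to next reaction
        r = rng.choice(4, p=a / a0)             # which reaction fires
        x += (1, -1, 1, -1)[r]
        samples.append(x)
    return np.array(samples)

traj = gillespie(x0=250, t_end=10.0)
print("mean copy number:", traj.mean(), "min/max:", traj.min(), traj.max())
```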
Resonant dampers for parametric instabilities in gravitational wave detectors
NASA Astrophysics Data System (ADS)
Gras, S.; Fritschel, P.; Barsotti, L.; Evans, M.
2015-10-01
Advanced gravitational wave interferometric detectors will operate at their design sensitivity with nearly 1 MW of laser power stored in the arm cavities. Such large power may lead to the uncontrolled growth of acoustic modes in the test masses due to the transfer of optical energy to the mechanical modes of the arm cavity mirrors. These parametric instabilities have the potential to significantly compromise the detector performance and control. Here we present the design of "acoustic mode dampers" that use the piezoelectric effect to reduce the coupling of optical to mechanical energy. Experimental measurements carried on an Advanced LIGO-like test mass have shown a tenfold reduction in the amplitude of several mechanical modes, thus suggesting that this technique can greatly mitigate the impact of parametric instabilities in advanced detectors.
Pretest uncertainty analysis for chemical rocket engine tests
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1987-01-01
A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M
2012-05-01
The objective of the study was to determine the sensitivity of material properties of the juvenile spine to its external and internal responses using a finite element model under compression, and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, disc properties were more sensitive than bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than the other components, suggesting that various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine with potential implications in progressive deformities.
Modeling CO2 degassing and pH in a stream-aquifer system
Choi, J.; Hulseapple, S.M.; Conklin, M.H.; Harvey, J.W.
1998-01-01
Pinal Creek, Arizona receives an inflow of ground water with high dissolved inorganic carbon (57-75 mg/l) and low pH (5.8-6.3). There is an observed increase in in-stream pH from approximately 6.0 to 7.8 over the 3 km downstream of the point of groundwater inflow. We hypothesized that CO2 gas-exchange was the most important factor causing the pH increase in this stream-aquifer system. An existing transport model for coupled ground water-surface water systems (OTIS) was modified to include carbonate equilibria and CO2 degassing, and was used to simulate alkalinity, total dissolved inorganic carbon (C(T)), and pH in Pinal Creek. Because of the non-linear relation between pH and C(T), the modified transport model used a numerical iteration method to solve the non-linearity. The transport model parameters were determined by the injection of two tracers, bromide and propane. The resulting simulations of alkalinity, C(T) and pH reproduced, without fitting, the overall trends in downstream concentrations. A multi-parametric sensitivity analysis (MPSA) was used to identify the relative sensitivities of the predictions to six of the physical and chemical parameters used in the transport model. MPSA results implied that C(T) and pH in stream water were controlled by the mixing of ground water with stream water and CO2 degassing. The relative importance of these two processes varied spatially depending on the hydrologic conditions, such as stream flow velocity and whether a reach gained or lost stream water through interaction with the ground water. The coupled transport model with CO2 degassing and generalized sensitivity analysis presented in this study can be applied to evaluate carbon transport and pH in other coupled stream-ground water systems.
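The nonlinear pH-C(T) relation mentioned above can be illustrated with a small carbonate-equilibrium solver; the equilibrium constants are approximate 25 °C freshwater values and the example inputs are invented, so this is a sketch of the idea rather than the OTIS-based model.

```python
import numpy as np
from scipy.optimize import brentq

# Approximate 25 degC equilibrium constants (mol/L); illustrative values.
K1, K2, Kw = 10**-6.35, 10**-10.33, 1e-14

def alkalinity(h, ct):
    """Carbonate alkalinity for a given [H+] and total inorganic carbon CT."""
    denom = h**2 + K1 * h + K1 * K2
    hco3 = ct * K1 * h / denom
    co3 = ct * K1 * K2 / denom
    return hco3 + 2 * co3 + Kw / h - h

def solve_ph(ct, alk):
    """Solve the nonlinear pH-CT relation, here with a bracketing root find
    in place of the transport model's iteration scheme."""
    h = brentq(lambda h: alkalinity(h, ct) - alk, 1e-12, 1e-2)
    return -np.log10(h)

# Example: CT ~ 5 mmol/L and alkalinity ~ 1 meq/L give a mildly acidic pH.
print(solve_ph(ct=5e-3, alk=1e-3))
```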
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ2. When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators is derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
Aerodynamic shape optimization of a HSCT type configuration with improved surface definition
NASA Technical Reports Server (NTRS)
Thomas, Almuttil M.; Tiwari, Surendra N.
1994-01-01
Two distinct parametrization procedures of generating free-form surfaces to represent aerospace vehicles are presented. The first procedure is the representation using spline functions such as nonuniform rational b-splines (NURBS) and the second is a novel (geometrical) parametrization using solutions to a suitably chosen partial differential equation. The main idea is to develop a surface which is more versatile and can be used in an optimization process. Unstructured volume grid is generated by an advancing front algorithm and solutions obtained using an Euler solver. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow is obtained using an automatic differentiator precompiler software tool. Aerodynamic shape optimization of a complete aircraft with twenty four design variables is performed. High speed civil transport aircraft (HSCT) configurations are targeted to demonstrate the process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Yami; Feng, Jingliang; Cao, Leiming
2016-03-28
Beamsplitters have played an important role in quantum optics experiments. They are often used to split and combine two beams, especially in the construction of an interferometer. In this letter, we experimentally implement a nonlinear beamsplitter using a phase-sensitive parametric amplifier, which is based on four-wave mixing in hot rubidium vapor. Here we show that, despite the different frequencies of the two input beams, the output ports of the nonlinear beamsplitter exhibit interference phenomena. We make measurements of the interference fringe visibility and study how various parameters, such as the intensity gain of the amplifier, the intensity ratio of the two input beams, and the one and two photon detunings, affect the behavior of the nonlinear beamsplitter. It may find potential applications in quantum metrology and quantum information processing.
Estimating causal contrasts involving intermediate variables in the presence of selection bias.
Valeri, Linda; Coull, Brent A
2016-11-20
An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis is either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk difference and risk-ratio scales irrespectively of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
1D analysis of radiative shock damping by lateral radiative losses
NASA Astrophysics Data System (ADS)
Busquet, Michel; Audit, Edouard
2008-11-01
We have demonstrated the effect of lateral radiative losses on radiative shocks propagating in layered quasi-planar atmospheres.[1,2] The damping of the precursor is sensitive to the fraction of self-emitted radiation reflected by the walls (called the albedo). We have recently given an experimental determination of the wall albedo.[2] For a parametric analysis of this effect, we implemented lateral losses in the 1D hydro-rad code MULTI [3] and compared the results with 2D simulations. [1] S. Leygnac, et al., Phys. Plasmas 13, 113301 (2006) [2] M. Busquet, et al., High Energy Density Plasmas 3, 8-11 (2007); M. Gonzalez, et al., Laser Part. Beams 24, 1-6 (2006) [3] Ramis et al., Comp. Phys. Comm. 49, 475 (1988)
Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald
2017-06-01
The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on accuracy and precision of the parametric analyses of dynamic 18F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic 18F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma input Logan graphic analysis and 2 basis functions-based methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (VT) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R2 ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based VT images were most robust to increased noise on a voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K1 values (R2 = 0.94; intraclass correlation coefficient, 0.96). Parametric K1 data showed a larger variability in general; however, no differences were found in robustness between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric 18F-FLT VT images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic 18F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
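A schematic of plasma-input Logan graphical analysis on synthetic one-tissue-compartment kinetics (not the study's 18F-FLT data): after a cut-off time t*, the slope of the Logan plot estimates the distribution volume VT. All kinetic constants, the input function, and the choice of t* are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic kinetics: exponential plasma input and a one-tissue-compartment
# tissue curve with K1 = 0.2 /min and k2 = 0.1 /min, so true VT = K1/k2 = 2.0.
t = np.linspace(0.01, 90, 500)             # minutes
cp = 10.0 * np.exp(-0.15 * t)              # plasma input (arbitrary units)
K1, k2 = 0.2, 0.1
dt = t[1] - t[0]
ct = K1 * np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt   # tissue curve

# Logan plot: y = int_0^t Ct / Ct(t), x = int_0^t Cp / Ct(t); slope -> VT for t > t*.
x = cumulative_trapezoid(cp, t, initial=0) / ct
y = cumulative_trapezoid(ct, t, initial=0) / ct
late = t > 30.0                            # cut-off t* chosen by eye here
slope, intercept = np.polyfit(x[late], y[late], 1)
print(f"Logan VT estimate ~ {slope:.2f} (true VT = {K1 / k2:.2f})")
```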
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
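A tiny sketch of the third and fourth steps described above: fit a polynomial to a cell characteristic versus temperature, then evaluate the fitted polynomial to generate parametric data. The data points are invented.

```python
import numpy as np

# Hypothetical measured short-circuit current (A) of a solar cell module
# at several temperatures (deg C); values are invented for illustration.
temp_c = np.array([-50.0, 0.0, 28.0, 60.0, 100.0])
isc_a = np.array([2.05, 2.18, 2.26, 2.35, 2.47])

# Step 3: polynomial fit of the characteristic versus temperature.
coeffs = np.polyfit(temp_c, isc_a, deg=2)

# Step 4: use the polynomial coefficients to generate parametric data.
temps = np.linspace(-60, 110, 5)
print(np.polyval(coeffs, temps))
```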
The parametric resonance—from LEGO Mindstorms to cold atoms
NASA Astrophysics Data System (ADS)
Kawalec, Tomasz; Sierant, Aleksandra
2017-07-01
We show an experimental setup based on a popular LEGO Mindstorms set, allowing us to both observe and investigate the parametric resonance phenomenon. The presented method is simple but covers a variety of student activities such as embedded software development, conducting measurements, and data collection and analysis. It may be used during science shows, as part of student projects, and to illustrate parametric resonance in mechanics or even quantum physics during lectures or classes. The parametrically driven LEGO pendulum gains energy in a spectacular way, increasing its amplitude from 10° to about 100° within a few tens of seconds. We also provide a short description of a wireless absolute orientation sensor that may be used in quantitative analysis of driven or free pendulum motion.
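The same physics can be shown numerically: a pendulum whose effective gravity is modulated at twice its natural frequency grows in amplitude until nonlinearity limits the growth. The sketch below integrates such a pendulum with SciPy; the frequency, drive depth, and damping are arbitrary values, not measurements from the LEGO setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, eps, beta = 2 * np.pi * 0.5, 0.1, 0.01   # natural frequency, drive depth, damping

def pendulum(t, y):
    theta, omega = y
    # Effective gravity modulated at twice the natural frequency -> parametric drive.
    return [omega,
            -2 * beta * omega - w0**2 * (1 + eps * np.cos(2 * w0 * t)) * np.sin(theta)]

sol = solve_ivp(pendulum, (0, 120), [np.radians(10), 0.0], max_step=0.01)
print("initial amplitude ~10 deg, largest late-time swing ~",
      round(float(np.degrees(np.max(np.abs(sol.y[0][-2000:])))), 1), "deg")
```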
NASA Astrophysics Data System (ADS)
Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui
2016-03-01
Subject-specific longitudinal DTI studies are vital for investigating pathological changes in lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD method (iSPREAD) by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
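A minimal 2D Perona-Malik-style anisotropic diffusion step conveys the edge-preserving smoothing that iSPREAD incorporates in 3D; the conductance function, threshold kappa, step size, and iteration count here are placeholder choices, not the paper's settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Edge-preserving smoothing: diffuse weakly across strong gradients
    (edges) and strongly across weak ones. 2D illustration only."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients to the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # Perona-Malik conductance: small where |gradient| is large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.zeros((64, 64)); noisy[:, 32:] = 1.0
noisy += np.random.default_rng(5).normal(0, 0.1, noisy.shape)
print(anisotropic_diffusion(noisy).std())
```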
The performance of sample selection estimators to control for attrition bias.
Grasdal, A
2001-07-01
Sample attrition is a potential source of selection bias in experimental, as well as non-experimental programme evaluation. For labour market outcomes, such as employment status and earnings, missing data problems caused by attrition can be circumvented by the collection of follow-up data from administrative registers. For most non-labour market outcomes, however, investigators must rely on participants' willingness to co-operate in keeping detailed follow-up records and statistical correction procedures to identify and adjust for attrition bias. This paper combines survey and register data from a Norwegian randomized field trial to evaluate the performance of parametric and semi-parametric sample selection estimators commonly used to correct for attrition bias. The considered estimators work well in terms of producing point estimates of treatment effects close to the experimental benchmark estimates. Results are sensitive to exclusion restrictions. The analysis also demonstrates an inherent paradox in the 'common support' approach, which prescribes exclusion from the analysis of observations outside of common support for the selection probability. The more important treatment status is as a determinant of attrition, the larger is the proportion of treated with support for the selection probability outside the range, for which comparison with untreated counterparts is possible. Copyright 2001 John Wiley & Sons, Ltd.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data for studying their association with disease and health.
3-D Quantitative Dynamic Contrast Ultrasound for Prostate Cancer Localization.
Schalk, Stefan G; Huang, Jing; Li, Jia; Demi, Libertario; Wijkstra, Hessel; Huang, Pintong; Mischi, Massimo
2018-04-01
To investigate quantitative 3-D dynamic contrast-enhanced ultrasound (DCE-US) and, in particular 3-D contrast-ultrasound dispersion imaging (CUDI), for prostate cancer detection and localization, 43 patients referred for 10-12-core systematic biopsy underwent 3-D DCE-US. For each 3-D DCE-US recording, parametric maps of CUDI-based and perfusion-based parameters were computed. The parametric maps were divided in regions, each corresponding to a biopsy core. The obtained parameters were validated per biopsy location and after combining two or more adjacent regions. For CUDI by correlation (r) and for the wash-in time (WIT), a significant difference in parameter values between benign and malignant biopsy cores was found (p < 0.001). In a per-prostate analysis, sensitivity and specificity were 94% and 50% for r, and 53% and 81% for WIT. Based on these results, it can be concluded that quantitative 3-D DCE-US could aid in localizing prostate cancer. Therefore, we recommend follow-up studies to investigate its value for targeting biopsies. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
Recent advances in measurement of the water vapour continuum in the far-infrared spectral region
NASA Astrophysics Data System (ADS)
Green, P. D.; Newman, S. M.; Beeby, R. J.; Murray, J. E.; Pickering, J. C.; Harries, J. E.
2012-06-01
We present a new derivation of the foreign-broadened water vapour continuum in the far-infrared (far-IR) pure rotation band between 24 μm and 120 μm (85-420 cm-1) from field data collected in flight campaigns of the Continuum Absorption by Visible and IR radiation and Atmospheric Relevance (CAVIAR) project with Imperial College's Tropospheric Airborne Fourier Transform Spectrometer (TAFTS) far-IR spectro-radiometer instrument onboard the Facility for Airborne Atmospheric Measurement (FAAM) BAe-146 research aircraft; and compare this new derivation with those recently published in the literature in this spectral band. This new dataset validates the current Mlawer-Tobin-Clough-Kneizys-Davies (MT-CKD) 2.5 model parametrization above 300 cm-1, but indicates the need to strengthen the parametrization below 300 cm-1, by up to 50 per cent at 100 cm-1. Data recorded at a number of flight altitudes have allowed measurements within a wide range of column water vapour environments, greatly increasing the sensitivity of this analysis to the continuum strength.
Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters
NASA Astrophysics Data System (ADS)
Kim, T.; Kim, Y. S.
2017-12-01
Frequency analysis of hydrometeorological data is one of the most important elements in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis typically assumes that the observed data are statistically stationary and applies a parametric method based on the parameters of a probability distribution. A parametric method requires a sufficient amount of reliable data; in Korea, however, snowfall records are insufficient because the number of snowfall observation days and the mean maximum daily snowfall depth have been decreasing with climate change. In this study, we conducted a frequency analysis of snowfall using the bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreases at most stations, and the rates of change are consistent between the parametric and non-parametric frequency analyses at most stations. This study shows that resampling methods enable frequency analysis of snowfall depth when observed samples are insufficient, and the approach can be applied to other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment. This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
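A bare-bones illustration of the bootstrap portion of such an analysis: resample annual maximum snowfall depths with replacement and read off a design quantile. The observations are invented and the SIR step is not included.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical annual maximum daily snowfall depths (cm) at one station.
obs = np.array([12.0, 8.5, 21.0, 5.0, 15.5, 9.0, 30.2, 11.1, 7.3, 18.4,
                13.6, 6.2, 25.0, 10.4, 16.8])

# Non-parametric bootstrap of the 50-year quantile (98th percentile of annual maxima).
n_boot, p = 5000, 1 - 1 / 50
estimates = np.array([
    np.quantile(rng.choice(obs, size=obs.size, replace=True), p)
    for _ in range(n_boot)
])
print("50-yr depth estimate:", np.quantile(obs, p),
      "bootstrap 90% interval:", np.quantile(estimates, [0.05, 0.95]))
```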
Power flow analysis of two coupled plates with arbitrary characteristics
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
In the last progress report (Feb. 1988) some results were presented for a parametric analysis of the vibrational power flow between two coupled plate structures using the mobility power flow approach. The results reported then were for changes in the structural parameters of the two plates, but with the two plates identical in their structural characteristics. Herein, this limitation is removed. The vibrational power input and output are evaluated for different values of the structural damping loss factor for the source and receiver plates. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. The results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits derived from using the mobility power flow approach are examined.
AASHTO mechanistic-empirical pavement design guide parametric study.
DOT National Transportation Integrated Search
2012-03-01
This study focuses on assessing the robustness of the AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG v 1.1) for rigid pavement design projects in Wisconsin. The primary tasks conducted in this study included performing sensitivity analys...
Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation
NASA Astrophysics Data System (ADS)
Hillis, James M.; Brainard, David H.
2007-08-01
Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.
Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili
2014-03-01
Although most statistical methods for diagnostic studies focus on disease processes with a binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the early disease detection abilities of two markers, it is of interest to estimate the confidence interval of the difference between their sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
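A minimal non-parametric sketch of the quantity discussed above, under stated assumptions rather than the authors' exact estimators: the early-stage sensitivity of each marker is taken as the fraction of early-stage subjects whose value lies between a threshold fixed by a target specificity in the normal group and a threshold fixed by a target sensitivity in the fully diseased group, and a percentile bootstrap gives a confidence interval for the difference between two markers. The simulated data and target levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical marker values for the three ordinal groups, for two markers.
def simulate(shift):
    return {"normal": rng.normal(0, 1, 80),
            "early": rng.normal(shift, 1, 60),
            "full": rng.normal(2 * shift, 1, 70)}

marker1, marker2 = simulate(2.0), simulate(1.5)
spec_target, sens_full_target = 0.90, 0.80

def early_sensitivity(data):
    """Fraction of early-stage subjects falling between the two thresholds."""
    lo = np.quantile(data["normal"], spec_target)          # controls specificity in normals
    hi = np.quantile(data["full"], 1 - sens_full_target)   # controls sensitivity to full disease
    return np.mean((data["early"] > lo) & (data["early"] < hi))

def resample(data):
    return {g: rng.choice(v, size=v.size, replace=True) for g, v in data.items()}

diff = early_sensitivity(marker1) - early_sensitivity(marker2)
boot = np.array([early_sensitivity(resample(marker1)) - early_sensitivity(resample(marker2))
                 for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])
print(f"Difference in early-stage sensitivity: {diff:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```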
Parametric Study of Synthetic-Jet-Based Flow Control on a Vertical Tail Model
NASA Astrophysics Data System (ADS)
Monastero, Marianne; Lindstrom, Annika; Beyar, Michael; Amitay, Michael
2015-11-01
Separation control over the rudder of the vertical tail of a commercial airplane using synthetic-jet-based flow control can lead to a reduction in tail size, with an associated decrease in drag and increase in fuel savings. A parametric, experimental study was undertaken using an array of finite span synthetic jets to investigate the sensitivity of the enhanced vertical tail side force to jet parameters, such as jet spanwise spacing and jet momentum coefficient. A generic wind tunnel model was designed and fabricated to fundamentally study the effects of the jet parameters at varying rudder deflection and model sideslip angles. Wind tunnel results obtained from pressure measurements and tuft flow visualization in the Rensselaer Polytechnic Subsonic Wind Tunnel show a decrease in separation severity and increase in model performance in comparison to the baseline, non-actuated case. The sensitivity to various parameters will be presented.
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights, and significance of vertices. Centrality analysis typically applies a method based on a single property of the graph vertices. In graph theory, centrality is commonly analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality, and eigenvector centrality. We propose a new multi-parametric centrality method that includes a number of basic properties of a network member simultaneously. A mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the individual centrality methods. To evaluate the method, a graph model with hundreds of vertices is analyzed. The comparative analysis demonstrates the accuracy of the presented method, which accounts simultaneously for a number of basic vertex properties.
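One way such a combined score could be formed is sketched below; the use of networkx, the choice of component centralities, and the equal weighting are illustrative assumptions, not the authors' formulation.

```python
import networkx as nx
import numpy as np

# Hypothetical test graph; the paper analyzes a graph model with hundreds of vertices.
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)

# Individual single-property centralities.
measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def rescale(values):
    """Map a centrality dict to [0, 1] so different measures are comparable."""
    v = np.array(list(values.values()), dtype=float)
    lo, hi = v.min(), v.max()
    return {node: (val - lo) / (hi - lo) if hi > lo else 0.0
            for node, val in values.items()}

scaled = {name: rescale(vals) for name, vals in measures.items()}

# Multi-parametric centrality: equal-weight average of the rescaled measures
# (the equal weights are an illustrative assumption).
multi = {node: np.mean([scaled[name][node] for name in scaled]) for node in G}

top5 = sorted(multi, key=multi.get, reverse=True)[:5]
print("Top-5 vertices by multi-parametric centrality:", top5)
```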
Winfield, Jessica M.; Payne, Geoffrey S.; Weller, Alex; deSouza, Nandita M.
2016-01-01
Multi-parametric magnetic resonance imaging (mpMRI) offers a unique insight into tumor biology by combining functional MRI techniques that inform on cellularity (diffusion-weighted MRI), vascular properties (dynamic contrast-enhanced MRI), and metabolites (magnetic resonance spectroscopy) and has scope to provide valuable information for prognostication and response assessment. Challenges in the application of mpMRI in the clinic include the technical considerations in acquiring good quality functional MRI data, development of robust techniques for analysis, and clinical interpretation of the results. This article summarizes the technical challenges in acquisition and analysis of multi-parametric MRI data before reviewing the key applications of multi-parametric MRI in clinical research and practice. PMID:27748710
Measures of Situation Awareness: An Experimental Evaluation
1991-10-01
...occurrence from non-occurrence of the target event, referred to as sensitivity (Macmillan and Creelman, 1990). Because sensitivity declines if pilots are...Pollack and Norman, 1964; see also Craig, 1979; Macmillan and Creelman, 1990). Finally, avoidance failures were measured simply as the number of times...Wesley. Macmillan, N. A., & Creelman, C. D. (1990). Response bias: Characteristics of detection theory, threshold theory, and "non-parametric" indexes
Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo
2017-01-01
Purpose: To investigate dynamic contrast-enhanced MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment of locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods: 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring SIS, a linear combination of the percentage change (Δ) of the maximum signal difference (MSD) and the wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum standardized uptake value (SUVmax). Tumor regression grade (TRG) was estimated after surgery. Non-parametric tests and receiver operating characteristic (ROC) analysis were used. Results: 55 patients (TRG 1-2) were classified as responders and 20 as non-responders. ΔSIS reached a sensitivity of 93%, specificity of 80% and accuracy of 89% (cut-off 6%) in differentiating responders from non-responders, and a sensitivity of 93%, specificity of 69% and accuracy of 79% (cut-off 30%) in identifying pathological complete response (pCR). Therapy assessment via ΔSUVmax reached a sensitivity of 67%, specificity of 75% and accuracy of 70% (cut-off 60%) in differentiating responders from non-responders, and a sensitivity of 80%, specificity of 31% and accuracy of 51% (cut-off 44%) in identifying pCR. Conclusions: CRT response assessment by DCE-MRI shows higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing significant response and pCR to be better discriminated. PMID:28042958
NASA Technical Reports Server (NTRS)
Stanley, Douglas O.; Unal, Resit; Joyner, C. R.
1992-01-01
The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies is presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with the others. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies as compared to traditional single-variable parametric trade studies are also discussed.
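The orthogonal-array idea can be illustrated with a minimal sketch: each row of a standard Taguchi L8 array defines one configuration, so only 8 runs are needed instead of the 2^5 = 32 of a full two-level factorial. The factor names, level settings, and the placeholder weight model below are hypothetical and stand in for the actual vehicle sizing analysis.

```python
import numpy as np

# Standard Taguchi L8(2^7) orthogonal array (levels coded 0/1); only the first
# five columns are used for the five illustrative design factors.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

factors = ["thrust_to_weight", "transition_mach", "mixture_ratio",
           "area_ratio", "chamber_pressure"]
# Hypothetical low/high settings for each factor.
levels = {
    "thrust_to_weight": (1.2, 1.4),
    "transition_mach": (2.0, 3.0),
    "mixture_ratio": (6.0, 7.0),
    "area_ratio": (40.0, 80.0),
    "chamber_pressure": (2000.0, 3000.0),   # psia
}

def vehicle_weight(settings):
    """Placeholder response model standing in for the vehicle sizing code."""
    return (1.0e6 - 2.0e5 * (settings["thrust_to_weight"] - 1.2)
            + 3.0e4 * abs(settings["transition_mach"] - 2.5)
            + 1.5e4 * (settings["mixture_ratio"] - 6.0)
            - 5.0e2 * (settings["area_ratio"] - 40.0)
            - 1.0e1 * (settings["chamber_pressure"] - 2000.0))

# Run the 8 configurations prescribed by the orthogonal array.
responses = []
for row in L8[:, :len(factors)]:
    settings = {f: levels[f][lvl] for f, lvl in zip(factors, row)}
    responses.append(vehicle_weight(settings))
responses = np.array(responses)

# Main effect of each factor: mean response at the high level minus at the low level.
for j, f in enumerate(factors):
    col = L8[:, j]
    effect = responses[col == 1].mean() - responses[col == 0].mean()
    print(f"{f:>18s}: main effect on weight = {effect:+.0f} lb")
```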
Free energy and hidden barriers of the β-sheet structure of prion protein.
Paz, S Alexis; Abrams, Cameron F
2015-10-13
On-the-fly free-energy parametrization is a new collective variable biasing approach akin to metadynamics with one important distinction: rather than acquiring an accelerated distribution via a history-dependent bias potential, sampling on this distribution is achieved from the beginning of the simulation using temperature-accelerated molecular dynamics. In the present work, we compare the performance of both approaches to compute the free-energy profile along a scalar collective variable measuring the H-bond registry of the β-sheet structure of the mouse Prion protein. Both methods agree on the location of the free-energy minimum, but free-energy profiles from well-tempered metadynamics are subject to a much higher degree of statistical noise due to hidden barriers. The sensitivity of metadynamics to hidden barriers is shown to be a consequence of the history dependence of the bias potential, and we detail the nature of these barriers for the prion β-sheet. In contrast, on-the-fly parametrization is much less sensitive to these barriers and thus displays improved convergence behavior relative to that of metadynamics. While hidden barriers are a frequent and central issue in free-energy methods, on-the-fly free-energy parametrization appears to be a robust and preferable method to confront this issue.
Moderate temperature control technology for a lunar base
NASA Technical Reports Server (NTRS)
Swanson, Theodore D.; Sridhar, K. R.; Gottmann, Matthias
1993-01-01
A parametric analysis is performed to compare different heat pump based thermal control systems for a Lunar Base. Rankine cycle and absorption cycle heat pumps are compared and optimized for a 100 kW cooling load. Variables include the use or lack of an interface heat exchanger, and different operating fluids. Optimization of system mass with respect to radiator rejection temperature is performed. The results indicate a relatively small sensitivity of Rankine cycle system mass to these variables, with optimized system masses of about 6000 kg for the 100 kW thermal load. It is quantitatively demonstrated that absorption based systems are not mass competitive with Rankine systems.
NASA Astrophysics Data System (ADS)
Barone, F.; Giordano, G.; Acernese, F.; Romano, R.
2018-03-01
In this paper, we present some innovative and general strategies for the control of benches and platforms that the introduction of the new class of monolithic UNISA folded pendulums is now making possible, including in demanding environments such as ultra-high vacuum (UHV), cryogenics and other harsh conditions. In particular, we present and discuss a parametric analysis of the control models in connection with sensor limitations in terms of sensitivity and bandwidth. Finally, we present and discuss some experimental tests on a laboratory platform, underlining the present advantages and the expected future improvements.
Improving comfort of shoe sole through experiments based on CAD-FEM modeling.
Franciosa, Pasquale; Gerbino, Salvatore; Lanzotti, Antonio; Silvestri, Luca
2013-01-01
It was reported that next to style, comfort is the second key aspect in purchasing footwear. One of the most important components of footwear is the shoe sole, whose design is based on many factors such as foot shape/size, perceived comfort and materials. The present paper focuses on the parametric analysis of a shoe sole to improve the perceived comfort. The sensitivity of geometric and material design factors on comfort degree was investigated by combining real experimental tests and CAD-FEM simulations. The correlation between perceived comfort and physical responses, such as plantar pressures, was estimated by conducting real tests. Four different conditions were analyzed: subjects wearing three commercially available shoes and in a barefoot condition. For each condition, subjects expressed their perceived comfort score. By adopting plantar sensors, the plantar pressures were also monitored. Once given such a correlation, a parametric FEM model of the footwear was developed. In order to better simulate contact at the plantar surface, a detailed FEM model of the foot was also generated from CT scan images. Lastly, a fractional factorial design array was applied to study the sensitivity of different sets of design factors on comfort degree. The findings of this research showed that the sole thickness and its material highly influence perceived comfort. In particular, softer materials and thicker soles contribute to increasing the degree of comfort. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Assessment of three different software systems in the evaluation of dynamic MRI of the breast.
Kurz, K D; Steinhaus, D; Klar, V; Cohnen, M; Wittsack, H J; Saleh, A; Mödder, U; Blondin, D
2009-02-01
The aim was to compare the diagnostic performance and handling of dynamic contrast-enhanced MRI of the breast with two commercial software solutions ("CADstream" and "3TP") and one self-developed software system ("Mammatool"). Identical data sets of dynamic breast MRI from 21 patients were evaluated retrospectively with all three software systems. The exams were classified according to the BI-RADS classification. The number of lesions in the parametric mapping was compared to histology or follow-up of more than 2 years. In addition, 25 quality criteria were judged by 3 independent investigators with a score from 0 to 5. Statistical analysis was performed to document the quality ranking of the different software systems. There were 9 invasive carcinomas, one pure DCIS, one papilloma, one radial scar, three histologically proven changes due to mastopathy, one adenosis and two fibroadenomas. Additionally two patients with enhancing parenchyma followed with MRI for more than 3 years and one scar after breast conserving therapy were included. All malignant lesions were classified as BI-RADS 4 or 5 using all software systems and showed significant enhancement in the parametric mapping. "CADstream" showed the best score on subjective quality criteria. "3TP" showed the lowest number of false-positive results. "Mammatool" produced the lowest number of benign tissues indicated with parametric overlay. All three software programs tested were adequate for sensitive and efficient assessment of dynamic MRI of the breast. Improvements in specificity may be achievable.
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
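A minimal sketch of the common random number idea, under stated assumptions: a simple birth-death process (not one of the paper's example systems) is simulated with Gillespie's direct method, and the sensitivity of the mean population to the birth rate is estimated by a finite difference that either shares the random seed between nominal and perturbed runs (CRN) or does not (independent runs).

```python
import numpy as np

def gillespie_birth_death(birth, death, x0, t_end, rng):
    """Gillespie direct method for a birth-death process: X -> X+1 at rate `birth`,
    X -> X-1 at rate `death * X`. Returns the state at time t_end."""
    t, x = 0.0, x0
    while True:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total <= 0:
            return x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        if rng.random() < rates[0] / total:
            x += 1
        else:
            x -= 1

def mean_population(birth, seed, n_runs=200, death=0.1, x0=0, t_end=20.0):
    rng = np.random.default_rng(seed)
    return np.mean([gillespie_birth_death(birth, death, x0, t_end, rng)
                    for _ in range(n_runs)])

birth0, h, seed = 1.0, 0.05, 1234

# Common random numbers: nominal and perturbed estimates share the same seed,
# so their noise is positively correlated and the finite difference has low variance.
sens_crn = (mean_population(birth0 + h, seed) - mean_population(birth0, seed)) / h

# Independent random numbers: different seeds, much noisier finite difference.
sens_irn = (mean_population(birth0 + h, seed + 1) - mean_population(birth0, seed)) / h

print(f"CRN sensitivity estimate: {sens_crn:.2f}")
print(f"IRN sensitivity estimate: {sens_irn:.2f}")
print("Steady-state reference d<X>/d(birth) = 1/death =", 1 / 0.1)
```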
The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter
We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities depend on PFT and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
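The variance-based indices at the heart of such a GSA can be illustrated with a short sketch; the test function, parameter count, and sample sizes below are hypothetical stand-ins (they have nothing to do with ELM or its surrogate), and the first-order Sobol indices are computed with a standard pick-and-freeze estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Cheap stand-in for a surrogate of an expensive land-model output."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2] + 0.1 * x[:, 3]

n_params, n_samples = 4, 20000

# Two independent sample matrices on the unit hypercube (pick-and-freeze scheme).
A = rng.random((n_samples, n_params))
B = rng.random((n_samples, n_params))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order Sobol index for each parameter: freeze column i of A into B.
for i in range(n_params):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    yABi = model(ABi)
    # Saltelli-style estimator of S_i = V_i / V(Y).
    S_i = np.mean(yB * (yABi - yA)) / var_y
    print(f"parameter {i}: first-order Sobol index ~ {S_i:.3f}")
```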
Turboprop cargo aircraft systems study
NASA Technical Reports Server (NTRS)
Muehlbauer, J. C.; Hewell, J. G., Jr.; Lindenbaum, S. P.; Randall, C. C.; Searle, N.; Stone, R. G., Jr.
1981-01-01
The effects of using advanced turboprop propulsion systems to reduce the fuel consumption and direct operating costs of cargo aircraft were studied, and the impact of these systems on aircraft noise and noise prints around a terminal area was determined. Parametric variations of aircraft and propeller characteristics were investigated to determine their effects on noiseprint areas, fuel consumption, and direct operating costs. From these results, three aircraft designs were selected and subjected to design refinements and sensitivity analyses. Three competitive turbofan aircraft were also defined from parametric studies to provide a basis for comparing the two types of propulsion.
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
A Parametric Analysis of HELSTAR
1983-12-01
AFIT/GSO/OS/83D-7. Thesis, James Miklasevich, Captain, USAF. [Front-matter and table-of-contents fragments: Statement of Problem; Objectives of the Research; Launch Scenarios; Launch Sequences 1-3.]
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
ERIC Educational Resources Information Center
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
Summary of FY17 ParaChoice Accomplishments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Rebecca Sobel; West, Todd H.
As part of analysis support for FCTO, Sandia assesses the factors that influence the future of FCEVs and hydrogen in the US vehicle fleet. Using ParaChoice, we model competition between FCEVs, conventional vehicles, and other alternative vehicle technologies in order to understand the drivers and sensitivities of adoption of FCEVs. ParaChoice leverages existing tools such as Autonomie (Moawad et al., 2016), AEO (U.S. Energy Information Administration, 2016), and the Macro System Model (Ruth et al., 2009) in order to synthesize a complete picture of the co-evolution of vehicle technology development, energy price evolution, and hydrogen production and pricing, with consumer demand for vehicles and fuel. We then assess impacts of FCEV market penetration and hydrogen use on greenhouse gas (GHG) emissions and petroleum consumption, providing context for the role of policy, technology development, infrastructure, and consumer behavior on the vehicle and fuel mix through parametric and sensitivity analyses.
Parametrically excited non-linear multidegree-of-freedom systems with repeated natural frequencies
NASA Astrophysics Data System (ADS)
Tezak, E. G.; Nayfeh, A. H.; Mook, D. T.
1982-12-01
A method for analyzing multidegree-of-freedom systems having a repeated natural frequency subjected to a parametric excitation is presented. Attention is given to the ordering of the various terms (linear and non-linear) in the governing equations. The analysis is based on the method of multiple scales. As a numerical example involving a parametric resonance, panel flutter is discussed in detail in order to illustrate the type of results one can expect to obtain with this analysis. Some of the analytical results are verified by a numerical integration of the governing equations.
The impact of parametrized convection on cloud feedback.
Webb, Mark J; Lock, Adrian P; Bretherton, Christopher S; Bony, Sandrine; Cole, Jason N S; Idelkadi, Abderrahmane; Kang, Sarah M; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D; Zhao, Ming
2015-11-13
We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that 'ConvOff' models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. © 2015 The Authors.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), a parametric method for obtaining a linear combination of markers that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and the need for a prudent choice of the smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray and synthetic data sets. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
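A simplified sketch of the general idea of casting AUC maximization as penalized regression is given below; it is an illustration under stated assumptions (pairwise case-control differences regressed on a constant target with an elastic net penalty), not the authors' exact AucPR estimator, and the simulated data, penalty values, and gene counts are hypothetical.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical high-dimensional data: 40 cases, 40 controls, 500 genes,
# only the first 5 genes carry signal.
n, p = 40, 500
X_case = rng.normal(0.0, 1.0, (n, p)); X_case[:, :5] += 0.8
X_ctrl = rng.normal(0.0, 1.0, (n, p))

# Pairwise-difference regression: every (case, control) difference vector
# should score positively, so regress a target of 1 on the differences
# without an intercept and penalize the coefficients (elastic net).
D = (X_case[:, None, :] - X_ctrl[None, :, :]).reshape(-1, p)
t = np.ones(D.shape[0])

enet = ElasticNet(alpha=0.05, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
enet.fit(D, t)

# Evaluate the learned linear combination as a composite marker score.
scores = np.concatenate([X_case @ enet.coef_, X_ctrl @ enet.coef_])
labels = np.concatenate([np.ones(n), np.zeros(n)])
print("Selected genes:", np.flatnonzero(enet.coef_)[:10])
print("Training AUC of the combined marker:", round(roc_auc_score(labels, scores), 3))
```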
Advanced imaging techniques in brain tumors
2009-01-01
Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS, the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, and the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric, algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
A multimode electromechanical parametric resonator array
Mahboob, I.; Mounaix, M.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.
2014-01-01
Electromechanical resonators have emerged as a versatile platform in which detectors with unprecedented sensitivities and quantum mechanics in a macroscopic context can be developed. These schemes invariably utilise a single resonator but increasingly the concept of an array of electromechanical resonators is promising a wealth of new possibilities. In spite of this, experimental realisations of such arrays have remained scarce due to the formidable challenges involved in their fabrication. In a variation to this approach, we identify 75 harmonic vibration modes in a single electromechanical resonator of which 7 can also be parametrically excited. The parametrically resonating modes exhibit vibrations with only 2 oscillation phases which are used to build a binary information array. We exploit this array to execute a mechanical byte memory, a shift-register and a controlled-NOT gate thus vividly illustrating the availability and functionality of an electromechanical resonator array by simply utilising higher order vibration modes. PMID:24658349
Parametric Characterization of TES Detectors Under DC Bias
NASA Technical Reports Server (NTRS)
Chiao, Meng P.; Smith, Stephen James; Kilbourne, Caroline A.; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.;
2016-01-01
The X-ray integrated field unit (X-IFU) in European Space Agency's (ESA's) Athena mission will be the first high-resolution X-ray spectrometer in space using a large-format transition-edge sensor microcalorimeter array. Motivated by optimization of detector performance for X-IFU, we have conducted an extensive campaign of parametric characterization on transition-edge sensor (TES) detectors with nominal geometries and physical properties in order to establish sensitivity trends relative to magnetic field, dc bias on detectors, operating temperature, and to improve our understanding of detector behavior relative to its fundamental properties such as thermal conductivity, heat capacity, and transition temperature. These results were used for validation of a simple linear detector model in which a small perturbation can be introduced to one or multiple parameters to estimate the error budget for X-IFU. We will show here results of our parametric characterization of TES detectors and briefly discuss the comparison with the TES model.
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
NASA Astrophysics Data System (ADS)
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
The current study aims to investigate the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a local food product unique to Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable to consumer studies with a large number of respondents, even though the data may not satisfy the assumptions of ANOVA (analysis of variance).
NASA Astrophysics Data System (ADS)
Nagpal, Shubhrata; Jain, Nitin Kumar; Sanyal, Shubhashis
2016-01-01
The problem of finding the stress concentration factor of a loaded rectangular plate has offered considerable analytical difficulty. The present work focuses on understanding the behavior of isotropic and orthotropic plates subjected to static in-plane loading using the finite element method. The complete plate model configuration has been analyzed using the finite element based software ANSYS. In the present work, two parameters, the plate thickness-to-width ratio (T/A) and the hole diameter-to-width ratio (D/A), have been varied for the analysis of the stress concentration factor (SCF) and its mitigation. Plates of five different materials have been considered for the complete analysis to find the sensitivity of the stress concentration factor. The D/A ratio was varied from 0.1 to 0.7 for the analysis of the SCF and from 0.1 to 0.5 for analyzing the mitigation of the SCF. T/A ratios of 0.01, 0.05 and 0.1 are considered for all cases. The results are presented in graphical form and discussed. The mitigation in SCF reported is very encouraging. The SCF is more sensitive to the D/A ratio than to the T/A ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.
Purpose: The Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC 2010) survey of radiation dose-volume effects on salivary gland function has called for improved understanding of intragland dose sensitivity and the effectiveness of partial sparing in salivary glands. Regional dose susceptibility of sagittally- and coronally-sub-segmented parotid gland has been studied. Specifically, we examine whether individual consideration of sub-segments leads to improved prediction of xerostomia compared with whole parotid mean dose. Methods: Data from 102 patients treated for head-and-neck cancers at the BC Cancer Agency were used in this study. Whole mouth stimulated saliva was collected before (baseline), three months, and one year after cessation of radiotherapy. Organ volumes were contoured using treatment planning CT images and sub-segmented into regional portions. Both non-parametric (local regression) and parametric (mean dose exponential fitting) methods were employed. A bootstrap technique was used for reliability estimation and cross-comparison. Results: Salivary loss is described well using non-parametric and mean dose models. Parametric fits suggest a significant distinction in dose response between medial-lateral and anterior-posterior aspects of the parotid (p<0.01). Least-squares and least-median-squares estimates differ significantly (p<0.00001), indicating fits may be skewed by noise or outliers. Salivary recovery exhibits a weakly arched dose response: the highest recovery is seen at intermediate doses. Conclusions: Salivary function loss is strongly dose dependent. In contrast, no useful dose dependence was observed for function recovery. Regional dose dependence was observed, but may have resulted from a bias in dose distributions.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
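The specificity issue described above can be reproduced with a short simulation sketch; the populations, sample size, and repetition count are illustrative, and scipy's shapiro and normaltest functions are used as stand-ins for the Shapiro-Wilk and D'Agostino-Pearson tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_trials, alpha = 30, 1000, 0.05

# Draw repeated small samples from a truly Gaussian and a lognormal population,
# and count how often each normality test rejects normality.
reject = {"shapiro_gauss": 0, "dagostino_gauss": 0,
          "shapiro_lognorm": 0, "dagostino_lognorm": 0}

for _ in range(n_trials):
    gauss = rng.normal(100.0, 10.0, n)
    lognorm = rng.lognormal(mean=4.6, sigma=0.4, size=n)
    reject["shapiro_gauss"] += stats.shapiro(gauss).pvalue < alpha
    reject["dagostino_gauss"] += stats.normaltest(gauss).pvalue < alpha
    reject["shapiro_lognorm"] += stats.shapiro(lognorm).pvalue < alpha
    reject["dagostino_lognorm"] += stats.normaltest(lognorm).pvalue < alpha

# Following the abstract's convention that a "positive" is a Gaussian sample:
# sensitivity = fraction of Gaussian samples not rejected,
# specificity = fraction of lognormal samples correctly rejected.
print("Shapiro-Wilk        sensitivity:", 1 - reject["shapiro_gauss"] / n_trials,
      " specificity:", reject["shapiro_lognorm"] / n_trials)
print("D'Agostino-Pearson  sensitivity:", 1 - reject["dagostino_gauss"] / n_trials,
      " specificity:", reject["dagostino_lognorm"] / n_trials)
```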
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Optimization of the coplanar interdigital capacitive sensor
NASA Astrophysics Data System (ADS)
Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola
2017-02-01
Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.
Parametric Analysis of Light Truck and Automobile Maintenance
DOT National Transportation Integrated Search
1979-05-01
Utilizing the Automotive and Light Truck Service and Repair Data Base developed in the companion report, parametric analyses were made of the relationships between maintenance costs, scheduled and unscheduled, and vehicle parameters; body class, manufa...
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations on an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be, e.g., a curvaton or the Standard Model Higgs.
Demographic factors associated with moral sensitivity among nursing students.
Tuvesson, Hanna; Lützén, Kim
2017-11-01
Today's healthcare environment is often characterized by an ethically demanding work situation, and nursing students need to prepare to meet ethical challenges in their future role. Moral sensitivity is an important aspect of the ethical decision-making process, but little is known regarding nursing students' moral sensitivity and its possible development during nursing education. The aims of this study were to investigate moral sensitivity among nursing students, differences in moral sensitivity according to sample sub-group, and the relation between demographic characteristics of nursing students and moral sensitivity. A convenience sample of 299 nursing students from one university completed a questionnaire comprising questions about demographic information and the revised Moral Sensitivity Questionnaire. With the use of SPSS, non-parametric statistics, including logistic regression models, were used to investigate the relationship between demographic characteristics and moral sensitivity. Ethical considerations: The study followed the regulations according to the Swedish Ethical Review Act and was reviewed by the Ethics Committee of South-East Sweden. The findings showed that mean scores of nursing students' moral sensitivity were found in the middle to upper segment of the rating scale. Multivariate analysis showed that gender (odds ratio = 3.32), age (odds ratio = 2.09; 1.73), and parental status (odds ratio = 0.31) were of relevance to nursing students' moral sensitivity. Academic year was found to be unrelated to moral sensitivity. These demographic aspects should be considered when designing ethics education for nursing students. Future studies should continue to investigate moral sensitivity in nursing students, such as if and how various pedagogical strategies in ethics may contribute to moral sensitivity in nursing students.
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.
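The recursive parametric step in such a scheme can be illustrated with a basic recursive least-squares (RLS) update with a forgetting factor applied to a time-varying ARX model. This Python sketch is a simplification of the ARMAX formulation used in the paper; the model orders, forgetting factor and simulated drifting system are illustrative assumptions rather than values from the study.

import numpy as np

def rls_arx(y, u, na=2, nb=2, lam=0.98):
    """Track time-varying ARX parameters with recursive least squares and forgetting."""
    n = na + nb
    theta = np.zeros(n)                 # parameter estimates [a1..ana, b1..bnb]
    P = np.eye(n) * 1e3                 # estimate covariance
    history = []
    for t in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])  # regressor
        k = P @ phi / (lam + phi @ P @ phi)          # gain
        theta = theta + k * (y[t] - phi @ theta)     # parameter update
        P = (P - np.outer(k, phi @ P)) / lam         # covariance update with forgetting
        history.append(theta.copy())
    return np.array(history)

# Example: a second-order system whose first coefficient drifts slowly in time
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    a1 = -1.5 + 0.3 * t / 2000
    y[t] = -a1 * y[t - 1] - 0.7 * y[t - 2] + u[t - 1] + 0.01 * rng.standard_normal()
theta_t = rls_arx(y, u)        # rows trace the estimated parameters over time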
Meinel, Felix G.; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö.; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F.; Pfeiffer, Franz; Nikolaou, Konstantin
2013-01-01
Purpose To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Materials and Methods Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between emphysema and control group. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using transmission, dark-field and normalized scatter signal and correlated with histopathology. Results Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p<0.001). There was no difference in median dark-field signal intensities between both groups (0.66 vs. 0.66). Median normalized scatter was significantly lower in the emphysematous lungs compared to controls (4.9 vs. 10.8, p<0.001), and was the best parameter for differentiation of healthy vs. emphysematous lung tissue. In a per-pixel analysis, the area under the ROC curve (AUC) for the normalized scatter value was significantly higher than for transmission (0.86 vs. 0.78, p<0.001) and dark-field value (0.86 vs. 0.52, p<0.001) alone. Normalized scatter showed very high sensitivity for a wide range of specificity values (94% sensitivity at 75% specificity). Using the normalized scatter signal to display the regional distribution of emphysema provides color-coded parametric maps, which show the best correlation with histopathology. Conclusion In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections. PMID:23555692
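As a rough illustration of the per-pixel ROC analysis described above, the following Python sketch compares the area under the ROC curve for two candidate signals and reads off sensitivity at a fixed specificity. The pixel values are synthetic placeholders loosely patterned on the reported group means, not the study data.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Synthetic per-pixel values: 1 = emphysematous, 0 = healthy tissue
labels = rng.integers(0, 2, size=5000)
transmission = rng.normal(1.06 + 0.05 * labels, 0.04)     # higher in emphysema
norm_scatter = rng.normal(10.8 - 5.9 * labels, 2.5)       # lower in emphysema

auc_T = roc_auc_score(labels, transmission)                # larger score = more diseased
auc_S = roc_auc_score(labels, -norm_scatter)               # sign flipped so the same convention holds

fpr, tpr, _ = roc_curve(labels, -norm_scatter)             # sensitivity at 75% specificity
print(f"AUC transmission {auc_T:.2f}, AUC normalized scatter {auc_S:.2f}, "
      f"sensitivity at 75% specificity {np.interp(0.25, fpr, tpr):.2f}")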
Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1997-01-01
A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed that dynamically changes the surface of the airplane configuration as the input design variables change. The software is user friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.
Photon-phonon parametric oscillation induced by quadratic coupling in an optomechanical resonator
NASA Astrophysics Data System (ADS)
Zhang, Lin; Ji, Fengzhou; Zhang, Xu; Zhang, Weiping
2017-07-01
A direct photon-phonon parametric effect of quadratic coupling on the mean-field dynamics of an optomechanical resonator in the large-scale-movement regime is found and investigated. Under a weak pumping power, the mechanical resonator damps to a steady state with a nonlinear static response sensitively modified by the quadratic coupling. When the driving power increases beyond the static energy balance, the steady states lose their stabilities via Hopf bifurcations, and the resonator produces stable self-sustained oscillation (limit-cycle behavior) of discrete energies with step-like amplitudes due to the parametric effect of quadratic coupling, which can be understood roughly by the power balance between gain and loss on the resonator. A further increase in the pumping power can induce chaotic dynamics of the resonator via a typical period-doubling bifurcation route, but these can be stabilized by the parametric effect through an inversion-bifurcation process back to the limit-cycle states. The bifurcation-to-inverse-bifurcation transitions are numerically verified by the maximal Lyapunov exponents of the dynamics, which indicate an efficient way of suppressing the chaotic behavior of the optomechanical resonator by quadratic coupling. Furthermore, the parametric effect of quadratic coupling on the dynamic transitions of an optomechanical resonator can be conveniently detected or traced by the output power spectrum of the cavity field.
Emissions from ships in the northwestern United States.
Corbett, James J
2002-03-15
Recent inventory efforts have focused on developing nonroad inventories for emissions modeling and policy insights. Characterizing these inventories geographically and explicitly treating the uncertainties that result from limited emissions testing, incomplete activity and usage data, and other important input parameters currently pose the largest methodological challenges. This paper presents a commercial marine vessel (CMV) emissions inventory for Washington and Oregon using detailed statistics regarding fuel consumption, vessel movements, and cargo volumes for the Columbia and Snake River systems. The inventory estimates emissions for oxides of nitrogen (NOx), particulate matter (PM), and oxides of sulfur (SOx). This analysis estimates that annual NOx emissions from marine transportation in the Columbia and Snake River systems in Washington and Oregon equal 6900 t of NOx (as NO2) per year, 2.6 times greater than previous NOx inventories for this region. Statewide CMV NOx emissions are estimated to be 9800 t of NOx per year. By relying on a "bottom-up" fuel consumption model that includes vessel characteristics and transit information, the river system inventory may be more accurate than previous estimates. This inventory provides modelers with bounded parametric inputs for sensitivity analysis in pollution modeling. The ability to parametrically model the uncertainty in commercial marine vessel inventories also will help policy-makers determine whether better policy decisions can be enabled through further vessel testing and improved inventory resolution.
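The "bottom-up" structure of such an inventory can be sketched as a simple energy-based calculation in Python. The engine power, load factor, transit hours and emission factor below are hypothetical placeholders, and the sampling step only illustrates how bounded parametric inputs feed a sensitivity analysis; none of the numbers come from the study.

import numpy as np

def transit_nox_tonnes(engine_kw, load_factor, hours, ef_g_per_kwh):
    """Energy-based estimate for one transit: installed power x load x time x emission factor."""
    return engine_kw * load_factor * hours * ef_g_per_kwh / 1e6   # grams -> tonnes

# Hypothetical tug transit on the river system
print(transit_nox_tonnes(engine_kw=3000, load_factor=0.65, hours=30, ef_g_per_kwh=13.0))

# Treating uncertain inputs parametrically simply means sampling them within bounds
rng = np.random.default_rng(0)
samples = transit_nox_tonnes(
    engine_kw=3000,
    load_factor=rng.uniform(0.4, 0.8, 10_000),
    hours=30,
    ef_g_per_kwh=rng.normal(13.0, 2.0, 10_000),
)
print(np.percentile(samples, [5, 50, 95]))    # bounded parametric inputs for sensitivity analysis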
Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea
NASA Astrophysics Data System (ADS)
Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li
2017-09-01
Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state, namely an increase in sea surface temperature (SST) and loss of sea ice, and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
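A minimal sketch of evolutionary parameter calibration of the kind described above is given below in Python, using scipy's differential evolution as a stand-in for the genetic algorithm and a toy seasonal-bloom function as a stand-in for the biogeochemical DMS model. The observations, bounds and parameter names are illustrative assumptions only.

import numpy as np
from scipy.optimize import differential_evolution

obs_flux = np.array([1.2, 3.5, 8.1, 9.4, 4.0, 1.5])   # hypothetical monthly DMS flux

def dms_model(params, month):
    """Toy stand-in for the biogeochemical model: a seasonal Gaussian bloom."""
    peak, width, amp = params
    return amp * np.exp(-0.5 * ((month - peak) / width) ** 2)

def misfit(params):
    month = np.arange(1, 7)
    return np.sum((dms_model(params, month) - obs_flux) ** 2)

# Population-based global search over the parameter bounds, in the spirit of a GA
result = differential_evolution(misfit, bounds=[(1, 6), (0.5, 3), (1, 20)], seed=0)
print(result.x, result.fun)   # calibrated (peak, width, amplitude) and residual misfit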
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
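A crude Monte Carlo counterpart to the reliability analysis described above can be sketched from the closed-form Fick's-law chloride profile. The distributions and parameter values below are illustrative assumptions, not those of the study, and the first-order reliability method itself (Nataf transformation plus gradient search) is not reproduced here; Monte Carlo is used only as the verification-style check mentioned in the abstract.

import numpy as np
from scipy.special import erf
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000
# Illustrative random variables for the chloride-ingress limit state
Cs  = rng.lognormal(np.log(0.8), 0.2, N)      # surface chloride content (% binder weight)
D28 = rng.lognormal(np.log(1e-11), 0.3, N)    # 28-day diffusion coefficient (m^2/s)
age = rng.normal(0.3, 0.08, N)                # age factor
Ccr = rng.normal(0.45, 0.05, N)               # critical chloride content
cover = 0.045                                 # concrete cover (m)
t, t28 = 50 * 3.156e7, 28 * 86400             # 50 years and 28 days, in seconds

D_t = D28 * (t28 / t) ** age                                    # time-dependent diffusivity
C_xt = Cs * (1 - erf(cover / (2 * np.sqrt(D_t * t))))           # chloride at the rebar depth
pf = np.mean(C_xt > Ccr)                                        # probability of corrosion initiation
print(f"Pf ~ {pf:.3f}, reliability index beta ~ {-norm.ppf(pf):.2f}")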
Effect of Monovalent Ion Parameters on Molecular Dynamics Simulations of G-Quadruplexes.
Havrila, Marek; Stadlbauer, Petr; Islam, Barira; Otyepka, Michal; Šponer, Jiří
2017-08-08
G-quadruplexes (GQs) are key noncanonical DNA and RNA architectures stabilized by desolvated monovalent cations present in their central channels. We analyze extended atomistic molecular dynamics simulations (∼580 μs in total) of GQs with 11 monovalent cation parametrizations, assessing GQ overall structural stability, dynamics of internal cations, and distortions of the G-tetrad geometries. The majority of simulations were executed with the SPC/E water model; however, test simulations with TIP3P and OPC water models are also reported. The identity and parametrization of ions strongly affect behavior of a tetramolecular d[GGG]4 GQ, which is unstable with several ion parametrizations. The remaining studied RNA and DNA GQs are structurally stable, though the G-tetrad geometries are always deformed by bifurcated H-bonding in a parametrization-specific manner. Thus, basic 10-μs-scale simulations of fully folded GQs can be safely done with a number of cation parametrizations. However, there are parametrization-specific differences and basic force-field errors affecting the quantitative description of ion-tetrad interactions, which may significantly affect studies of the ion-binding processes and description of the GQ folding landscape. Our d[GGG]4 simulations indirectly suggest that such studies will also be sensitive to the water models. During exchanges with bulk water, the Na+ ions move inside the GQs in a concerted manner, while larger relocations of the K+ ions are typically separated. We suggest that the Joung-Cheatham SPC/E K+ parameters represent a safe choice in simulation studies of GQs, though variation of ion parameters can be used for specific simulation goals.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
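The role of the formal likelihood can be illustrated with a simplified Gaussian stand-in that keeps only the lag-1 decorrelation and heteroscedastic scaling; the actual BAIPU likelihood uses a skew exponential power density instead of the normal, and the series below are hypothetical.

import numpy as np
from scipy.stats import norm

def log_likelihood(y_obs, y_sim, phi, s0, s1):
    """Simplified error model: lag-1 autocorrelated, heteroscedastic Gaussian residuals."""
    e = y_obs - y_sim                       # raw residuals
    a = e[1:] - phi * e[:-1]                # AR(1)-decorrelated innovations
    sigma = s0 + s1 * y_sim[1:]             # error sd grows with simulated magnitude
    return np.sum(norm.logpdf(a, loc=0.0, scale=sigma))

# Hypothetical nitrate series: "observations" versus a biased model run
rng = np.random.default_rng(0)
y_sim = 2 + np.sin(np.linspace(0, 6, 200))
y_obs = y_sim * 1.1 + rng.normal(0, 0.2, 200)
print(log_likelihood(y_obs, y_sim, phi=0.5, s0=0.05, s1=0.1))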
Four photon parametric amplification. [in unbiased Josephson junction
NASA Technical Reports Server (NTRS)
Parrish, P. T.; Feldman, M. J.; Ohta, H.; Chiao, R. Y.
1974-01-01
An analysis is presented describing four-photon parametric amplification in an unbiased Josephson junction. Central to the theory is the model of the Josephson effect as a nonlinear inductance. Linear, small signal analysis is applied to the two-fluid model of the Josephson junction. The gain, gain-bandwidth product, high frequency limit, and effective noise temperature are calculated for a cavity reflection amplifier. The analysis is extended to multiple (series-connected) junctions and subharmonic pumping.
Impact of meteorology on air quality modeling over the Po valley in northern Italy
NASA Astrophysics Data System (ADS)
Pernigotti, D.; Georgieva, E.; Thunis, P.; Bessagnet, B.
2012-05-01
A series of sensitivity tests has been performed using both a mesoscale meteorological model (MM5) and a chemical transport model (CHIMERE) to better understand the reasons why all models underestimate particulate matter concentrations in the Po valley in winter. Different options are explored to nudge meteorological observations from regulatory networks into MM5 in order to improve model performance, especially during the low wind speed regimes frequently present in this area. The sensitivity of the CHIMERE modeled particulate matter concentrations to these different meteorological inputs is then evaluated for the January 2005 time period. A further analysis of the CHIMERE model results revealed the need to improve the parametrization of the in-cloud scavenging and vertical diffusivity schemes; such modifications are relevant especially when the model is applied under mist, fog and low stratus conditions, which frequently occur in the Po valley during winter. The sensitivity of modeled particulate matter concentrations to turbulence parameters, wind, temperature and cloud liquid water content in one of the most polluted and complex areas in Europe is finally discussed.
Some Advances in Downscaling Probabilistic Climate Forecasts for Agricultural Decision Support
NASA Astrophysics Data System (ADS)
Han, E.; Ines, A.
2015-12-01
Seasonal climate forecasts, commonly provided in tercile-probabilities format (below-, near- and above-normal), need to be translated into more meaningful information for decision support of practitioners in agriculture. In this paper, we will present two novel approaches to temporally downscale probabilistic seasonal climate forecasts: one non-parametric and another parametric method. First, the non-parametric downscaling approach called FResampler1 uses the concept of 'conditional block sampling' of weather data to create daily weather realizations of a tercile-based seasonal climate forecast. FResampler1 randomly draws time series of daily weather parameters (e.g., rainfall, maximum and minimum temperature and solar radiation) from historical records, for the season of interest, from years that belong to a certain rainfall tercile category (e.g., below-, near- or above-normal). In this way, FResampler1 preserves the covariance between rainfall and other weather parameters, as if conditionally sampling maximum and minimum temperature and solar radiation according to whether a day is wet or dry. The second approach, called predictWTD, is a parametric method based on a conditional stochastic weather generator. The tercile-based seasonal climate forecast is converted into a theoretical forecast cumulative probability curve. Then the deviate at each percentile is converted into rainfall amount, frequency or intensity to downscale the 'full' distribution of probabilistic seasonal climate forecasts. Those seasonal deviates are then disaggregated on a monthly basis and used to constrain the downscaling of forecast realizations at different percentile values of the theoretical forecast curve. In addition to the theoretical basis of the approaches, we will discuss their sensitivity to the length of the data record and the sample size. Their potential applications for managing climate-related risks in agriculture will also be shown through a couple of case studies based on actual seasonal climate forecasts for rice cropping in the Philippines and maize cropping in India and Kenya.
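A minimal sketch of the conditional block sampling idea behind FResampler1 is given below, assuming a hypothetical daily-weather table and year-to-tercile mapping; the actual FResampler1 implementation is not reproduced here, and all names and data are illustrative.

import numpy as np
import pandas as pd

def fresample(daily, year_tercile, forecast_probs, n_realizations, season_doys, rng):
    """Draw whole-season daily-weather blocks from years consistent with the forecast.
    daily: DataFrame with at least [year, doy] plus weather columns (rain, tmax, ...).
    year_tercile: dict year -> 'below'/'near'/'above'; forecast_probs: dict of tercile probabilities."""
    cats = list(forecast_probs)
    realizations = []
    for _ in range(n_realizations):
        cat = rng.choice(cats, p=[forecast_probs[c] for c in cats])          # pick a tercile category
        year = rng.choice([y for y, c in year_tercile.items() if c == cat])  # pick a year in that category
        block = daily[(daily.year == year) & daily.doy.isin(season_doys)]    # whole-season block
        realizations.append(block)        # covariance among weather variables is preserved
    return realizations

# Tiny hypothetical record: 10 years of daily rainfall only
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1990, 2000), 365)
daily = pd.DataFrame({"year": years, "doy": np.tile(np.arange(1, 366), 10),
                      "rain": rng.gamma(0.5, 4.0, years.size)})
tercile = dict(zip(range(1990, 2000), ["below", "near", "above"] * 4))
reals = fresample(daily, tercile, {"below": 0.2, "near": 0.3, "above": 0.5}, 3, range(150, 240), rng)
print([len(r) for r in reals])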
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, driven by their potential to provide more information on human pathophysiology and by the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
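The core of such a PCA-based shape model can be sketched in a few lines of Python; the landmark matrix below is random placeholder data standing in for aligned, corresponding foot surfaces, and the number of subjects and landmarks is arbitrary.

import numpy as np

rng = np.random.default_rng(0)
# Rows = subjects, columns = flattened (x, y, z) landmark coordinates of aligned surfaces
shapes = rng.normal(size=(20, 3 * 500))

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
explained = s**2 / np.sum(s**2)          # fraction of shape variance captured by each mode

# A new parametric foot: the mean shape perturbed along the first three modes
weights = np.array([2.0, -1.0, 0.5]) * s[:3] / np.sqrt(len(shapes) - 1)
new_shape = mean_shape + weights @ Vt[:3]
print(explained[:3], new_shape.shape)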
Methane Measurements from Space: Technical Challenges and Solutions
NASA Technical Reports Server (NTRS)
Riris, Haris; Numata, Kenji; Wu, Stewart; Gonzalez, Brayler; Rodriguez, Michael; Kawa, Stephan; Mao, Jianping
2017-01-01
We report on an airborne demonstration of atmospheric methane (CH4) measurements with an Integrated Path Differential Absorption (IPDA) lidar using an optical parametric oscillator (OPO) and optical parametric amplifier (OPA) laser transmitter and a sensitive avalanche photo detector. The lidar measures the CH4 absorption at multiple, discrete wavelengths around 1650.9 nm. In September 2015, the instrument was deployed on NASA's DC-8 airborne laboratory and measured atmospheric methane over a wide range of topography and weather conditions from altitudes of 3 km to 13 km. In this paper, we will review the results from our flights, and identify areas of improvement.
Optimal Operation of a Josephson Parametric Amplifier for Vacuum Squeezing
NASA Astrophysics Data System (ADS)
Malnou, M.; Palken, D. A.; Vale, Leila R.; Hilton, Gene C.; Lehnert, K. W.
2018-04-01
A Josephson parametric amplifier (JPA) can create squeezed states of microwave light, lowering the noise associated with certain quantum measurements. We experimentally study how the JPA's pump influences the phase-sensitive amplification and deamplification of a coherent tone's amplitude when that amplitude is commensurate with vacuum fluctuations. We predict and demonstrate that, by operating the JPA with a single current pump whose power is greater than the value that maximizes gain, the amplifier distortion is reduced and, consequently, squeezing is improved. Optimizing the singly pumped JPA's operation in this fashion, we directly observe 3.87 ±0.03 dB of vacuum squeezing over a bandwidth of 30 MHz.
Methane measurements from space: technical challenges and solutions
NASA Astrophysics Data System (ADS)
Riris, Haris; Numata, Kenji; Wu, Stewart; Gonzalez, Brayler; Rodriguez, Michael; Kawa, Stephan; Mao, Jianping
2017-05-01
We report on an airborne demonstration of atmospheric methane (CH4) measurements with an Integrated Path Differential Absorption (IPDA) lidar using an optical parametric oscillator (OPO) and optical parametric amplifier (OPA) laser transmitter and a sensitive avalanche photo detector. The lidar measures the CH4 absorption at multiple, discrete wavelengths around 1650.9 nm. In September 2015, the instrument was deployed on NASA's DC-8 airborne laboratory and measured atmospheric methane over a wide range of topography and weather conditions from altitudes of 3 km to 13 km. In this paper, we will review the results from our flights, and identify areas of improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hentschke, Clemens M., E-mail: clemens.hentschke@gmail.com; Tönnies, Klaus D.; Beuing, Oliver
Purpose: The early detection of cerebral aneurysms plays a major role in preventing subarachnoid hemorrhage. The authors present a system to automatically detect cerebral aneurysms in multimodal 3D angiographic data sets. The authors' system is parametrizable for contrast-enhanced magnetic resonance angiography (CE-MRA), time-of-flight magnetic resonance angiography (TOF-MRA), and computed tomography angiography (CTA). Methods: Initial volumes of interest are found by applying a multiscale sphere-enhancing filter. Several features are combined in a linear discriminant function (LDF) to distinguish between true aneurysms and false positives. The features include shape information, spatial information, and probability information. The LDF can either be parametrized by domain experts or automatically by training. Vessel segmentation is avoided as it could heavily influence the detection algorithm. Results: The authors tested their method with 151 clinical angiographic data sets containing 112 aneurysms. The authors reach a sensitivity of 95% with CE-MRA data sets at an average false-positive rate per data set (FP_DS) of 8.2. For TOF-MRA, they achieve 95% sensitivity at 11.3 FP_DS. For CTA, they reach a sensitivity of 95% at 22.8 FP_DS. For all modalities, the expert parametrization led to similar or better results than the trained parametrization, eliminating the need for training. 93% of aneurysms that were smaller than 5 mm were found. The authors also showed that their algorithm is capable of detecting aneurysms that were previously overlooked by radiologists. Conclusions: The authors present an automatic system to detect cerebral aneurysms in multimodal angiographic data sets. The system proved to be a suitable computer-aided detection tool to help radiologists find cerebral aneurysms.
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
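A minimal sketch of the univariate signed-rank tests with false-discovery-rate control could look like the following; hypothetical biomarker data stand in for the gingival crevicular fluid measurements, and the multivariate signed-rank tests on area-under-the-curve summaries are not reproduced here.

import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_subjects, n_biomarkers = 22, 31
baseline = rng.lognormal(0.0, 1.0, (n_subjects, n_biomarkers))
induced = baseline * rng.lognormal(0.1, 0.5, (n_subjects, n_biomarkers))  # hypothetical gingivitis shift

# One paired signed-rank test per biomarker
pvals = np.array([wilcoxon(induced[:, j], baseline[:, j]).pvalue for j in range(n_biomarkers)])

# Benjamini-Hochberg control of the false discovery rate across the 31 tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(int(reject.sum()), "biomarkers flagged after FDR control")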
NASA Technical Reports Server (NTRS)
1973-01-01
Parametric studies and subsystem comparisons for the orbital radar mapping mission to planet Venus are presented. Launch vehicle requirements and primary orbiter propulsion system requirements are evaluated. The systems parametric analysis indicated that orbit size and orientation interrelated with almost all of the principal spacecraft systems and influenced significantly the definition of orbit insertion propulsion requirements, weight in orbit capability, radar system design, and mapping strategy.
Parametric Analysis and Safety Concepts of CWR Track Buckling.
DOT National Transportation Integrated Search
1993-12-01
The report presents a comprehensive study of continuous welded rail (CWR) track buckling strength as influenced by the range of all key parameters such as the lateral, torsional and longitudinal resistance, vehicle loads, etc. The parametric study pr...
First status report on regional ground-water flow modeling for the Paradox Basin, Utah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, R.W.
1984-05-01
Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined-scale model which incorporated only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.
Space Shuttle Main Engine (SSME) LOX turbopump pump-end bearing analysis
NASA Technical Reports Server (NTRS)
1986-01-01
A simulation of the shaft/bearing system of the Space Shuttle Main Engine Liquid Oxygen turbopump was developed. The simulation model allows the thermal and mechanical characteristics to interact as a realistic simulation of the bearing operating characteristics. The model accounts for single and two phase coolant conditions, and includes the heat generation from bearing friction and fluid stirring. Using the simulation model, parametric analyses were performed on the 45 mm pump-end bearings to investigate the sensitivity of bearing characteristics to contact friction, axial preload, coolant flow rate, coolant inlet temperature and quality, heat transfer coefficients, outer race clearance and misalignment, and the effects of thermally isolating the outer race from the isolator.
Fracture-Based Mesh Size Requirements for Matrix Cracks in Continuum Damage Mechanics Models
NASA Technical Reports Server (NTRS)
Leone, Frank A.; Davila, Carlos G.; Mabson, Gerald E.; Ramnath, Madhavadas; Hyder, Imran
2017-01-01
This paper evaluates the ability of progressive damage analysis (PDA) finite element (FE) models to predict transverse matrix cracks in unidirectional composites. The results of the analyses are compared to closed-form linear elastic fracture mechanics (LEFM) solutions. Matrix cracks in fiber-reinforced composite materials subjected to mode I and mode II loading are studied using continuum damage mechanics and zero-thickness cohesive zone modeling approaches. The FE models used in this study are built parametrically so as to investigate several model input variables and the limits associated with matching the upper-bound LEFM solutions. Specifically, the sensitivity of the PDA FE model results to changes in strength and element size are investigated.
NASA Astrophysics Data System (ADS)
Lee, J.; Bong, H. J.; Ha, J.; Choi, J.; Barlat, F.; Lee, M.-G.
2018-05-01
In this study, a numerical sensitivity analysis of the springback prediction was performed using advanced strain hardening models. In particular, the springback in U-draw bending for dual-phase 780 steel sheets was investigated while focusing on the effect of the initial yield stress determined from the cyclic loading tests. The anisotropic hardening models could reproduce the flow stress behavior under the non-proportional loading condition for the considered parametric cases. However, various identification schemes for determining the yield stress of the anisotropic hardening models significantly influenced the springback prediction. The deviations from the measured springback varied from 4% to 13.5% depending on the identification method.
NASA Technical Reports Server (NTRS)
Jones, E.; Anliker, M.; Chang, I.
1971-01-01
Investigation of the effects of blood viscosity on dissipation as well as dispersion of small waves in arteries and veins by means of a parametric study. A linearized analysis of axisymmetric waves in a cylindrical membrane that contains a viscous fluid indicates that there are two families of waves: a family of slow waves and one of fast waves. The faster waves are shown to be more sensitive to variations in the elastic properties of the medium surrounding the blood vessels and at high values of the frequency parameter alpha. At low values of alpha the effects of viscosity on attenuation are reversed.
2011-01-01
Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptrons Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forest ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
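A cross-validated comparison of this kind can be sketched with scikit-learn as follows; the synthetic features stand in for the ten neuropsychological test scores, sensitivity corresponds to the recall scorer, and specificity (which has no built-in scorer) is omitted from this sketch.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the 10 neuropsychological test scores and progression labels
X, y = make_classification(n_samples=200, n_features=10, n_informative=6, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    cv = cross_validate(clf, X, y, cv=5, scoring=["accuracy", "roc_auc", "recall"])
    print(name, {k: round(v.mean(), 2) for k, v in cv.items() if k.startswith("test_")})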
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.
Milenković, Jana; Hertl, Kristijana; Košir, Andrej; Zibert, Janez; Tasič, Jurij Franc
2013-06-01
The early detection of breast cancer is one of the most important factors in determining the prognosis for women with malignant tumours. Dynamic contrast-enhanced magnetic-resonance imaging (DCE-MRI) is an important imaging modality for detecting and interpreting the different breast lesions from a time sequence of images and has proved to be a very sensitive modality for breast-cancer diagnosis. However, DCE-MRI exhibits only a moderate specificity, thus leading to a high rate of false positives, resulting in unnecessary biopsies that are stressful and physically painful for the patient and lead to an increase in the cost of treatment. There is a strong medical need for a DCE-MRI computer-aided diagnosis tool that would offer a reliable support to the physician's decision providing a high level of sensitivity and specificity. In our study we investigated the possibility of increasing differentiation between the malignant and the benign lesions with respect to the spatial variation of the temporal enhancements of three parametric maps, i.e., the initial enhancement (IE) map, the post-initial enhancement (PIE) map and the signal enhancement ratio (SER) map, by introducing additional methods along with the grey-level co-occurrence matrix, a second-order statistical method already applied for quantifying the spatiotemporal variations. We introduced the grey-level run-length matrix and the grey-level difference matrix, representing two additional second-order statistical methods, and the circular Gabor filter as a frequency-domain-based method. Each of the additional methods is for the first time applied to the DCE-MRI data to differentiate between the malignant and the benign breast lesions. We applied the least-square minimum-distance (LSMD), logistic regression and least-squares support vector machine (LS-SVM) classifiers on a total of 115 (78 malignant and 37 benign) breast DCE-MRI cases. The performances were evaluated using ten experiments of a ten-fold cross-validation. Our experimental analysis revealed the PIE map, together with the feature subset in which the discriminating ability of the co-occurrence features was increased by adding the newly introduced features, to be the most significant for differentiation between the malignant and the benign lesions. This diagnostic test (the aforementioned combination of parametric map and feature subset) achieved a sensitivity of 0.9193, statistically significantly higher than the other diagnostic tests after ten experiments of a ten-fold cross-validation, and gave a statistically significantly higher specificity of 0.7819 at the fixed 95% sensitivity in the receiver operating characteristic (ROC) curve analysis. Combining the information from all the three parametric maps significantly increased the area under the ROC curve (AUC) of the aforementioned diagnostic test for the LSMD and logistic regression; however, not for the LS-SVM. The LSMD classifier yielded the highest area under the ROC curve when using the combined information, increasing the AUC from 0.9651 to 0.9755. Introducing new features to those of the grey-level co-occurrence matrix significantly increased the differentiation between the malignant and the benign breast lesions, thus resulting in a high sensitivity and improved specificity. Copyright © 2013 Elsevier B.V. All rights reserved.
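The grey-level co-occurrence features mentioned above can be computed with scikit-image as sketched below (the functions are named greycomatrix/greycoprops in older scikit-image releases); the region of interest is a random placeholder rather than a real parametric-map ROI.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)   # placeholder quantized ROI (e.g. a PIE map)

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2], levels=64,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ["contrast", "homogeneity", "correlation", "energy"]}
print(features)   # one row of the texture feature vector passed to the classifiers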
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
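The indirect route, a pixel-wise Patlak fit on a reconstructed time-activity curve, can be sketched as a simple linear regression; the frame times, input function and kinetic values below are illustrative, and the direct (sinogram-domain) reconstruction is not reproduced here.

import numpy as np

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral of y(x), starting at 0."""
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))])

def patlak_fit(ct, cp, t, t_star_idx):
    """Indirect Patlak estimate: slope Ki and intercept V from one time-activity curve."""
    x = cumtrapz0(cp, t) / cp          # normalized plasma integral ("Patlak time")
    y = ct / cp                        # normalized tissue activity
    Ki, V = np.polyfit(x[t_star_idx:], y[t_star_idx:], 1)
    return Ki, V

# Hypothetical frame times (min), plasma input, and a tissue curve built with Ki = 0.05, V = 0.4
t = np.linspace(1, 60, 20)
cp = 10 * np.exp(-0.1 * t) + 1.0
ct = 0.05 * cumtrapz0(cp, t) + 0.4 * cp
print(patlak_fit(ct, cp, t, t_star_idx=8))   # recovers (0.05, 0.4)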
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using a receiver operating characteristic (ROC) curve. Geometric accuracy evaluation showed agreement, with an area under the curve value of 0.87, between the constructed mesh and raw MRA data sets. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mace, Andy; Rudolph, David L.; Kachanoski , R. Gary
1998-01-01
The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ≈ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
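The retention and conductivity models compared above are standard enough to sketch directly. The following Python snippet evaluates the van Genuchten curve with the Mualem restriction m = 1 - 1/n and a Mualem-van Genuchten relative conductivity scaled to one measured unsaturated point, as the abstract recommends; the parameter values are illustrative rather than fitted to these cores.

import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve with the Mualem restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)      # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def mualem_K(theta, theta_r, theta_s, n, K_match, theta_match):
    """Mualem-van Genuchten conductivity, anchored to one measured (theta, K) point."""
    m = 1.0 - 1.0 / n
    Se = (theta - theta_r) / (theta_s - theta_r)
    Kr = np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
    Se_m = (theta_match - theta_r) / (theta_s - theta_r)
    Kr_m = np.sqrt(Se_m) * (1.0 - (1.0 - Se_m ** (1.0 / m)) ** m) ** 2
    return K_match * Kr / Kr_m

# Illustrative coarse-sand parameters (not fitted to the cores in the study)
h = -np.logspace(-1, 3, 50)                # pressure head, cm
theta = vg_theta(h, theta_r=0.03, theta_s=0.38, alpha=0.1, n=2.5)
K = mualem_K(theta, 0.03, 0.38, 2.5, K_match=1e-3, theta_match=0.2)   # cm/s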
Contamination of grazing incidence EUV mirrors - An assessment
NASA Technical Reports Server (NTRS)
Osantowski, John F.; Fleetwood, C. F.
1988-01-01
Contamination assessment for space optical systems requires an understanding of the sensitivity of component performance, e.g. mirror reflectance, to materials deposited on the mirror surface. In a previous study, the sensitivity of typical normal-incidence mirror coatings to surface deposits of generic hydrocarbons was reported. Recent activity in the development of grazing-incidence telescopes for extreme ultraviolet space astronomy has stimulated the need for a similar assessment in the spectral region extending from approximately 100 A to 1000 A. The model used for analysis treats the contamination layer as a continuous thin film deposited on the mirror surface. The mirror surfaces selected for this study are opaque vacuum-deposited gold and uncoated, polished Zerodur. Scatter caused by film irregularities or particulates is not included in this assessment. Parametric evaluations at 100, 500, and 1000 A determine the sensitivity of mirror reflectance to a range of optical constants selected for the generic contaminants. This sensitivity analysis, combined with the limited amount of optical data in the EUV for hydrocarbons, is used to select representative optical constants for the three wavelength regions. Reflectance versus contamination-layer thickness curves are then calculated and used to determine critical thickness limits based on allowable reflectance change. Initial observations indicate that the thickness limits depend strongly on whether the real part of the complex refractive index of the contaminant film is less than 1.0. Preliminary laboratory measurements of samples contaminated with some commonly encountered hydrocarbons confirm trends indicated in the analytical studies.
Cieslak, Wendy; Pap, Kathleen; Bunch, Dustin R; Reineks, Edmunds; Jackson, Raymond; Steinle, Roxanne; Wang, Sihe
2013-02-01
Chromium (Cr), a trace metal element, is implicated in diabetes and cardiovascular disease. A hypochromic state has been associated with poor blood glucose control and unfavorable lipid metabolism. Sensitive and accurate measurement of blood chromium is very important to assess the chromium nutritional status. However, interferents in biological matrices and contamination make the sensitive analysis challenging. The primary goal of this study was to develop a highly sensitive method for quantification of total Cr in whole blood by inductively coupled plasma mass spectrometry (ICP-MS) and to validate the reference interval in a local healthy population. This method was developed on an ICP-MS with a collision/reaction cell. Interference was minimized using both kinetic energy discrimination between the quadrupole and hexapole and a selective collision gas (helium). Reference interval was validated in whole blood samples (n=51) collected in trace element free EDTA tubes from healthy adults (12 males, 39 females), aged 19-64 years (38.8±12.6), after a minimum of 8 h fasting. Blood samples were aliquoted into cryogenic vials and stored at -70 °C until analysis. The assay linearity was 3.42 to 1446.59 nmol/L with an accuracy of 87.7 to 99.8%. The high sensitivity was achieved by minimization of interference through selective kinetic energy discrimination and selective collision using helium. The reference interval for total Cr using a non-parametric method was verified to be 3.92 to 7.48 nmol/L. This validated ICP-MS methodology is highly sensitive and selective for measuring total Cr in whole blood. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
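The non-parametric reference-interval step reduces to taking central percentiles of the healthy-subject results, as in the following sketch with hypothetical values; the study's n = 51 measurements are not reproduced here.

import numpy as np

# Hypothetical whole-blood Cr results (nmol/L) from healthy fasting adults
cr = np.array([4.1, 5.0, 3.9, 6.2, 4.4, 7.1, 5.5, 4.8, 6.8, 5.2,
               4.0, 6.0, 5.7, 4.6, 7.3, 5.1, 4.3, 6.5, 5.9, 4.9])

# Non-parametric reference interval: the central 95% of the observed distribution
lower, upper = np.percentile(cr, [2.5, 97.5])
print(f"reference interval {lower:.2f}-{upper:.2f} nmol/L")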
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
Cox proportional hazard model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates to analyze survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, significant variables in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and areas under ROC curves were used to evaluate the relative goodness of fit and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using Kaplan-Meier, the survival time of neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis of the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the "log-normal" model, with the lowest AIC, was the best-fitting model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitting model.
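An AIC comparison between parametric survival fits can be sketched directly from the right-censored log-likelihood; the log-normal fit below uses simulated follow-up times, so the numbers are illustrative, and the covariate (accelerated failure time) structure of the actual models is omitted.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lognormal_negloglik(params, t, event):
    """Right-censored log-normal negative log-likelihood: pdf for events, survival for censored."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(t) - mu) / sigma
    ll_event = norm.logpdf(z) - np.log(sigma * t)        # log f(t)
    ll_cens = norm.logsf(z)                               # log S(t)
    return -np.sum(np.where(event == 1, ll_event, ll_cens))

# Hypothetical follow-up times (months) and neuropathy indicators (1 = event, 0 = censored)
rng = np.random.default_rng(0)
t_true = rng.lognormal(mean=4.3, sigma=0.6, size=371)
censor = rng.uniform(20, 120, size=371)
t = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)

fit = minimize(lognormal_negloglik, x0=[4.0, 0.0], args=(t, event))
aic = 2 * fit.fun + 2 * len(fit.x)
print(fit.x, aic)    # compare against the AICs of Weibull, exponential, Cox, etc.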
NASA Astrophysics Data System (ADS)
Ravanfar, Mohammadreza; Pfeiffer, Ferris M.; Bozynski, Chantelle C.; Wang, Yuanbo; Yao, Gang
2017-12-01
Collagen degeneration is an important pathological feature of osteoarthritis. The purpose of this study is to investigate whether the polarization-sensitive optical coherence tomography (PSOCT)-based optical polarization tractography (OPT) can be useful in imaging collagen structural changes in human osteoarthritic cartilage samples. OPT eliminated the banding artifacts in conventional PSOCT by calculating the depth-resolved local birefringence and fiber orientation. A close comparison between OPT and PSOCT showed that OPT provided improved visualization and characterization of the zonal structure in human cartilage. Experimental results obtained in this study also underlined the importance of knowing the collagen fiber orientation in conventional polarized light microscopy assessment. In addition, parametric OPT imaging was achieved by quantifying the surface roughness, birefringence, and fiber dispersion in the superficial zone of the cartilage. These quantitative parametric images provided complementary information on the structural changes in cartilage, which can be useful for a comprehensive evaluation of collagen damage in osteoarthritic cartilage.
NASA Astrophysics Data System (ADS)
Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.
2018-01-01
Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.
Sensitivity of a Wave Structure to Initial Conditions
NASA Technical Reports Server (NTRS)
Duval, Walter M. B.; Duval, Walter M. B. (Technical Monitor)
2000-01-01
Microgravity experiments aimed at quantifying the effects of g-jitter, applied as controlled sinusoidal forcing transmitted across the interface between two miscible liquids, have shown the evolution of a quasi-stationary four-mode wave structure oriented vertically. The sensitivity of the wave structure to phase angle variation is investigated computationally. We show that a slight variation of the phase angle is sufficient to cause a bifurcation to a two-mode structure. The dependence of the wave structure on phase angle is attributed to sensitivity to initial conditions caused by the strong nonlinearity of the coupled field equations over the parametric space of interest.
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Parametric models of reflectance spectra for dyed fabrics
NASA Astrophysics Data System (ADS)
Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph
2016-05-01
This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study provide validation of the constructed parametric models, within reasonable error tolerances for practical applications, including NIR spectral characteristics in camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics, and potentially to mixtures of dyes.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Numerical prediction of 3-D ejector flows
NASA Technical Reports Server (NTRS)
Roberts, D. W.; Paynter, G. C.
1979-01-01
The use of parametric flow analysis, rather than parametric scale testing, to support the design of an ejector system offers a number of potential advantages. The application of available 3-D flow analyses to the design of ejectors can be subdivided into several key elements. These are numerics, turbulence modeling, data handling and display, and testing in support of analysis development. Experimental and predicted jet exhaust flows for the Boeing 727 aircraft are examined.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, resulting in an uncertain ensemble that combines non-parametric uncertainties with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order terms of the Taylor series, in which all the mixed second-order terms are neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
NASA Astrophysics Data System (ADS)
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to assess the fixed covariate of right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. Bias, mean bias and coverage probability were used as the statistical measures. Different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. The final model from the right-censored simulation was then compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters, together with different sample sizes, help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival of lung cancer patients in Malaysia.
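The study's own code was written in R; the following is a minimal Python sketch of the same kind of experiment, with illustrative (hypothetical) shape, scale, censoring and replicate settings: Weibull event times with one fixed binary covariate are generated, administratively right-censored, refitted with a Weibull AFT model, and the bias of the covariate effect is tracked across the four sample sizes.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
true_beta, shape = 0.5, 1.5            # illustrative covariate effect and Weibull shape

def simulate(n, censor_time=3.0):
    x = rng.binomial(1, 0.5, size=n)                  # fixed binary covariate
    scale = np.exp(1.0 + true_beta * x)               # AFT parameterisation of the scale
    t = scale * rng.weibull(shape, size=n)
    observed = np.minimum(t, censor_time)             # administrative right censoring
    event = (t <= censor_time).astype(int)
    return pd.DataFrame({"time": observed, "event": event, "x": x})

for n in (50, 100, 150, 200):
    estimates = []
    for _ in range(100):                               # replicates per sample size
        fit = WeibullAFTFitter().fit(simulate(n), duration_col="time", event_col="event")
        estimates.append(fit.params_.loc[("lambda_", "x")])
    print(f"n={n:4d}  mean bias of covariate effect: {np.mean(estimates) - true_beta:+.3f}")
```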
Reusable Launch Vehicle Tank/Intertank Sizing Trade Study
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Myers, David E.; Martin, Carl J.
2000-01-01
A tank and intertank sizing tool that includes effects of major design drivers, and which allows parametric studies to be performed, has been developed and calibrated against independent representative results. Although additional design features, such as bulkheads and field joints, are not currently included in the process, the improved level of fidelity has allowed parametric studies to be performed which have resulted in understanding of key tank and intertank design drivers, design sensitivities, and definition of preferred design spaces. The sizing results demonstrated that there were many interactions between the configuration parameters of internal/external payload, vehicle fineness ratio (half body angle), fuel arrangement (LOX-forward/LOX-aft), number of tanks, and tank shape/arrangement (number of lobes).
Hot-film wall shear instrumentation for boundary layer transition research
NASA Technical Reports Server (NTRS)
Schneider, Steven P.
1994-01-01
Measurements of the performance of hot-film wall-shear sensors were performed to aid development of improved sensors. The effect of film size and substrate properties on the sensor performance was quantified through parametric studies carried out both electronically and in a shock tube. The results show that sensor frequency response increases with decreasing sensor size, while at the same time sensitivity decreases. Substrate effects were also studied, through parametric variation of thermal conductivity and heat capacity. Early studies used complex dual-layer substrates, while later studies were designed for both single-layer and dual-layer substrates. Sensor failures and funding limitations have precluded completion of the substrate thermal-property tests.
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
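One of the four SA approaches listed, standardized regression coefficients, is easy to sketch. The snippet below standardizes inputs and output and ranks parameters by the magnitude of their coefficients; the ten inputs and the response are random placeholders standing in for the CLM parameters and the runoff or latent-heat metrics, not actual model output.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
names = [f"hydro_param_{i}" for i in range(10)]        # stand-ins for the 10 CLM parameters
X = rng.uniform(size=(500, 10))                        # sampled parameter sets
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=500)   # placeholder response metric

# Standardize inputs and output so regression coefficients become SRCs.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src = LinearRegression().fit(Xs, ys).coef_

for name, c in sorted(zip(names, src), key=lambda item: -abs(item[1])):
    print(f"{name}: SRC = {c:+.2f}")
```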
Near-term hybrid vehicle program, phase 1. Appendix D: Sensitivity analysis report
NASA Technical Reports Server (NTRS)
1979-01-01
Parametric analyses, using a hybrid vehicle synthesis and economics program (HYVELD), are described, investigating the sensitivity of hybrid vehicle cost, fuel usage, utility, and marketability to changes in travel statistics, energy costs, vehicle lifetime and maintenance, owner use patterns, internal combustion engine (ICE) reference vehicle fuel economy, and drive-line component costs and type. The lowest initial cost of the hybrid vehicle would be $1200 to $1500 higher than that of the conventional vehicle. For nominal energy costs ($1.00/gal for gasoline and 4.2 cents/kWh for electricity), the ownership cost of the hybrid vehicle is projected to be 0.5 to 1.0 cents/mi less than that of the conventional ICE vehicle. To attain this ownership cost differential, the lifetime of the hybrid vehicle must be extended to 12 years and its maintenance cost reduced by 25 percent compared with the conventional vehicle. The ownership cost advantage of the hybrid vehicle increases rapidly as the price of fuel increases from $1 to $2/gal.
Chaotic map clustering algorithm for EEG analysis
NASA Astrophysics Data System (ADS)
Bellotti, R.; De Carlo, F.; Stramaglia, S.
2004-03-01
The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with that obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase, performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, the chaotic map clustering gives natural evidence of the pathological class, without any training or supervision, thus providing a new efficient methodology for the recognition of patterns affected by Huntington's disease.
Nuclear Thermal Propulsion Mars Mission Systems Analysis and Requirements Definition
NASA Technical Reports Server (NTRS)
Mulqueen, Jack; Chiroux, Robert C.; Thomas, Dan; Crane, Tracie
2007-01-01
This paper describes the Mars transportation vehicle design concepts developed by the Marshall Space Flight Center (MSFC) Advanced Concepts Office. These vehicle design concepts provide an indication of the most demanding and least demanding potential requirements for nuclear thermal propulsion systems for human Mars exploration missions from years 2025 to 2035. Vehicle concept options vary from large "all-up" vehicle configurations that would transport all of the elements for a Mars mission on one vehicle, to "split" mission vehicle configurations that would consist of separate smaller vehicles that would transport cargo elements and human crew elements to Mars separately. Parametric trades and sensitivity studies show NTP stage and engine design options that provide the best balanced set of metrics based on safety, reliability, performance, cost and mission objectives. Trade studies include the sensitivity of vehicle performance to nuclear engine characteristics such as thrust, specific impulse and nuclear reactor type. The associated system requirements are aligned with the NASA Exploration Systems Mission Directorate (ESMD) Reference Mars mission as described in the Exploration Systems Architecture Study (ESAS) report. The focused trade studies include a detailed analysis of nuclear engine radiation shield requirements for human missions and analysis of nuclear thermal engine design options for the ESAS reference mission.
Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung
2014-10-01
Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping that produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly-used compartment models and major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.
An appraisal of statistical procedures used in derivation of reference intervals.
Ichihara, Kiyoshi; Boyd, James C
2010-11-01
When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation has often been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
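The two computation routes discussed above can be sketched for a single analyte as follows, using synthetic right-skewed values rather than multicenter data: a parametric limit obtained by Box-Cox transformation followed by mean ± 1.96 SD on the transformed scale, and a non-parametric limit read directly from the 2.5th and 97.5th percentiles. The shifted-origin (two-parameter) refinement mentioned in the text is not implemented in this one-parameter sketch.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.4, size=400)   # synthetic skewed reference values

# Parametric route: transform toward normality, take mean +/- 1.96 SD,
# then back-transform the limits to the original measurement scale.
z, lam = stats.boxcox(x)
limits = np.array([z.mean() - 1.96 * z.std(), z.mean() + 1.96 * z.std()])
lower_p, upper_p = inv_boxcox(limits, lam)

# Non-parametric route: sort and read off the 2.5th and 97.5th percentiles.
lower_np, upper_np = np.percentile(x, [2.5, 97.5])

print(f"parametric RI:     [{lower_p:.2f}, {upper_p:.2f}]")
print(f"non-parametric RI: [{lower_np:.2f}, {upper_np:.2f}]")
```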
Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J
2013-08-01
Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
Yadage and Packtivity - analysis preservation using parametrized workflows
NASA Astrophysics Data System (ADS)
Cranmer, Kyle; Heinrich, Lukas
2017-10-01
Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of the target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation method with singular-value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.
Vitikainen, Kirsi; Street, Andrew; Linna, Miika
2009-02-01
Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results due to these factors. Comparison is made between activities and episode output measures, and two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as an analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but more so to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because these better capture the patient pathway, while FullDRGs are preferred to Classic DRGs particularly because of the better description of outpatient output in the former grouping system. Attention should be paid to reducing the extent of scale inefficiency in Finland.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
Parametric analysis of synthetic aperture radar data acquired over truck garden vegetation
NASA Technical Reports Server (NTRS)
Wu, S. T.
1984-01-01
An airborne X-band SAR acquired multipolarization and multiflight pass SAR images over a truck garden vegetation area. Across a variety of land covers and row crop directions, the vertical (VV) polarization data contain the highest contrast, while cross polarization contains the least. When the radar flight path is parallel to the row direction, both horizontal (HH) and VV polarization data contain very high return, which masks out the specific land cover that forms the row structure. Cross polarization data are not as sensitive to row orientation. The inclusion of like- and cross-polarization data helps delineate special surface features (e.g., row crops against non-row-oriented land cover, very rough surfaces against highly row-oriented surfaces).
Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Bielak, J.
2008-12-01
I present two numerical techniques to solve the dynamic rupture problem. I revisit and modify the Split Node approach and introduce a Stress-Glut-type method. Both algorithms are implemented using an iso-/sub-parametric FEM solver. In the first case, I discuss the formulation and perform a convergence analysis for different orders of approximation in the acoustic case. I describe the algorithm of the second methodology as well as the assumptions made. The key to the new technique is an accurate representation of the traction. Thus, I devote part of the discussion to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparing against Split Node solutions.
Theoretical studies of system performance and adaptive optics design parameters
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-08-01
The ultimate performance of an adaptive optics (AO) system can be sensitive to specific design parameters of individual components. The type and configuration of the wavefront sensor or the shape of individual deformable mirror actuator influence functions can have a profound effect on the correctability of the AO system. This paper will discuss the results of a theoretical study which employed both closed-form analytic solutions and computer models. Wavefront sensor characteristics, noise, and subaperture geometry are independently evaluated in a parametric analysis against the system response to an aberrated wave characteristic of atmospheric turbulence. Similarly, the shape and extent of the deformable mirror influence function and the placement and number of actuators are evaluated to characterize the effects of fitting error and coupling.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler, identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result, the reduced analog for the interrupter technique, is especially worthy of notice, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, being a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.
Radiation effect on rocket engine performance
NASA Technical Reports Server (NTRS)
Chiu, Huei-Huang; Kross, K. W.; Krebsbach, A. N.
1990-01-01
Critical problem areas involving the effect of radiation on the combustion of bipropellants are addressed by formulating a universal scaling law in combination with a radiation-enhanced vaporization combustion model. Numerical algorithms are developed and data pertaining to the Variable Thrust Engine (VTE) and the Space Shuttle Main Engine (SSME) are used to conduct parametric sensitivity studies to predict the principal intercoupling effects of radiation. The analysis reveals that low-enthalpy engines, such as the VTE, are vulnerable to a substantial performance setback due to radiative loss, whereas the performance of high-enthalpy engines such as the SSME is hardly affected over a broad range of engine operation. Combustion enhancement by radiative heating of the propellant has a significant impact on propellants with high absorptivity.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biomet 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three- and to the general k-class case which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.
Assessment of Dimensionality in Social Science Subtest
ERIC Educational Resources Information Center
Ozbek Bastug, Ozlem Yesim
2012-01-01
Most of the literature on dimensionality has focused either on comparing parametric and nonparametric dimensionality detection procedures or on showing the effectiveness of one type of procedure. No known study has shown how to carry out a combined parametric and nonparametric dimensionality analysis on real data. The current study aims to fill…
Sengupta Chattopadhyay, Amrita; Hsiao, Ching-Lin; Chang, Chien Ching; Lian, Ie-Bin; Fann, Cathy S J
2014-01-01
Identifying susceptibility genes that influence complex diseases is extremely difficult because loci often influence the disease state through genetic interactions. Numerous approaches to detect disease-associated SNP-SNP interactions have been developed, but none consistently generates high-quality results under different disease scenarios. Using summarizing techniques to combine a number of existing methods may provide a solution to this problem. Here we used three popular non-parametric methods - Gini, absolute probability difference (APD), and entropy - to develop two novel summary scores, namely the principal component score (PCS) and the Z-sum score (ZSS), with which to predict disease-associated genetic interactions. We used a simulation study to compare performance of the non-parametric scores, the summary scores, the scaled-sum score (SSS; used in polymorphism interaction analysis (PIA)), and the multifactor dimensionality reduction (MDR). The non-parametric methods achieved high power, but no non-parametric method outperformed all others under a variety of epistatic scenarios. PCS and ZSS, however, outperformed MDR. PCS, ZSS and SSS displayed controlled type I errors (<0.05) compared to GS, APDS, ES (>0.05). A real data study using the Genetic Analysis Workshop 16 (GAW16) rheumatoid arthritis dataset identified a number of interesting SNP-SNP interactions. © 2013 Elsevier B.V. All rights reserved.
Extended analytical solutions for effective elastic moduli of cracked porous media
NASA Astrophysics Data System (ADS)
Nguyen, Sy-Tuan; To, Quy Dong; Vu, Minh Ngoc
2017-05-01
Extended solutions are derived, on the basis of micromechanical methods, for the effective elastic moduli of porous media containing stiff pores and both open and closed cracks. Analytical formulas for the overall bulk and shear moduli are obtained as functions of the elastic moduli of the solid skeleton, the porosity and the densities of the open and closed crack families. We show that the obtained results are extensions of the classical, widely used solutions of Walsh (JGR, 1965) and Budiansky-O'Connell (JGR, 1974). Parametric sensitivity analysis clarifies the impact of the model parameters on the effective elastic properties. An inverse analysis, using sonic and density data, is considered to quantify the density of both open and closed cracks. It is observed that the density of closed cracks depends strongly on the stress condition, while the dependence of open cracks on the confining stress is negligible.
NASA Astrophysics Data System (ADS)
Desa, M. S. M.; Ibrahim, M. H. W.; Shahidan, S.; Ghadzali, N. S.; Misri, Z.
2018-04-01
The acoustic emission (AE) technique is a non-destructive testing (NDT) method that can be used to characterize damage in concrete structures, such as cracking and corrosion, to assess stability and sensitivity, to monitor structures, and to track the energy released as cracks open and grow. This article gives a comprehensive review of AE testing as applied to concrete structures for structural health monitoring (SHM). Assessments of the AE technique applied to structures such as dams, bridges and buildings are reviewed on the basis of previous research on AE applications. The review focuses on the fundamentals of parametric and signal waveform analysis during data processing and on the capability of AE for structural monitoring. Moreover, the assessment and application of AE are summarized and highlighted for future reference.
Credit scoring analysis using kernel discriminant
NASA Astrophysics Data System (ADS)
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The accuracy of the models was compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
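A minimal sketch of the kernel-discriminant idea on synthetic two-dimensional data (not the Indonesian credit data): class-conditional densities are estimated by kernel density estimation, applicants are assigned to the class with the larger density times prior, and sensitivity and specificity are read off the predictions. scikit-learn's KernelDensity is used as a stand-in for the study's implementation; its kernels include the Gaussian and Epanechnikov ones mentioned above, but not the biweight or triweight.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (300, 2)), rng.normal(1.0, 1.2, (300, 2))])
y = np.r_[np.zeros(300), np.ones(300)]            # 1 = default (synthetic labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

def kernel_discriminant(kernel="epanechnikov", bandwidth=0.5):
    log_scores = []
    for c in (0, 1):
        kde = KernelDensity(kernel=kernel, bandwidth=bandwidth).fit(Xtr[ytr == c])
        prior = np.mean(ytr == c)
        log_scores.append(kde.score_samples(Xte) + np.log(prior))  # log p(x|c) + log prior
    return (log_scores[1] > log_scores[0]).astype(int)

pred = kernel_discriminant()
sensitivity = np.mean(pred[yte == 1] == 1)        # true positive rate among defaulters
specificity = np.mean(pred[yte == 0] == 0)        # true negative rate among non-defaulters
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```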
Rasch Analysis of Scientific Literacy in an Astronomical Citizen Science Project
NASA Astrophysics Data System (ADS)
Price, A.
2012-06-01
(Abstract only) We investigate change in attitudes towards science and belief in the nature of science by participants in a citizen science project about astronomy. A pre-test was given to 1,385 participants and a post-test was given six months later to 165 participants. Nine participants were interviewed. Responses were analyzed using the Rasch Rating Scale Model to place Likert data on an interval scale allowing for more sensitive parametric analysis. Results show that overall attitudes did not change, p = .225. However, there was significant change towards attitudes relating to science news (positive) and scientific self efficacy (negative), p = .001 and p = .035, respectively. This change was related to social activity in the project. Beliefs in the nature of science exhibited a small but significant increase, p = .04. Relative positioning of scores on the belief items suggests the increase is mostly due to reinforcement of current beliefs.
Rasch Analysis of Scientific Literacy in an Astronomical Citizen Science Project
NASA Astrophysics Data System (ADS)
Price, Aaron
2011-05-01
We investigate change in attitudes towards science and belief in the nature of science by participants in a citizen science project about astronomy. A pre-test was given to 1,385 participants and a post-test was given six months later to 165 participants. Nine participants were interviewed. Responses were analyzed using the Rasch Rating Scale Model to place Likert data on an interval scale allowing for more sensitive parametric analysis. Results show that overall attitudes did not change, p = .225. However, there was significant change towards attitudes relating to science news (positive) and scientific self efficacy (negative), p < .001 and p = .035 respectively. This change was related to social activity in the project. Beliefs in the nature of science exhibited a small, but significant increase, p = .04. Relative positioning of scores on the belief items suggests the increase is mostly due to reinforcement of current beliefs.
First results from a microwave cavity axion search at 25 μeV : Analysis
NASA Astrophysics Data System (ADS)
Zhong, Ling; ADMX-HF Collaboration
2017-01-01
ADMX-HF searches for dark matter axions via Primakoff conversion into microwave photons in the gigahertz domain. Since 2012, tremendous effort has been made to build an axion detector working in this frequency range. By operating the system in a cryogen-free dilution refrigerator (T = 127 mK) and integrating a Josephson Parametric Amplifier (JPA), we obtain a sufficiently low system noise temperature to exclude axion models with g_aγγ > 2 × 10⁻¹⁴ GeV⁻¹ over the mass range 23.55 μeV
Parametrically excited helicopter ground resonance dynamics with high blade asymmetries
NASA Astrophysics Data System (ADS)
Sanches, L.; Michon, G.; Berlioz, A.; Alazard, D.
2012-07-01
The present work is aimed at verifying the influence of high asymmetries in the variation of the in-plane lead-lag stiffness of one blade on the ground resonance phenomenon in helicopters. The periodic equations of motion are analyzed using Floquet's Theory (FM) and the boundaries of instability are predicted. The stability chart obtained as a function of asymmetry parameters and rotor speed reveals a complex evolution of critical zones and the existence of bifurcation points at low rotor speed values. Additionally, it is known that, when treated as parametric excitations, periodic terms may cause parametric resonances in dynamic systems, some of which can become unstable. Therefore, the helicopter is later considered as a parametrically excited system and the equations are treated analytically by applying the Method of Multiple Scales (MMS). A stability analysis is used to verify the existence of unstable parametric resonances with first- and second-order sets of equations. The results are compared and validated with those obtained by Floquet's Theory. Moreover, an explanation is given for the presence of unstable motion at low rotor speeds due to parametric instabilities of the second order.
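The Floquet step described above can be illustrated on a much simpler parametrically excited oscillator, x'' + (delta + eps cos t) x = 0, rather than the coupled rotor-fuselage equations of the paper: the two fundamental solutions are integrated over one forcing period to form the monodromy matrix, and the motion is flagged unstable when a Floquet multiplier exceeds unit magnitude. The parameter values swept below are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps, period=2.0 * np.pi):
    """Eigenvalues of the monodromy matrix of x'' + (delta + eps*cos t) x = 0."""
    def rhs(t, y):
        return [y[1], -(delta + eps * np.cos(t)) * y[0]]
    columns = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):            # integrate the two fundamental solutions
        sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-9, atol=1e-12)
        columns.append(sol.y[:, -1])
    return np.linalg.eigvals(np.column_stack(columns))

for delta in (0.20, 0.25, 0.30):                    # sweep a stiffness-like parameter
    mult = floquet_multipliers(delta, eps=0.10)
    stable = bool(np.all(np.abs(mult) <= 1.0 + 1e-6))
    print(f"delta={delta:.2f}: |multipliers|={np.round(np.abs(mult), 3)}, stable={stable}")
```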
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment, or divorce. There are various survival models, with semi-parametric or parametric approaches, used in the medical, natural and social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
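For reference, the two criteria compared in the study take their standard forms, where $\hat{L}$ denotes the maximized likelihood of a fitted survival model, $k$ the number of estimated parameters, and $n$ the sample size; the model with the smaller value is preferred:

$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}.$$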
Likert scales, levels of measurement and the "laws" of statistics.
Norman, Geoff
2010-12-01
Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments, and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
Chawes, Bo; Nilsson, Erik; Nørgaard, Sarah; Dossing, Anna; Mortensen, Li; Bisgaard, Hans
2017-08-01
Pharmacodynamic assessment of the systemic effect of inhaled corticosteroids (ICSs) is often done by measuring 24-hour urine free cortisol (UFC) excretion. Knemometry assessing short-term lower-leg growth rate (LLGR) is a more rarely used alternative. The primary aim of this study was to compare the sensitivity of LLGR and 24-hour UFC excretion for evaluating systemic exposure to ICSs in prepubertal children with asthma. The secondary aim was to evaluate factors influencing the precision of LLGR calculated by the traditional 1 leg nonparametric method versus a new 2 leg parametric method. The study evaluated 60 children with mild asthma aged 5 to 12 years participating in a randomized controlled trial of ICSs with longitudinal concomitant assessments of LLGR and 24-hour UFC excretion. The sensitivity of the safety assessments was analyzed by comparing LLGR and 24-hour UFC in the placebo run-in period with values in the ICS treatment period by using paired t tests. Factors with a potential influence on LLGR were analyzed by means of ANOVA and the Levene test of homogeneity. The mean LLGR was significantly reduced during the ICS versus placebo run-in periods: 0.18 mm/wk (SD, 0.55 mm/wk) versus 0.45 mm/wk (SD, 0.39 mm/wk), with a mean difference of 0.27 mm/wk (95% CI, 0.05-0.48 mm/wk; P = .02). In contrast, there was no difference in 24-hour UFC excretion: 6.91 nmol/mmol (SD, 4.67 nmol/mmol) versus 7.58 nmol/mmol (SD, 6.17 nmol/mmol), with a mean difference of 0.67 nmol/mmol (95% CI, -1.13 to 2.48 nmol/mmol; P = .46). We observed no significant difference in parametric determined LLGR caused by the child's age or sex, investigator, or season of measurement, whereas some differences were observed for the nonparametric LLGR. These findings suggest that knemometry is a more sensitive pharmacodynamic measure of systemic effects of ICSs than 24-hour UFC excretion and that a parametric determination of LLGR increases the sensitivity of the method. These findings should be considered by legislative authorities in the future. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Salmani, Majid; Büskens, Christof
2011-11-01
In this article, after describing a procedure to construct trajectories for a spacecraft in the four-body model, a method to correct trajectory violations is presented. To construct the trajectories, periodic orbits obtained as solutions of the three-body problem are used. On the other hand, the bicircular model based on the Sun-Earth rotating frame governs the dynamics of the spacecraft and the other bodies. The destination of the mission is a periodic orbit around the first libration point L1, one of the equilibrium points of the Sun-Earth/Moon three-body problem. On the way to such a distant destination, there are many disturbances, such as solar radiation and winds, that make the plans untrustworthy; the solar radiation pressure is, however, considered in the system dynamics. To overcome these difficulties, the whole transfer problem is treated as an optimal control problem, which enables the designer to correct the unavoidable deviations from the pre-designed trajectory and strategies. The optimal control problem is solved by a direct method, transcribing it into a nonlinear programming problem. This transcription gives an unperturbed optimal trajectory and its sensitivities with respect to perturbations. Modeling these perturbations as parameters embedded in a parametric optimal control problem, one can take advantage of the parametric sensitivity analysis of the nonlinear programming problem to recalculate the optimal trajectory at a much smaller computational cost. This is achieved by evaluating a first-order Taylor expansion of the perturbed solution in an iterative process aimed at reaching an admissible solution. Finally, the numerical results show the applicability of the presented method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Huiping; Qian, Yun; Zhao, Chun
2015-09-09
In this study, we adopt a parametric sensitivity analysis framework that integrates the quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total number of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors and their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors (strength) depends on evaluation metrics or the model fields we focused on, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosol do not always have a monotonic response to the change of emission factors. The spatial patterns of both sign and magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation could be different, depending on the aerosol types, when parameters are sampled in different ranges of values.
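A schematic of the sampling-plus-surrogate workflow, with a cheap analytic placeholder standing in for CAM5, three inputs standing in for the cloud-microphysics and emission parameters, and a gradient-boosted regressor standing in for whatever emulator the study actually used; the 256-point Sobol design simply mirrors the number of simulations quoted above.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import GradientBoostingRegressor

def placeholder_model(x):
    # Stand-in for a CAM5 diagnostic (e.g., a regional precipitation metric); includes an interaction.
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

l_bounds, u_bounds = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]          # illustrative parameter ranges
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
X = qmc.scale(sampler.random_base2(m=8), l_bounds, u_bounds)    # 256 quasi-Monte Carlo samples
y = placeholder_model(X)

surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)

# Crude main-effect curves from the surrogate: sweep one parameter, hold the others mid-range.
grid = np.linspace(0.0, 1.0, 21)
for j, name in enumerate(["param_a", "param_b", "param_c"]):    # hypothetical parameter names
    sweep = np.full((grid.size, 3), 0.5)
    sweep[:, j] = grid
    effect = surrogate.predict(sweep)
    print(f"{name}: main-effect range = {effect.max() - effect.min():.2f}")
```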
Application of artificial neural network to fMRI regression analysis.
Misaki, Masaya; Miyauchi, Satoru
2006-01-15
We used an artificial neural network (ANN) to detect correlations between event sequences and fMRI (functional magnetic resonance imaging) signals. The layered feed-forward neural network, given a series of events as inputs and the fMRI signal as a supervised signal, performed a non-linear regression analysis. This type of ANN is capable of approximating any continuous function, and thus this analysis method can detect any fMRI signals that correlated with corresponding events. Because of the flexible nature of ANNs, fitting to autocorrelation noise is a problem in fMRI analyses. We avoided this problem by using cross-validation and an early stopping procedure. The results showed that the ANN could detect various responses with different time courses. The simulation analysis also indicated an additional advantage of ANN over non-parametric methods in detecting parametrically modulated responses, i.e., it can detect various types of parametric modulations without a priori assumptions. The ANN regression analysis is therefore beneficial for exploratory fMRI analyses in detecting continuous changes in responses modulated by changes in input values.
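A compact analogue of the regression described above, with a synthetic two-regressor design matrix and a synthetic voxel time series in place of real fMRI data: scikit-learn's MLPRegressor stands in for the layered feed-forward network, a held-out validation fraction with early stopping limits fitting of noise, and cross-validation checks the fit, echoing the safeguards mentioned in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_scans = 240
events = rng.binomial(1, 0.2, size=(n_scans, 2)).astype(float)          # two event-type regressors
response = 1.5 * np.convolve(events[:, 0], np.hanning(8), mode="same")  # crude smoothed "response"
bold = response + 0.5 * rng.normal(size=n_scans)                        # synthetic voxel signal

net = MLPRegressor(hidden_layer_sizes=(8,), early_stopping=True,
                   validation_fraction=0.2, max_iter=2000, random_state=0)

# Cross-validated fit quality guards against the network modelling autocorrelated noise.
scores = cross_val_score(net, events, bold, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 2))
```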
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. It was concluded that a single grid generation methodology is not universally suited for all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of the inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of the model inputs. The proposed concepts efficiently guide the analyst in achieving a targeted reduction of the model output mean and variance by operating on the variances of the model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
Power flow analysis of two coupled plates with arbitrary characteristics
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The limitation of keeping the two plates identical is removed, and the vibrational power input and output are evaluated for different area ratios, plate thickness ratios, and values of the structural damping loss factor for the source plate (the plate with excitation) and the receiver plate. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. As was done previously, results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits that can be derived from using the mobility power flow approach are also examined.
Ilan, Ezgi; Sandström, Mattias; Velikyan, Irina; Sundin, Anders; Eriksson, Barbro; Lubberink, Mark
2017-05-01
68Ga-DOTATOC and 68Ga-DOTATATE are radiolabeled somatostatin analogs used for the diagnosis of somatostatin receptor-expressing neuroendocrine tumors (NETs), and SUV measurements are suggested for treatment monitoring. However, changes in the net influx rate (Ki) may better reflect treatment effects than those of the SUV, and accordingly there is a need to compute parametric images showing Ki at the voxel level. The aim of this study was to evaluate parametric methods for computation of parametric Ki images by comparison to volume of interest (VOI)-based methods and to assess image contrast in terms of tumor-to-liver ratio. Methods: Ten patients with metastatic NETs underwent a 45-min dynamic PET examination followed by whole-body PET/CT at 1 h after injection of 68Ga-DOTATOC and 68Ga-DOTATATE on consecutive days. Parametric Ki images were computed using a basis function method (BFM) implementation of the 2-tissue-irreversible-compartment model and the Patlak method using a descending aorta image-derived input function, and mean tumor Ki values were determined for 50% isocontour VOIs and compared with Ki values based on nonlinear regression (NLR) of the whole-VOI time-activity curve. A subsample of healthy liver was delineated in the whole-body and Ki images, and tumor-to-liver ratios were calculated to evaluate image contrast. Correlation (R2) and agreement between VOI-based and parametric Ki values were assessed using regression and Bland-Altman analysis. Results: The R2 between NLR-based and parametric image-based (BFM) tumor Ki values was 0.98 (slope, 0.81) and 0.97 (slope, 0.88) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. For Patlak analysis, the R2 between NLR-based and parametric-based (Patlak) tumor Ki was 0.95 (slope, 0.71) and 0.92 (slope, 0.74) for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. There was no bias between NLR- and parametric-based Ki values. Tumor-to-liver contrast was 1.6 and 2.0 times higher in the parametric BFM Ki images and 2.3 and 3.0 times higher in the Patlak images than in the whole-body images for 68Ga-DOTATOC and 68Ga-DOTATATE, respectively. Conclusion: A high R2 and agreement between NLR- and parametric-based Ki values was found, showing that Ki images are quantitatively accurate. In addition, tumor-to-liver contrast was superior in the parametric Ki images compared with whole-body images for both 68Ga-DOTATOC and 68Ga-DOTATATE. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
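For the Patlak branch of such an analysis, the voxel-level (or VOI-level) Ki is simply the slope of a late-time linear fit; a minimal sketch on synthetic time-activity curves (all names, times and constants are illustrative, and the basis function method is not shown):

import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=20.0):
    """Estimate the net influx rate Ki from a tissue time-activity curve and an
    (image-derived) input function via the Patlak graphical method.

    t         : mid-frame times (min)
    c_tissue  : tissue (tumor voxel/VOI) activity concentration
    c_plasma  : input-function activity concentration
    t_star    : time after which the plot is assumed linear
    """
    cum_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    x = cum_cp / c_plasma          # "Patlak time"
    y = c_tissue / c_plasma
    mask = t >= t_star
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope                   # slope = Ki; intercept ~ initial distribution volume

# Illustrative synthetic curves (not patient data):
t = np.linspace(0.5, 45.0, 90)
c_plasma = 100.0 * np.exp(-0.15 * t) + 5.0
ki_true, v0 = 0.05, 0.3
cum_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
c_tissue = ki_true * cum_cp + v0 * c_plasma
print("recovered Ki:", patlak_ki(t, c_tissue, c_plasma))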
Optimal design of high-rise buildings with respect to fundamental eigenfrequency
NASA Astrophysics Data System (ADS)
Alavi, Arsalan; Rahgozar, Reza; Torkzadeh, Peyman; Hajabasi, Mohamad Ali
2017-12-01
In modern tall and slender structures, dynamic responses, rather than strength criteria, are usually the dominant design requirements. Resonance is often a threatening phenomenon for such structures. To avoid this problem, the fundamental eigenfrequency (or an eigenfrequency of higher order) should be maximized. An optimization problem with this objective is constructed in this paper and applied to a high-rise building. Using a variational method, the objective function is maximized, leading to a particular profile for the first mode shape. Based on this preselected profile, a parametric formulation for the flexural stiffness is calculated. Because of some near-zero stiffness values, the obtained formulation is modified by adding a lower-bound constraint. To handle this constraint, some new parameters are introduced, thereby allowing construction of a model relating the unknown parameters. Based on this mathematical model, a design algorithmic procedure is presented. For convenience, a single-input design graph is presented as well. The main merit of the proposed method, compared to previous studies, is its suitability for hand calculation, making it well suited to parametric studies and sensitivity analysis. As the presented formulations are dimensionless, they are applicable in any system of units. The accuracy and practicality of the proposed method are illustrated at the end by applying it to a real-life structure.
Zeng, Li-ping; Hu, Zheng-mao; Mu, Li-li; Mei, Gui-sen; Lu, Xiu-ling; Zheng, Yong-jun; Li, Pei-jian; Zhang, Ying-xue; Pan, Qian; Long, Zhi-gao; Dai, He-ping; Zhang, Zhuo-hua; Xia, Jia-hui; Zhao, Jing-ping; Xia, Kun
2011-06-01
To investigate the relationship between susceptibility loci on chromosomes 1q21-25 and 6p21-25 and schizophrenia subtypes in a Chinese population, a genomic scan and parametric and non-parametric linkage analyses were performed on 242 individuals from 36 schizophrenia pedigrees, including 19 paranoid schizophrenia and 17 undifferentiated schizophrenia pedigrees, from Henan province of China, using 5 microsatellite markers in the chromosome region 1q21-25 and 8 microsatellite markers in the chromosome region 6p21-25, which were candidates from previous studies. All affected subjects were diagnosed and typed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revised (DSM-IV-TR; American Psychiatric Association, 2000). All subjects signed informed consent. On chromosome 1, parametric analysis of all 36 pedigrees under the dominant inheritance mode showed a maximum multi-point heterogeneity LOD (HLOD) score of 1.33 (α = 0.38). The non-parametric analysis and the single-point and multi-point nonparametric linkage (NPL) scores suggested linkage at D1S484, D1S2878, and D1S196. In the 19 paranoid schizophrenia pedigrees, linkage was not observed for any of the 5 markers. In the 17 undifferentiated schizophrenia pedigrees, the multi-point NPL score was 1.60 (P = 0.0367) at D1S484, the single-point NPL score was 1.95 (P = 0.0145) and the multi-point NPL score was 2.39 (P = 0.0041) at D1S2878, and the multi-point NPL score was 1.74 (P = 0.0255) at D1S196. These same three loci showed suggestive linkage in the integrative analysis of all 36 pedigrees. On chromosome 6, in parametric linkage analysis under the dominant and recessive inheritance modes and in non-parametric linkage analysis of all 36 pedigrees and of the 17 undifferentiated schizophrenia pedigrees, linkage was not observed for any of the 8 markers. In the 19 paranoid schizophrenia pedigrees, parametric analysis under the recessive inheritance mode showed a maximum single-point HLOD score of 1.26 (α = 0.40) and a multi-point HLOD score of 1.12 (α = 0.38) at D6S289 in chromosome region 6p23. In nonparametric analysis, the single-point NPL score was 1.52 (P = 0.0402) and the multi-point NPL score was 1.92 (P = 0.0206) at D6S289. Susceptibility genes for undifferentiated schizophrenia, correlated with the D1S484, D1S2878 and D1S196 loci, and for paranoid schizophrenia, correlated with the D6S289 locus, are therefore likely present in chromosome regions 1q23.3 and 1q24.2, and in chromosome region 6p23, respectively.
Progress on automated data analysis algorithms for ultrasonic inspection of composites
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2015-03-01
Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
Torsional vibration of a cracked rod by variational formulation and numerical analysis
NASA Astrophysics Data System (ADS)
Chondros, T. G.; Labeas, G. N.
2007-04-01
The torsional vibration of a circumferentially cracked cylindrical shaft is studied through an "exact" analytical solution and a numerical finite element (FE) analysis. The Hu-Washizu-Barr variational formulation is used to develop the differential equation and the boundary conditions of the cracked rod. The equations of motion for a uniform cracked rod in torsional vibration are derived and solved, and the Rayleigh quotient is used to further approximate the natural frequencies of the cracked rod. Results for the problem of the torsional vibration of a cylindrical shaft with a peripheral crack are provided through an analytical solution based on variational formulation to derive the equation of motion and a numerical analysis utilizing a parametric three-dimensional (3D) solid FE model of the cracked rod. The crack is modelled as a continuous flexibility based on fracture mechanics principles. The variational formulation results are compared with the FE alternative. The sensitivity of the FE discretization with respect to the analytical results is assessed.
Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia
A problem of the stochastic nonlinear analysis of neuronal activity is studied by the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into the bursting one. This stochastic phenomenon is specified by qualitative changes in distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains that allows us to describe geometrically a distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimation of critical values for the noise intensity corresponding to the qualitative changes in stochastic dynamics. We show that the obtained estimations are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
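A minimal stochastic simulation of the Hindmarsh-Rose model with additive noise, from which interspike-interval statistics can be collected, is sketched below; the parameter values and the simple threshold spike detector are illustrative assumptions and do not reproduce the study's stochastic sensitivity function analysis.

import numpy as np

# Stochastic Hindmarsh-Rose model integrated by Euler-Maruyama.
a, b, c, d, s, x_r, r_par, I = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 0.006, 4.0
sigma = 0.05                      # noise intensity (illustrative)
dt, n_steps = 0.01, 200_000
rng = np.random.default_rng(0)

x, y, z = -1.5, -10.0, 2.0
spike_times, prev_above = [], False
for k in range(n_steps):
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r_par * (s * (x - x_r) - z)
    x += dx * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    y += dy * dt
    z += dz * dt
    above = x > 1.0               # simple threshold-crossing spike detector
    if above and not prev_above:
        spike_times.append(k * dt)
    prev_above = above

isis = np.diff(spike_times)       # interspike intervals
print(f"{len(isis)} ISIs, mean {isis.mean():.2f}, CV {isis.std() / isis.mean():.2f}")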
Revisiting dark energy models using differential ages of galaxies
NASA Astrophysics Data System (ADS)
Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Biesiada, Marek
2017-03-01
In this work, we use a test based on the differential ages of galaxies for distinguishing between dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H0 dt/dz and its derivative (H0 d^2t/dz^2) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence, for the reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H0 dt/dz and its derivative for several models: Phantom, Einstein-de Sitter (EdS), ΛCDM, the Chevallier-Polarski-Linder (CPL) parametrization, the Jassal-Bagla-Padmanabhan (JBP) parametrization and the Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of the reconstruction obtained in a model-independent way from the data. It is observed that H0 dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H0 dt/dz for the various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z >= 0.3.
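The quantity H0 dt/dz follows directly from the Hubble parameter, since dt/dz = -1/((1+z)H(z)); a minimal sketch evaluating it for the ΛCDM and EdS cases (a fiducial Ωm = 0.3 is an assumption), against which a model-independent reconstruction could be compared:

import numpy as np

def E_lcdm(z, om=0.3):
    # Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM
    return np.sqrt(om * (1 + z) ** 3 + (1 - om))

def E_eds(z):
    # Einstein-de Sitter: matter only
    return (1 + z) ** 1.5

def H0_dt_dz(z, E):
    # dt/dz = -1 / ((1+z) H(z))  =>  H0 dt/dz = -1 / ((1+z) E(z))
    return -1.0 / ((1 + z) * E(z))

for zi in np.linspace(0.0, 2.0, 9):
    print(f"z={zi:.2f}  LCDM: {H0_dt_dz(zi, E_lcdm): .3f}   EdS: {H0_dt_dz(zi, E_eds): .3f}")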
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.
1983-01-01
Modeling in the development of low-stress configurations for wide web growth is presented. A parametric sensitivity study was carried out to identify design features that can be used for dynamic trimming of the furnace element. Temperature measurements of experimental growth behavior led to modifications of the growth system to improve lateral temperature distributions.
A parametric study of fracture toughness of fibrous composite materials
NASA Technical Reports Server (NTRS)
Poe, C. C., Jr.
1987-01-01
Impacts to fibrous composite laminates by objects at low velocities can break fibers, giving crack-like damage. The damage may not extend completely through a thick laminate. The tension strength of these damaged laminates is reduced much like that of cracked metals. The fracture toughness depends on fiber and matrix properties, fiber orientations, and stacking sequence. Accordingly, a parametric study was made to determine how fiber and matrix properties and fiber orientations affect fracture toughness and notch sensitivity. The values of fracture toughness were predicted from the elastic constants of the laminate and the failing strain of the fibers using a general fracture toughness parameter developed previously. For a variety of laminates, values of fracture toughness from tests of center-cracked specimens and values of residual strength from tests of thick laminates with surface cracks were compared to the predictions to give credibility to the study. In contrast to the usual behavior of metals, it is shown that both the ultimate tensile strength and the fracture toughness of composites can be increased without increasing notch sensitivity.
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited better overall performance in terms of its ability to detect a seizure with a high optimality index value and high statistics in terms of sensitivity and specificity.
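Of the listed measures, the simplest to illustrate is a sliding-window energy/variance feature; the sketch below is a generic stand-in (window lengths, threshold rule and the toy signal are assumptions, not the paper's exact variance-energy definition):

import numpy as np

def sliding_variance(signal, fs, window_s=2.0, step_s=0.5):
    """Generic sliding-window variance of an EEG channel (arbitrary units)."""
    win, step = int(window_s * fs), int(step_s * fs)
    starts = range(0, len(signal) - win, step)
    return np.array([signal[i:i + win].var() for i in starts])

fs = 500.0                                              # illustrative sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
eeg = np.random.randn(t.size)                           # toy background activity
eeg[t > 40] += 3 * np.sin(2 * np.pi * 8 * t[t > 40])    # toy "seizure-like" rhythm
feature = sliding_variance(eeg, fs)
baseline = feature[: len(feature) // 2]
threshold = baseline.mean() + 5 * baseline.std()        # simple detection rule
print("first window above threshold:", np.argmax(feature > threshold))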
Peripheral refractive correction and automated perimetric profiles.
Wild, J M; Wood, J M; Crews, S J
1988-06-01
The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degrees, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then measured at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.
A Novel Phase Sensitive Quantum Well Nanostructure Scheme for Controlling Optical Bistability
NASA Astrophysics Data System (ADS)
Raheli, Ali
2018-04-01
A novel four-level lambda-type quantum well (QW) nanostructure is proposed, based on phase-sensitive optical bistability (OB) and optical multistability (OM) with a closed-loop configuration. The influence of the controlling parameters of the system on OB and OM is investigated. In particular, it is found that the OB behavior is strongly sensitive to the relative phase of the applied fields. It is also shown that, under certain parametric conditions, the OB can be switched to OM or vice versa. The controllability of OB/OM in such a QW nanostructure may open new possibilities for technological applications in solid-state quantum information science and optoelectronics.
Small-window parametric imaging based on information entropy for ultrasound tissue characterization
Tsui, Po-Hsiang; Chen, Chin-Kuo; Kuo, Wen-Hung; Chang, King-Jen; Fang, Jui; Ma, Hsiang-Yang; Chou, Dean
2017-01-01
Constructing ultrasound statistical parametric images by using a sliding window is a widely adopted strategy for characterizing tissues. Deficiency in spatial resolution, the appearance of boundary artifacts, and the prerequisite data distribution limit the practicability of statistical parametric imaging. In this study, small-window entropy parametric imaging was proposed to overcome the above problems. Simulations and measurements of phantoms were executed to acquire backscattered radiofrequency (RF) signals, which were processed to explore the feasibility of small-window entropy imaging in detecting scatterer properties. To validate the ability of entropy imaging in tissue characterization, measurements of benign and malignant breast tumors were conducted (n = 63) to compare performances of conventional statistical parametric (based on Nakagami distribution) and entropy imaging by the receiver operating characteristic (ROC) curve analysis. The simulation and phantom results revealed that entropy images constructed using a small sliding window (side length = 1 pulse length) adequately describe changes in scatterer properties. The area under the ROC for using small-window entropy imaging to classify tumors was 0.89, which was higher than 0.79 obtained using statistical parametric imaging. In particular, boundary artifacts were largely suppressed in the proposed imaging technique. Entropy enables using a small window for implementing ultrasound parametric imaging. PMID:28106118
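A minimal sketch of small-window entropy imaging on a toy envelope image: Shannon entropy is computed in a small sliding window, so that regions with different scatterer statistics take different entropy values. The window size, histogram binning and the synthetic Rayleigh-distributed data are illustrative assumptions, not the study's implementation.

import numpy as np

def entropy_map(envelope, win=16):
    """Shannon entropy in a small sliding window over an envelope image."""
    rows, cols = envelope.shape
    out = np.full((rows - win, cols - win), np.nan)
    for i in range(rows - win):
        for j in range(cols - win):
            patch = envelope[i:i + win, j:j + win].ravel()
            p, _ = np.histogram(patch, bins=32)
            p = p[p > 0] / p.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out

# Toy envelope image: background speckle plus a brighter, denser-scatterer inclusion
rng = np.random.default_rng(0)
env = rng.rayleigh(1.0, size=(128, 128))
env[48:80, 48:80] = rng.rayleigh(2.5, size=(32, 32))
ent = entropy_map(env)
print("background vs inclusion entropy:",
      np.nanmean(ent[:20, :20]), np.nanmean(ent[50:60, 50:60]))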
Nonlinear PET parametric image reconstruction with MRI information using kernel method
NASA Astrophysics Data System (ADS)
Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2017-03-01
Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction method of multipliers (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marc Cremer; Kirsi St. Marie; Dave Wang
2003-04-30
This is the first Semiannual Technical Report for DOE Cooperative Agreement No: DE-FC26-02NT41580. The goal of this project is to systematically assess the sensitivity of furnace operational conditions to burner air and fuel flows in coal fired utility boilers. Our approach is to utilize existing baseline furnace models that have been constructed using Reaction Engineering International's (REI) computational fluid dynamics (CFD) software. Using CFD analyses provides the ability to carry out a carefully controlled virtual experiment to characterize the sensitivity of NOx emissions, unburned carbon (UBC), furnace exit CO (FECO), furnace exit temperature (FEGT), and waterwall deposition to burner flow controls. The Electric Power Research Institute (EPRI) is providing co-funding for this program, and instrument and controls experts from EPRI's Instrument and Controls (I&C) Center are active participants in this project. This program contains multiple tasks and good progress is being made on all fronts. A project kickoff meeting was held in conjunction with NETL's 2002 Sensors and Control Program Portfolio Review and Roadmapping Workshop, in Pittsburgh, PA during October 15-16, 2002. Dr. Marc Cremer, REI, and Dr. Paul Wolff, EPRI I&C, both attended and met with the project COR, Susan Maley. Following the review of REI's database of wall-fired coal units, the project team selected a front wall fired 150 MW unit with a Riley Low NOx firing system including overfire air for evaluation. In addition, a test matrix outlining approximately 25 simulations involving variations in burner secondary air flows, and coal and primary air flows was constructed. During the reporting period, twenty-two simulations have been completed, summarized, and tabulated for sensitivity analysis. Based on these results, the team is developing a suitable approach for quantifying the sensitivity coefficients associated with the parametric tests. Some of the results of the CFD simulations of the single wall fired unit were presented in a technical paper entitled, "CFD Investigation of the Sensitivity of Furnace Operational Conditions to Burner Flow Controls," presented at the 28th International Technical Conference on Coal Utilization and Fuel Systems in Clearwater, FL, March 9-14, 2003. In addition to the work completed on the single wall fired unit, the project team made the selection of a 580 MW opposed wall fired unit to be the subject of evaluation in this program. Work is in progress to update the baseline model of this unit so that the parametric simulations can be initiated.
NASA Astrophysics Data System (ADS)
Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.
2018-04-01
There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested using parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data of air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind direction and mean hourly air temperature, whereas mean daily wind direction and mean daily air temperature do not seem to be correlated. In some cases, however, the two procedures were found to give quite dissimilar levels of significance for the rejection or not of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating the effects of wind in local climate studies.
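One standard parametric option for such data is the circular-linear correlation coefficient (Mardia), with an asymptotic chi-square test of no correlation; a minimal sketch on toy wind-direction and temperature data (the data-generating numbers are assumptions):

import numpy as np
from scipy import stats

def circular_linear_corr(theta, x):
    """Parametric circular-linear correlation (Mardia); theta in radians."""
    rxc = np.corrcoef(x, np.cos(theta))[0, 1]
    rxs = np.corrcoef(x, np.sin(theta))[0, 1]
    rcs = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]
    r2 = (rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2)
    p_value = stats.chi2.sf(len(x) * r2, df=2)   # asymptotic test of no correlation
    return np.sqrt(r2), p_value

# Toy hourly data: wind direction (deg) loosely driving air temperature
rng = np.random.default_rng(0)
wind_dir = rng.uniform(0, 360, 240)
temp = 28 + 3 * np.cos(np.deg2rad(wind_dir - 200)) + rng.normal(0, 1, 240)
r, p = circular_linear_corr(np.deg2rad(wind_dir), temp)
print(f"r = {r:.2f}, p = {p:.4f}")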
A parametric ribcage geometry model accounting for variations among the adult population.
Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen
2016-09-06
The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 landmarks for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict the nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models were developed: a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064), and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties, and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying the effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
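The GPA-PCA-regression pipeline can be sketched generically as follows: PCA defines the shape space and a regression maps age, sex, height and BMI to PC scores, from which a new subject's mesh is predicted. Everything below (toy landmark data, number of components, coefficient scales) is an illustrative assumption, not the published model.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_landmarks = 101, 1016
covariates = np.column_stack([
    rng.uniform(20, 80, n_subjects),      # age
    rng.integers(0, 2, n_subjects),       # sex (0/1)
    rng.uniform(1.5, 1.95, n_subjects),   # height (m)
    rng.uniform(18, 35, n_subjects),      # BMI
])
# Toy aligned landmark coordinates (in practice: after Procrustes alignment)
shapes = rng.normal(size=(n_subjects, n_landmarks * 3)) \
         + covariates @ rng.normal(size=(4, n_landmarks * 3)) * 0.01

pca = PCA(n_components=10).fit(shapes)            # shape space
scores = pca.transform(shapes)
reg = LinearRegression().fit(covariates, scores)  # PC scores as functions of age/sex/height/BMI

def predict_ribcage(age, sex, height, bmi):
    """Predict template-mesh node locations for a target subject description."""
    pc = reg.predict([[age, sex, height, bmi]])
    return pca.inverse_transform(pc).reshape(n_landmarks, 3)

print(predict_ribcage(45, 1, 1.75, 24).shape)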
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) for removing continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in terms of both performance and simplicity. © The Author(s) 2016.
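A minimal sketch of the PCA-based idea: learn a low-dimensional basis from a matrix of continuous baselines, estimate a measured spectrum's baseline as its projection onto that basis, and subtract it. The toy polynomial baselines, peak positions and number of components below are assumptions, not the paper's data or settings.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 800, 500)

# Learning matrix of smooth, continuous baselines (toy: random low-order polynomials)
baselines = np.array([np.polyval(rng.normal(0, 1, 3), (wavelengths - 550) / 250)
                      for _ in range(200)])
pca = PCA(n_components=5).fit(baselines)

def remove_baseline(spectrum):
    """Estimate the continuous baseline as the projection of the spectrum onto
    the PCA basis of sampled baselines, and subtract it."""
    baseline_hat = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
    return spectrum - baseline_hat, baseline_hat

# Toy measured spectrum: baseline + two narrow emission peaks + noise
true_baseline = 0.5 + 0.3 * (wavelengths - 300) / 500
peaks = np.exp(-0.5 * ((wavelengths - 589) / 2) ** 2) + 0.7 * np.exp(-0.5 * ((wavelengths - 422) / 2) ** 2)
measured = true_baseline + peaks + 0.01 * rng.standard_normal(wavelengths.size)
corrected, est = remove_baseline(measured)
print("RMS of estimated-minus-true baseline:",
      round(float(np.sqrt(np.mean((est - true_baseline) ** 2))), 4))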
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
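The core of such a voxel-wise multimodal analysis is an ordinary GLM at every voxel with the second modality entering as a regressor; a minimal sketch on toy data (dimensions, effect sizes and the group covariate are assumptions, and none of the BPM toolbox's SPM integration is shown):

import numpy as np
from scipy import stats

# Toy multimodal data: for each subject, an fMRI map and a co-registered
# structural map over the same voxels (dimensions illustrative).
rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000
struct = rng.normal(size=(n_subjects, n_voxels))
group = np.repeat([0, 1], n_subjects // 2)               # covariate of interest
fmri = 0.4 * struct + 0.3 * group[:, None] + rng.normal(size=(n_subjects, n_voxels))

t_map = np.empty(n_voxels)
for v in range(n_voxels):
    # Voxel-wise GLM: the other modality enters as a regressor alongside the covariate
    X = np.column_stack([np.ones(n_subjects), group, struct[:, v]])
    beta, *_ = np.linalg.lstsq(X, fmri[:, v], rcond=None)
    resid = fmri[:, v] - X @ beta
    dof = n_subjects - X.shape[1]
    cov = np.linalg.inv(X.T @ X) * (resid @ resid) / dof
    t_map[v] = beta[1] / np.sqrt(cov[1, 1])               # t statistic for the group effect

p_map = 2 * stats.t.sf(np.abs(t_map), df=n_subjects - 3)
print("voxels significant at p<0.001 (uncorrected):", (p_map < 1e-3).sum())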
Ruiz-Sanchez, Eduardo
2015-12-01
The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis both supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to estimate divergence times in Otatea, placing the origin of the speciation events between the Late Miocene and the Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified the two populations of O. acuminata from Chiapas and Hidalgo as two separate evolutionary lineages, and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA has been performed extensively under a univariate scenario for both at-site and regional estimation of return periods. However, because the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature] are mutually dependent, analysis has been further extended to a multivariate scenario, with some restrictive assumptions. To overcome the assumption that all flood variables share the same family of marginal density function, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; a nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential for obtaining the best fit can be improved with nonparametric distributions because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of combining the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, it can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution from a comprehensive set of parametric and nonparametric distributions for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based, and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. In summary, the nonparametric method cannot substitute for the parametric and copula-based approaches, but it should be considered in any at-site FFA to provide the broadest choice for the best estimation of flood return periods.
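A minimal sketch of the marginal-selection step (ii) alone, comparing a parametric GEV fit against a non-parametric Gaussian-kernel density on toy annual peak flows via a Kolmogorov-Smirnov distance; the data, the candidate set and the goodness-of-fit score are illustrative assumptions, and the copula-based multivariate steps are not shown.

import numpy as np
from scipy import stats

# Toy annual peak flows (110 "years"), not the Allegheny River record
peak_flow = stats.genextreme.rvs(c=-0.1, loc=800, scale=250, size=110, random_state=0)

# Candidate 1: parametric GEV fit
c, loc, scale = stats.genextreme.fit(peak_flow)
ks_gev = stats.kstest(peak_flow, stats.genextreme(c, loc, scale).cdf).statistic

# Candidate 2: non-parametric Gaussian kernel density
kde = stats.gaussian_kde(peak_flow)
kde_cdf = lambda x: np.array([kde.integrate_box_1d(-np.inf, xi) for xi in np.atleast_1d(x)])
ks_kde = stats.kstest(peak_flow, kde_cdf).statistic

print(f"KS distance  GEV: {ks_gev:.3f}   Gaussian kernel: {ks_kde:.3f}")

# Example quantile use of the selected marginal: the 100-year peak flow under GEV
q100_gev = stats.genextreme(c, loc, scale).ppf(1 - 1 / 100)
print("GEV 100-year peak flow estimate:", round(q100_gev, 1))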
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.
A Lunar Surface Operations Simulator
NASA Technical Reports Server (NTRS)
Nayar, H.; Balaram, J.; Cameron, J.; Jain, A.; Lim, C.; Mukherjee, R.; Peters, S.; Pomerantz, M.; Reder, L.; Shakkottai, P.;
2008-01-01
The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.
Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain
Park, Soojin; Konkle, Talia; Oliva, Aude
2015-01-01
Estimating the size of a space and its degree of clutter are effortless and ubiquitous tasks of moving agents in a natural environment. Here, we examine how regions along the occipital–temporal lobe respond to pictures of indoor real-world scenes that parametrically vary in their physical “size” (the spatial extent of a space bounded by walls) and functional “clutter” (the organization and quantity of objects that fill up the space). Using a linear regression model on multivoxel pattern activity across regions of interest, we find evidence that both properties of size and clutter are represented in the patterns of parahippocampal cortex, while the retrosplenial cortex activity patterns are predominantly sensitive to the size of a space, rather than the degree of clutter. Parametric whole-brain analyses confirmed these results. Importantly, this size and clutter information was represented in a way that generalized across different semantic categories. These data provide support for a property-based representation of spaces, distributed across multiple scene-selective regions of the cerebral cortex. PMID:24436318
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and the software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
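The trapezoidal segmentation described above reduces to summing exact integrals of a piecewise-linear cost-loading rate; a minimal sketch (breakpoints and rates are illustrative, and this is not the Lotus-123 prototype):

import numpy as np

def segment_cost(t0, t1, rate0, rate1):
    """Cost of one project segment: the integral of a linearly varying
    cost-loading rate between t0 and t1 (a trapezoid area)."""
    return 0.5 * (rate0 + rate1) * (t1 - t0)

# Illustrative trapezoidal loading for one task: ramp-up, plateau, ramp-down
breakpoints = [0, 2, 8, 10]          # months
rates = [0, 50, 50, 0]               # $k per month at each breakpoint
total = sum(segment_cost(breakpoints[i], breakpoints[i + 1], rates[i], rates[i + 1])
            for i in range(len(breakpoints) - 1))
print("total task cost ($k):", total)   # 0.5*(0+50)*2 + 50*6 + 0.5*(50+0)*2 = 400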
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wade, A. R.; Mansell, G. L.; McRae, T. G., E-mail: Terry.Mcrae@anu.edu.au
With the recent detection of gravitational waves, non-classical light sources are likely to become an essential element of future detectors engaged in gravitational wave astronomy and cosmology. Operating a squeezed light source under high vacuum has the advantages of reducing optical losses and phase noise compared to techniques where the squeezed light is introduced from outside the vacuum. This will ultimately provide enhanced sensitivity for modern interferometric gravitational wave detectors that will soon become limited by quantum noise across much of the detection bandwidth. Here we describe the optomechanical design choices and construction techniques of a near monolithic glass optical parametric oscillator that has been operated under a vacuum of 10^-6 mbar. The optical parametric oscillator described here has been shown to produce 8.6 dB of quadrature squeezed light in the audio frequency band down to 10 Hz. This performance has been maintained for periods of around an hour and the system has been under vacuum continuously for several months without a degradation of this performance.
Quantum tomography enhanced through parametric amplification
NASA Astrophysics Data System (ADS)
Knyazev, E.; Spasibko, K. Yu; Chekhova, M. V.; Khalili, F. Ya
2018-01-01
Quantum tomography is the standard method of reconstructing the Wigner function of quantum states of light by means of balanced homodyne detection. The reconstruction quality strongly depends on the quantum efficiency of the photodetectors and on other losses in the measurement setup. In this article we analyze in detail a protocol of enhanced quantum tomography, proposed by Leonhardt and Paul [1], which allows one to reduce the degrading effect of detection losses. It is based on phase-sensitive parametric amplification, with the phase of the amplified quadrature being scanned synchronously with the local oscillator phase. Although with sufficiently strong amplification the protocol enables overcoming any detection inefficiency, it has so far not been implemented in experiment, probably because of losses in the amplifier. Here we discuss a possible proof-of-principle experiment with a traveling-wave parametric amplifier. We show that with state-of-the-art optical elements the protocol enables high-fidelity tomographic reconstruction of bright non-classical states of light. We consider two examples: bright squeezed vacuum and the squeezed single-photon state, the latter being a non-Gaussian state and both being strongly affected by losses.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is effectively reduced, enhancing the robustness of the retrieval results against interference. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
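A minimal sketch of the non-parametric retrieval step: a discretized forward model is inverted with SciPy's LSQR, with a small damping term standing in for regularization of the ill-conditioned system. The kernel, wavelengths and the "true" size distribution below are toy assumptions, not the ADA/Lambert-Beer kernel of the study.

import numpy as np
from scipy.sparse.linalg import lsqr

# Toy linear forward model: measured spectral extinction = K @ f, where f is the
# discretized aerosol size distribution.
radii = np.linspace(0.1, 2.0, 60)                     # micrometres
wavelengths = np.linspace(0.4, 1.0, 12)               # selected wavelengths
K = np.exp(-((radii[None, :] - 1.5 * wavelengths[:, None]) ** 2) / 0.1)

f_true = np.exp(-0.5 * ((np.log(radii) - np.log(0.6)) / 0.35) ** 2)  # log-normal-like ASD
b = K @ f_true
b_noisy = b * (1 + 0.02 * np.random.default_rng(0).standard_normal(b.size))

# LSQR with a small damping term to stabilise the ill-conditioned inversion.
# The retrieval accuracy depends strongly on the chosen wavelengths and the
# damping strength, echoing the optimal-wavelength-selection point above.
f_hat = lsqr(K, b_noisy, damp=1e-2)[0]
print("relative retrieval error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))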
NASA Astrophysics Data System (ADS)
Wade, A. R.; Mansell, G. L.; McRae, T. G.; Chua, S. S. Y.; Yap, M. J.; Ward, R. L.; Slagmolen, B. J. J.; Shaddock, D. A.; McClelland, D. E.
2016-06-01
With the recent detection of gravitational waves, non-classical light sources are likely to become an essential element of future detectors engaged in gravitational wave astronomy and cosmology. Operating a squeezed light source under high vacuum has the advantages of reducing optical losses and phase noise compared to techniques where the squeezed light is introduced from outside the vacuum. This will ultimately provide enhanced sensitivity for modern interferometric gravitational wave detectors that will soon become limited by quantum noise across much of the detection bandwidth. Here we describe the optomechanical design choices and construction techniques of a near monolithic glass optical parametric oscillator that has been operated under a vacuum of 10^-6 mbar. The optical parametric oscillator described here has been shown to produce 8.6 dB of quadrature squeezed light in the audio frequency band down to 10 Hz. This performance has been maintained for periods of around an hour and the system has been under vacuum continuously for several months without a degradation of this performance.
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Validity, sensitivity and specificity of the mentation, behavior and mood subscale of the UPDRS.
Holroyd, Suzanne; Currie, Lillian J; Wooten, G Frederick
2008-06-01
The unified Parkinson's disease rating scale (UPDRS) is the most widely used tool to rate the severity and the stage of Parkinson's disease (PD). However, the mentation, behavior and mood (MBM) subscale of the UPDRS has received little investigation regarding its validity and sensitivity. Three items of this subscale were compared to criterion tests to examine validity, sensitivity and specificity. Ninety-seven patients with idiopathic PD were assessed on the UPDRS. Scores on three items of the MBM subscale, intellectual impairment, thought disorder and depression, were compared to criterion tests, the telephone interview for cognition status (TICS), psychiatric assessment for psychosis and the geriatric depression scale (GDS). Non-parametric tests of association were performed to examine concurrent validity of the MBM items. The sensitivities, specificities and optimal cutoff scores for each MBM item were estimated by receiver operating characteristic (ROC) curve analysis. The MBM items demonstrated low to moderate correlation with the criterion tests, and the sensitivity and specificity were not strong. Even using a score of 7.0 on the items of the MBM demonstrated a sensitivity/specificity of only 0.19/0.48 for intellectual impairment, 0.60/0.72 for thought disorder and 0.61/0.87 for depression. Using a more appropriate cutoff of 2.0 revealed sensitivities of 0.01, 0.38 and 0.13 respectively. The MBM subscale items of intellectual impairment, thought disorder and depression are not appropriate for screening or diagnostic purposes. Tools such as the TICS and the GDS should be considered instead.
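The ROC-based estimation of optimal cutoffs, sensitivities and specificities can be sketched generically as below (toy ordinal item scores and criterion labels are assumptions; the Youden index is used here as one common way to pick a cutoff):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Toy data: a 0-4 ordinal rating-scale item and a binary criterion diagnosis
rng = np.random.default_rng(0)
criterion = rng.binomial(1, 0.3, 200)                       # 1 = positive on criterion test
item_score = np.clip(criterion * 1.2 + rng.normal(1.0, 1.0, 200), 0, 4).round()

fpr, tpr, thresholds = roc_curve(criterion, item_score)
best = np.argmax(tpr - fpr)                                 # Youden index J = sens + spec - 1
print(f"AUC = {roc_auc_score(criterion, item_score):.2f}")
print(f"optimal cutoff = {thresholds[best]}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")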
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal correlation arises from the repeated measures, and the spatial correlation from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study of mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure, as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to inflate the Type I error rate or produce an overly conservative test size, which corresponded to decreased power. An example with clinical data is given, illustrating how the covariance structure selection procedure can be done in practice and how the choice of covariance structure can change inferences about fixed effects. PMID:25293361
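Separability means the full covariance is the Kronecker product of a spatial and a temporal correlation matrix; a minimal sketch constructing an exponential-by-AR(1) structure and evaluating one subject's log-likelihood under it, which is the quantity that information criteria compare across candidate structures (dimensions and parameter values are illustrative):

import numpy as np
from scipy.stats import multivariate_normal

def exp_spatial(dists, phi):
    """Exponential spatial correlation."""
    return np.exp(-dists / phi)

def ar1_temporal(n_times, rho):
    """AR(1) temporal correlation."""
    idx = np.arange(n_times)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Toy setting: 4 spatial locations observed at 6 visits per subject
coords = np.array([[0, 0], [1, 0], [0, 1], [2, 2]], float)
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
R_space = exp_spatial(dists, phi=1.5)
R_time = ar1_temporal(6, rho=0.6)
sigma2 = 2.0
Sigma = sigma2 * np.kron(R_space, R_time)     # separable spatiotemporal covariance

# Log-likelihood of one subject's stacked (space x time) residual vector under
# this structure; comparing such likelihoods via AIC/BIC across candidate
# spatial/temporal pairs is the selection step discussed above.
rng = np.random.default_rng(0)
y = rng.multivariate_normal(np.zeros(24), Sigma)
ll = multivariate_normal(mean=np.zeros(24), cov=Sigma).logpdf(y)
print("log-likelihood under the exponential x AR(1) structure:", round(ll, 2))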
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino
2017-03-01
Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall while reproducing its statistics at spatial and temporal resolutions suitable for running hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in the estimation of extreme rainfall. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily raingauge measurements from a dense network on the island of Sardinia (Italy) and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independent of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
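A minimal sketch of the two-component fit (exponential below the threshold u*, generalized Pareto above) and the resulting parametric quantile-mapping transfer function, on toy wet-day intensities; the threshold, sample sizes and distribution parameters are assumptions, and the kriging-based spatial interpolation of parameters is not shown.

import numpy as np
from scipy import stats

def fit_two_component(x, u):
    """Fit the two-component model: exponential below the threshold u,
    generalized Pareto above it (threshold u chosen beforehand)."""
    below, above = x[x <= u], x[x > u]
    lam = 1.0 / below.mean()                               # rough moment estimate of the rate
    c, _, scale = stats.genpareto.fit(above - u, floc=0)   # GP shape/scale for the excesses
    p_exceed = above.size / x.size
    return lam, c, scale, p_exceed

def cdf_two_component(x, u, lam, c, scale, p_exceed):
    x = np.asarray(x, float)
    low = (1 - p_exceed) * stats.expon.cdf(x, scale=1 / lam) / stats.expon.cdf(u, scale=1 / lam)
    high = (1 - p_exceed) + p_exceed * stats.genpareto.cdf(x - u, c, scale=scale)
    return np.where(x <= u, low, high)

# Toy wet-day intensities (mm/day): the "observed" series is heavier-tailed than the "model"
obs = stats.genpareto.rvs(0.15, scale=8.0, size=3000, random_state=1)
mod = stats.genpareto.rvs(0.05, scale=6.0, size=3000, random_state=2)
u = 20.0
par_obs, par_mod = fit_two_component(obs, u), fit_two_component(mod, u)

def bias_correct(x_mod):
    """Parametric quantile mapping: F_obs^-1(F_mod(x))."""
    p = cdf_two_component(x_mod, u, *par_mod)
    grid = np.linspace(0.01, 300, 20000)                   # numerical inversion of F_obs
    cdf_grid = cdf_two_component(grid, u, *par_obs)
    return np.interp(p, cdf_grid, grid)

print("model 50 mm/day maps to", round(float(bias_correct(50.0)), 1), "mm/day")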
Cosmological constraints on interacting light particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brust, Christopher; Cui, Yanou; Sigurdson, Kris, E-mail: cbrust@perimeterinstitute.ca, E-mail: yanou.cui@ucr.edu, E-mail: krs@phas.ubc.ca
2017-08-01
Cosmological observations are becoming increasingly sensitive to the effects of light particles in the form of dark radiation (DR) at the time of recombination. The conventional observable of effective neutrino number, N_eff, is insufficient for probing generic, interacting models of DR. In this work, we perform likelihood analyses which allow both free-streaming effective neutrinos (parametrized by N_eff) and interacting effective neutrinos (parametrized by N_fld). We motivate an alternative parametrization of DR in terms of N_tot (total effective number of neutrinos) and f_fs (the fraction of effective neutrinos which are free-streaming), which is less degenerate than using N_eff and N_fld. Using the Planck 2015 likelihoods in conjunction with measurements of baryon acoustic oscillations (BAO), we find constraints on the total amount of beyond the Standard Model effective neutrinos (both free-streaming and interacting) of ΔN_tot < 0.39 at 2σ. In addition, we consider the possibility that this scenario alleviates the tensions between early-time and late-time cosmological observations, in particular the measurements of σ_8 (the amplitude of matter power fluctuations at 8 h⁻¹ Mpc), finding a mild preference for interactions among light species. We further forecast the sensitivities of a variety of future experiments, including Advanced ACTPol (a representative CMB Stage-III experiment), CMB Stage-IV, and the Euclid satellite. This study is relevant for probing non-standard neutrino physics as well as a wide variety of new particle physics models beyond the Standard Model that involve dark radiation.
NASA Astrophysics Data System (ADS)
Tresser, Shachar; Dolev, Amit; Bucher, Izhak
2018-02-01
High-speed machinery is often designed to pass several "critical speeds", where vibration levels can be very high. To reduce vibrations, rotors usually undergo a mass balancing process, where the machine is rotated at its full speed range, during which the dynamic response near critical speeds can be measured. High sensitivity, which is required for a successful balancing process, is achieved near the critical speeds, where a single deflection mode shape becomes dominant, and is excited by the projection of the imbalance on it. The requirement to rotate the machine at high speeds is an obstacle in many cases, where it is impossible to perform measurements at high speeds, due to harsh conditions such as high temperatures and inaccessibility (e.g., jet engines). This paper proposes a novel balancing method for flexible rotors, which does not require the machine to be rotated at high speeds. With this method, the rotor is spun at low speeds, while subjecting it to a set of externally controlled forces. The external forces comprise a set of tuned, response dependent, parametric excitations, and nonlinear stiffness terms. The parametric excitation can isolate any desired mode, while keeping the response directly linked to the imbalance. A software controlled nonlinear stiffness term limits the response, hence preventing the rotor from becoming unstable. These forces provide the sensitivity required to detect the projection of the imbalance on any desired mode without rotating the machine at high speeds. Analytical, numerical and experimental results are shown to validate and demonstrate the method.
Advanced Technology Lifecycle Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Mankins, John C.
2004-01-01
Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts into system oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst specified parameters to the models and data among the models. An integrator workbook calls a history based parametric analysis cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is satisfied with the system configurations, technology portfolios, and deployment strategies, he or she can present the concepts to a team, which will conduct a detailed, discipline-oriented analysis within a CEE. An analog to this approach is the music industry, where a songwriter creates the lyrics and music before entering a recording studio.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
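The two-loop structure described above can be sketched as follows; the vital rates, their uncertainty, and the horizon are invented for illustration and are not the piping plover estimates. Parametric uncertainty is drawn once per replicate in the outer loop, while temporal (environmental and demographic) variation is drawn every year in the inner loop:

```python
import numpy as np

rng = np.random.default_rng(42)
n_reps, n_years, n0 = 1000, 50, 200     # replicates, projection horizon, initial abundance
extinct = 0

for _ in range(n_reps):
    # Outer loop: draw the mean growth rate from its sampling (parametric) uncertainty
    mean_lambda = rng.normal(loc=0.98, scale=0.03)       # assumed estimate and SE
    n = n0
    for _ in range(n_years):
        # Inner loop: temporal (environmental) variation around the drawn mean
        lam_t = rng.lognormal(mean=np.log(max(mean_lambda, 1e-6)), sigma=0.10)
        n = rng.poisson(n * lam_t)                        # demographic stochasticity
        if n == 0:
            extinct += 1
            break

print(f"Estimated extinction probability: {extinct / n_reps:.3f}")
```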
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity hides in the parametric uncertainties, which are difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and the bound of the unknown parameter simultaneously. Lyapunov-based analysis is given to demonstrate that a finite number of switchings can guarantee asymptotic tracking for the nonlinear parametric systems. The simulation result also shows the efficacy of the proposed switching periodic adaptive control approach.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-06-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter trade-off, arising from the simultaneous variations of different physical parameters, which increase the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parametrization and acquisition arrangement. An appropriate choice of model parametrization is important to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parametrizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) data for unconventional heavy oil reservoir characterization. Six model parametrizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′) and velocity-impedance-II (α″, β″ and I_S′). We begin analysing the interparameter trade-off by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. We discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter trade-offs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter trade-offs for various model parametrizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parametrization, the inverted density profile can be overestimated, underestimated or spatially distorted. Among the six cases, only the velocity-density parametrization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
The representation of object viewpoint in human visual cortex.
Andresen, David R; Vinberg, Joakim; Grill-Spector, Kalanit
2009-04-01
Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°-180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradigms, object-selective regions recovered from adaptation when a rotated view of an object was shown after adaptation to a specific view of that object, suggesting that representations are sensitive to object rotation. However, we found evidence for differential representations across categories and ventral stream regions. Rotation cross-adaptation was larger for animals than vehicles, suggesting higher sensitivity to vehicle than animal rotation, and was largest in the left fusiform/occipito-temporal sulcus (pFUS/OTS), suggesting that this region has low sensitivity to rotation. Moreover, right pFUS/OTS and FFA responded more strongly to front than back views of animals (without adaptation) and rotation cross-adaptation depended both on the level of rotation and the adapting view. This result suggests a prevalence of neurons that prefer frontal views of animals in fusiform regions. Using a computational model of view-tuned neurons, we demonstrate that differential neural view tuning widths and relative distributions of neural-tuned populations in fMRI voxels can explain the fMRI results. Overall, our findings underscore the utility of parametric approaches for studying the neural basis of object invariance and suggest that there is no complete invariance to object view in the human ventral stream.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors choose a treatment on the basis of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system comprises three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module includes parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of the model parameters. The finite element mechanical analysis module includes mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module includes extraction and display of the batch processing results, image generation for the batch runs, execution of the optimization program, and display of the optimal result. The system carries out the whole workflow from input of a specific patient's fracture parameters to output of the optimal fixation plan according to the optimization rules, which demonstrates the effectiveness of the system. The system also has a friendly interface and simple operation, and its functionality can be extended quickly by modifying a single module.
2010-02-01
Several important issues associated with the proposed parametric NS-AR model are discussed, including model order selection, training screening, and time-series based whitening.
USDA-ARS?s Scientific Manuscript database
This study reports the use of crude glycerine from biodiesel production in the glycerolysis process and presents the associated parametric and energy analyses. The potential of glycerolysis as an alternative pretreatment method for high free fatty acid (FFA) containing fats, oils and greases (FOGs) ...
1980-06-01
problems, a parametric model was built which uses the TI-59 programmable calculator as its vehicle. Although the calculator has many disadvantages for... previous experience using the TI-59 programmable calculator. For example, explicit instructions for reading cards into the memory set will not be given.
NASA Technical Reports Server (NTRS)
Haj-Ali, Rami; Aboudi, Jacob
2012-01-01
The recent two-dimensional (2-D) parametric formulation of the high fidelity generalized method of cells (HFGMC) reported by the authors is generalized for the micromechanical analysis of three-dimensional (3-D) multiphase composites with periodic microstructure. Arbitrary hexahedral subcell geometry is developed to discretize a triply periodic repeating unit-cell (RUC). Linear parametric-geometric mapping is employed to transform the arbitrary hexahedral subcell shapes from the physical space to an auxiliary orthogonal shape, where a complete quadratic displacement expansion is performed. Previously, in the 2-D case, three additional equations were needed in the form of average moments of equilibrium as a result of the inclusion of the bilinear terms. However, the present 3-D parametric HFGMC formulation eliminates the need for such additional equations. This is achieved by expressing the coefficients of the full quadratic polynomial expansion of the subcell in terms of the side or face average-displacement vectors. The 2-D parametric and orthogonal HFGMC are special cases of the present 3-D formulation. The continuity of displacements and tractions, as well as the equilibrium equations, are imposed in the average (integral) sense as in the original HFGMC formulation. Each of the six sides (faces) of a subcell has an independent average displacement micro-variable vector which forms an energy-conjugate pair with the transformed average-traction vector. This allows generating symmetric stiffness matrices along with internal resisting vectors for the subcells, which enhances the computational efficiency. The established new parametric 3-D HFGMC equations are formulated and solution implementations are addressed. Several applications for triply periodic 3-D composites are presented to demonstrate the general capability and versatility of the present parametric HFGMC method for refined micromechanical analysis by generating the spatial distributions of local stress fields. These applications include triply periodic composites with inclusions in the form of a cavity, a spherical inclusion, an ellipsoidal inclusion, and a discontinuous aligned short fiber. A 3-D repeating unit-cell for a foam material composite is simulated.
KRASH Parametric Sensitivity Study - Transport Category Airplanes
1987-12-01
[Figure residue from the report: Figure 3-65, "DC-7 Test, Measured Acceleration"; pilot and copilot pelvis vertical acceleration traces plotted against incremental velocity change (AV), ft/sec.]
Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2011-01-01
Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the error positivity (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…
D.J. Nicolsky; V.E. Romanovsky; G.G. Panteleev
2008-01-01
A variational data assimilation algorithm is developed to reconstruct thermal properties, porosity, and the parametrization of the unfrozen water content for fully saturated soils. The algorithm is tested with simulated synthetic temperatures. The simulations are performed to determine the robustness and sensitivity of the algorithm in estimating soil properties from in-situ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.
2016-03-02
This document presents the large scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the "C-Ensemble" described by Qian et al., "Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5" (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are "AMIP type" with temperature and sea ice boundary conditions chosen to match surface observations for the five year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
Engine System Loads Development for the Fastrac 60K Flight Engine
NASA Technical Reports Server (NTRS)
Frady, Greg; Christensen, Eric R.; Mims, Katherine; Harris, Don; Parks, Russell; Brunty, Joseph
2000-01-01
Early implementation of structural dynamics finite element analyses for the calculation of design loads is considered common design practice in high volume manufacturing industries such as the automotive and aeronautical industries. However, given the rarity of rocket engine development program starts, these tools are relatively new to the design of rocket engines. In the new Fastrac engine program, the focus has been on reducing the cost-to-weight ratio; current structural dynamics analysis practices were tailored in order to meet both production and structural design goals. Perturbation of rocket engine design parameters resulted in a number of Fastrac load cycles necessary to characterize the impact due to mass and stiffness changes. The evolution of loads and load extraction methodologies, parametric considerations, and load path sensitivities are discussed.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
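A minimal sketch of the linear dynamical system idea (the job functions, transition likelihoods, and hiring rates below are hypothetical placeholders, not the report's calibrated values): headcounts by function evolve as x(t+1) = A x(t) + b, where A encodes the likelihood of job function transition and b represents hiring.

```python
import numpy as np

# Hypothetical job functions within a parametric cost analysis organization
labels = ["data collection", "cost analysis", "model development"]

# Transition likelihoods (column j gives where staff in function j go next);
# column sums below 1 imply some attrition.
A = np.array([[0.80, 0.05, 0.02],
              [0.10, 0.85, 0.08],
              [0.02, 0.05, 0.85]])
b = np.array([4.0, 2.0, 1.0])       # assumed new hires per period, by function

x = np.array([20.0, 10.0, 5.0])     # initial staffing profile
for t in range(12):                 # simulate 12 periods
    x = A @ x + b

print(dict(zip(labels, np.round(x, 1))))
```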
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
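For concreteness, here is a sketch of the daily average method mentioned above, using a simple horizontal cutoff at the upper threshold; the thresholds and temperatures are illustrative only:

```python
def degree_days_avg(t_min, t_max, lower=10.0, upper=30.0):
    """Daily degree-days by the daily average method: the daily mean
    temperature is clipped to the lower/upper developmental thresholds
    (horizontal cutoff) before subtracting the lower threshold."""
    t_avg = (t_min + t_max) / 2.0
    t_eff = min(max(t_avg, lower), upper)   # clip to thresholds
    return t_eff - lower

# Accumulate degree-days over a short synthetic temperature record
daily = [(8, 18), (12, 24), (15, 31), (9, 15)]   # (t_min, t_max) pairs, deg C
total = sum(degree_days_avg(tmin, tmax) for tmin, tmax in daily)
print(f"Accumulated degree-days: {total:.1f}")
```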
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
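As a generic illustration of how a diagonal preconditioner of the kind described above enters the iteration, here is a plain preconditioned conjugate gradient solver for a symmetric positive definite system (this is not the authors' PET reconstruction code; the Jacobi-style preconditioner below merely stands in for their parameter-to-sensitivity ratio):

```python
import numpy as np

def pcg(A, b, M_inv_diag, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for A x = b with a diagonal
    preconditioner; M_inv_diag holds the inverse diagonal entries."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small synthetic SPD system with a Jacobi (inverse-diagonal) preconditioner
rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual norm, should be near zero
```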
Revisiting dark energy models using differential ages of galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha
In this work, we use a test based on the differential ages of galaxies for distinguishing the dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H_0 dt/dz and its derivative (H_0 d²t/dz²) using a model independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift which is directly linked with the Hubble parameter. Hence for reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H_0 dt/dz and its derivative for several models like Phantom, Einstein de Sitter (EdS), ΛCDM, Chevallier-Polarski-Linder (CPL) parametrization, Jassal-Bagla-Padmanabhan (JBP) parametrization and Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of reconstruction obtained in a model independent way from the data. It is observed that H_0 dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H_0 dt/dz for various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results, however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
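The reconstructed quantity follows directly from the expansion history: since dz/dt = -(1+z)H(z), one has H_0 dt/dz = -1/[(1+z)E(z)] with E(z) = H(z)/H_0. A short sketch comparing a flat ΛCDM model with EdS (standard textbook expressions; the density parameters are illustrative):

```python
import numpy as np

def H0_dt_dz(z, Om=0.3, OL=0.7):
    """H_0 dt/dz = -1 / [(1+z) E(z)] for a flat Lambda-CDM model."""
    E = np.sqrt(Om * (1 + z) ** 3 + OL)
    return -1.0 / ((1 + z) * E)

def H0_dt_dz_EdS(z):
    """Einstein-de Sitter limit (Om = 1, OL = 0): E(z) = (1+z)**1.5."""
    return -1.0 / (1 + z) ** 2.5

z = np.linspace(0.1, 2.0, 5)
print(np.round(H0_dt_dz(z), 4))      # Lambda-CDM values
print(np.round(H0_dt_dz_EdS(z), 4))  # EdS deviates noticeably at low z
```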
Zilverstand, Anna; Sorger, Bettina; Kaemingk, Anita; Goebel, Rainer
2017-06-01
We employed a novel parametric spider picture set in the context of a parametric fMRI anxiety provocation study, designed to tease apart brain regions involved in threat monitoring from regions representing an exaggerated anxiety response in spider phobics. For the stimulus set, we systematically manipulated perceived proximity of threat by varying a depicted spider's context, size, and posture. All stimuli were validated in a behavioral rating study (phobics n = 20; controls n = 20; all female). An independent group participated in a subsequent fMRI anxiety provocation study (phobics n = 7; controls n = 7; all female), in which we compared a whole-brain categorical to a whole-brain parametric analysis. Results demonstrated that the parametric analysis provided a richer characterization of the functional role of the involved brain networks. In three brain regions-the mid insula, the dorsal anterior cingulate, and the ventrolateral prefrontal cortex-activation was linearly modulated by perceived proximity specifically in the spider phobia group, indicating a quantitative representation of an exaggerated anxiety response. In other regions (e.g., the amygdala), activation was linearly modulated in both groups, suggesting a functional role in threat monitoring. Prefrontal regions, such as dorsolateral prefrontal cortex, were activated during anxiety provocation but did not show a stimulus-dependent linear modulation in either group. The results confirm that brain regions involved in anxiety processing hold a quantitative representation of a pathological anxiety response and more generally suggest that parametric fMRI designs may be a very powerful tool for clinical research in the future, particularly when developing novel brain-based interventions (e.g., neurofeedback training). Hum Brain Mapp 38:3025-3038, 2017. © 2017 Wiley Periodicals, Inc.
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
1996-01-01
We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant coefficient ODEs which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same. There are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODEs is given in parametric form, so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
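A rough sketch of the per-cell computation, for a hypothetical 2-D linear field; note that for the edge intersection this sketch uses simple grid bracketing plus bisection rather than the parametric/implicit substitution approaches discussed above:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical linear field inside one cell: v(x) = J @ x + c, with J invertible
J = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
c = np.array([0.2, -0.1])
x_star = -np.linalg.solve(J, c)              # critical point of the local field

def tangent_curve(x0, t):
    """Closed-form solution of dx/dt = J x + c with x(0) = x0."""
    return x_star + expm(J * t) @ (x0 - x_star)

def exit_time(x0, n, d, t_max=10.0, n_samples=200, iters=60):
    """First crossing of the edge n . x = d along the analytic curve:
    bracket a sign change on a coarse grid, then refine by bisection."""
    f = lambda t: n @ tangent_curve(x0, t) - d
    ts = np.linspace(0.0, t_max, n_samples)
    fs = np.array([f(t) for t in ts])
    idx = np.where(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]
    if idx.size == 0:
        return None                          # no crossing before t_max
    lo, hi = ts[idx[0]], ts[idx[0] + 1]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x0 = np.array([1.0, 0.0])
t_exit = exit_time(x0, n=np.array([0.0, 1.0]), d=-0.5)   # edge y = -0.5
if t_exit is not None:
    print(f"exit at t = {t_exit:.4f}, point = {tangent_curve(x0, t_exit)}")
```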
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance D_L(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ² or use principal component analysis thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w_0 + w_a [z/(1+z)]^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
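Evaluating the proposed family is straightforward; the sketch below uses illustrative values of w_0 and w_a and shows the CPL case n = 1 alongside larger n:

```python
import numpy as np

def w_of_z(z, w0=-1.0, wa=0.3, n=1):
    """Generalized parametrization w(z) = w0 + wa * (z / (1 + z))**n; n = 1 is CPL."""
    return w0 + wa * (z / (1.0 + z)) ** n

z = np.linspace(0.0, 3.0, 7)
for n in (1, 2, 4):
    print(n, np.round(w_of_z(z, n=n), 3))
```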
Novel parametric reduced order model for aeroengine blade dynamics
NASA Astrophysics Data System (ADS)
Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis
2015-10-01
The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of relative errors associated to the resonance frequencies, the individual modal assurance criteria (MAC), and on either overall static or modal masses. When static masses are considered the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When using modal masses in the cost function the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are used in the sensitivity.
Pluripotency gene network dynamics: System views from parametric analysis.
Akberdin, Ilya R; Omelyanchuk, Nadezda A; Fadeev, Stanislav I; Leskova, Natalya E; Oschepkova, Evgeniya A; Kazantsev, Fedor V; Matushkin, Yury G; Afonnikov, Dmitry A; Kolchanov, Nikolay A
2018-01-01
Multiple experimental data demonstrated that the core gene network orchestrating self-renewal and differentiation of mouse embryonic stem cells involves activity of Oct4, Sox2 and Nanog genes by means of a number of positive feedback loops among them. However, recent studies indicated that the architecture of the core gene network should also incorporate negative Nanog autoregulation and might not include positive feedbacks from Nanog to Oct4 and Sox2. Thorough parametric analysis of the mathematical model based on this revisited core regulatory circuit identified substantial changes in model dynamics depending on the strength of Oct4 and Sox2 activation and the molecular complexity of Nanog autorepression. The analysis showed the existence of four dynamical domains with different numbers of stable and unstable steady states. We hypothesize that these domains can constitute the checkpoints in a developmental progression from naïve to primed pluripotency and vice versa. During this transition, parametric conditions exist which generate an oscillatory behavior of the system, explaining heterogeneity in expression of pluripotent and differentiation factors in serum ESC cultures. Eventually, simulations showed that addition of positive feedbacks from Nanog to Oct4 and Sox2 leads mainly to an increase of the parametric space for the naïve ESC state, in which pluripotency factors are strongly expressed while differentiation ones are repressed.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
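A minimal sketch of the trial-continuous idea on synthetic data; the exponential threshold decay and the cumulative-normal psychometric form below are assumptions for illustration, not the authors' exact model. All parameters are fit by maximum likelihood over individual trials rather than blocks:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n_trials = 800
trials = np.arange(n_trials)
stim = rng.uniform(0.0, 2.0, size=n_trials)          # stimulus intensities

def threshold(t, a0, a_inf, rate):
    """Assumed learning curve: threshold decays exponentially with practice."""
    return a_inf + (a0 - a_inf) * np.exp(-rate * t)

# Simulate responses from a ground-truth observer
true = dict(a0=1.4, a_inf=0.6, rate=0.01, slope=4.0)
p_true = norm.cdf(true["slope"] * (stim - threshold(trials, true["a0"], true["a_inf"], true["rate"])))
resp = rng.uniform(size=n_trials) < p_true            # binary correct/incorrect

def neg_log_lik(params):
    a0, a_inf, rate, slope = params
    p = norm.cdf(slope * (stim - threshold(trials, a0, a_inf, rate)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(np.where(resp, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[1.0, 0.5, 0.02, 2.0], method="Nelder-Mead")
print(np.round(fit.x, 3))   # recovered (a0, a_inf, rate, slope)
```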
Creating A Data Base For Design Of An Impeller
NASA Technical Reports Server (NTRS)
Prueger, George H.; Chen, Wei-Chung
1993-01-01
Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for the particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
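The 4,374-case figure is consistent with a full factorial over one two-level and seven three-level factors (2 × 3^7 = 4,374), the factor structure spanned by Taguchi's standard L18 orthogonal array; this factor structure is inferred from the numbers and is not stated in the brief. A quick check of the arithmetic:

```python
from itertools import product

# Assumed factor structure: one 2-level factor and seven 3-level factors
levels = [2] + [3] * 7
full_factorial = list(product(*[range(k) for k in levels]))
print(len(full_factorial))   # 4374 candidate designs in the full factorial
# Taguchi's L18(2^1 x 3^7) orthogonal array spans this space with only 18 runs.
```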
Nonlinear Analysis of Mechanical Systems Under Combined Harmonic and Stochastic Excitation
1993-05-27
Namachchivaya, N. Sri; Malhotra, Naresh (Department of Aeronautical and Astronautical Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois)
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-01-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charbonneau-Lefort, Mathieu; Afeyan, Bedros; Fejer, M. M.
Optical parametric amplifiers using chirped quasi-phase-matching (QPM) gratings offer the possibility of engineering the gain and group delay spectra. We give practical formulas for the design of such amplifiers. We consider linearly chirped QPM gratings providing constant gain over a broad bandwidth, sinusoidally modulated profiles for selective frequency amplification and a pair of QPM gratings working in tandem to ensure constant gain and constant group delay at the same time across the spectrum. Finally, the analysis is carried out in the frequency domain using Wentzel–Kramers–Brillouin analysis.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
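As a sketch of how a single-variable aperture model of the kind mentioned above is typically obtained, fit a power law cost = a·D^b by least squares in log-log space; the diameters, costs, and resulting exponent below are made up for illustration and are not the paper's data or results:

```python
import numpy as np

# Hypothetical (diameter in meters, cost in $M) pairs -- illustrative only
D = np.array([2.3, 3.5, 4.2, 6.5, 8.1, 10.0])
cost = np.array([12.0, 35.0, 60.0, 180.0, 320.0, 550.0])

# Linear fit in log-log space: slope = exponent b, intercept = log(a)
b, log_a = np.polyfit(np.log(D), np.log(cost), 1)
print(f"cost ~ {np.exp(log_a):.1f} * D^{b:.2f}  ($M, illustrative fit)")
```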
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
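A small illustration of how the choice of computing procedure matters for unbalanced data (synthetic two-factor data with unequal cell counts; sequential Type I versus partial Type II sums of squares via statsmodels stand in for the standard procedures discussed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
# Unbalanced two-factor layout: unequal cell counts, no empty cells
rows = []
for a, b, n in [("a1", "b1", 12), ("a1", "b2", 4), ("a2", "b1", 5), ("a2", "b2", 10)]:
    effect = {"a1": 0.0, "a2": 1.0}[a] + {"b1": 0.0, "b2": 0.5}[b]
    rows += [{"A": a, "B": b, "y": effect + rng.normal()} for _ in range(n)]
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))   # sequential (order-dependent) sums of squares
print(sm.stats.anova_lm(model, typ=2))   # partial (Type II) sums of squares
```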
Sivasankar, P; Rajesh Kanna, A; Suresh Kumar, G; Gummadi, Sathyanarayana N
2016-07-01
The pH and residence time of the injected slug play a critical role in characterizing the reservoir for potential microbial enhanced oil recovery (MEOR) application. To investigate MEOR processes, a multispecies (microbes-nutrients) reactive transport model in porous media was developed by coupling a kinetic model and a transport model. The present work differs from earlier works by explicitly determining the parametric values required for the kinetic model through experimental investigations using Pseudomonas putida at different pH conditions and subsequently performing sensitivity analysis of pH, residence time and water saturation on the concentrations of microbes, nutrients and biosurfactant within the reservoir. The results suggest that nutrient utilization and biosurfactant production are maximum at pH 8 and 7.5, respectively. It is also found that the sucrose and biosurfactant concentrations are highly sensitive to pH rather than to the reservoir microbial concentration, while at larger residence times and water saturations, the microbial and nutrient concentrations were lower due to enhanced dispersion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Definition study for photovoltaic residential prototype system
NASA Technical Reports Server (NTRS)
Shepard, N. F.; Landes, R.; Kornrumpf, W. P.
1976-01-01
A site evaluation was performed to assess the relative merits of different regions of the country in terms of the suitability for experimental photovoltaic powered residences. Eight sites were selected based on evaluation criteria which included population, photovoltaic systems performance and the cost of electrical energy. A parametric sensitivity analysis was performed for four selected site locations. Analytical models were developed for four different power system implementation approaches. Using the model which represents a direct (or float) charge system implementation the performance sensitivity to the following parameter variations is reported: (1) solar roof slope angle; (2) ratio of the number of series cells in the solar array to the number of series cells in the lead-acid battery; and (3) battery size. For a Cleveland site location, a system with no on site energy storage and with a maximum power tracking inverter which feeds back excess power to the utility was shown to have 19 percent greater net system output than the second place system. The experiment test plan is described. The load control and data acquisition system and the data display panel for the residence are discussed.
NASA Astrophysics Data System (ADS)
Segalini, Andrea; Ferrero, Anna Maria; Brighenti, Roberto
2013-04-01
A channelized debris flow is usually represented by a mixture of solid particles of various sizes and water, flowing along a laterally confined inclined channel-shaped region up to an unconfined area where it slows down and spreads out into a flat-shaped mass. The study of these phenomena is very difficult due to their short duration and unpredictability, the lack of historical data for a given basin, and the complexity of the mechanical phenomena involved. Post-event surveys allow for the identification of some depositional features and provide an indication of the maximum flow height; however, they lack information about the development of the phenomena over time. For this purpose, the monitoring of recursive events has been carried out by several authors. Most of the studies aimed at determining the characteristic features of a debris flow were carried out in artificial channels, where the main variables involved were measured and others were controlled during the tests; however, some uncertainties remained, and other scaled models were developed to simulate the deposition mechanics as well as to analyze the transport mechanics and the energy dissipation. The assessment of the mechanical behavior of protection structures upon impact with the flow, as well as of the energy associated with it, is necessary for the proper design of such structures which, in densely populated areas, can avoid victims and limit the destructive effects of such a phenomenon. In this work a simplified structural model, developed by the authors for the safety assessment of retention barriers against channelized debris flow, is presented, and some parametric cases are interpreted through the proposed approach; this model is developed as a simplified and efficient tool to be used for the verification of the supporting cables and foundations of a flexible debris flow barrier. The present analytical and numerical approach has a different aim from a FEM model. Computational experience with FEM modeling of this kind of structure has shown that a large amount of time is necessary both for the geometrical setup of the model and for its computation. The big effort required by FEM for this class of problems limits the actual possibility to investigate different geometrical configurations, load schemes, etc.; FEM is suitable for representing a specific configuration but does not allow investigation of the influence of parameter changes. On the other hand, parametric analyses are common practice in geotechnical design for the quoted reasons. Consequently, the authors felt the need to develop a simplified method (not yet available, to our knowledge) that allows several parametric analyses to be performed in a limited time. It should be noted that, in this paper, no considerations regarding the mechanical and physical behavior of debris flows are made; the proposed model requires the input of parameters that must be acquired through a preliminary characterization of the design event. However, adopting the proposed tool, the designer will be able to perform sensitivity analyses that will help quantify the influence of parameter variability, as commonly occurs in geotechnical design.
Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer's disease.
Lee, Spencer A W; Sposato, Luciano A; Hachinski, Vladimir; Cipriano, Lauren E
2017-03-16
Accurate and timely diagnosis of Alzheimer's disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parametric values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective. Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers.
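The ICER logic above can be sketched with a deliberately simplified two-strategy cohort model; every transition probability, cost, and utility below is an invented placeholder, not an input from the study:

```python
import numpy as np

def cohort_run(p_progress, cycles=40, cost_cycle=2000.0, cost_test=0.0,
               utility=(0.75, 0.40), cost_state=(3000.0, 12000.0), disc=0.03):
    """Toy 3-state Markov cohort (stable, progressed, dead); returns
    discounted total cost and QALYs per patient."""
    # Transition matrix rows: stable, progressed, dead (row-stochastic)
    P = np.array([[1 - p_progress - 0.02, p_progress, 0.02],
                  [0.0,                   0.90,       0.10],
                  [0.0,                   0.0,        1.0]])
    x = np.array([1.0, 0.0, 0.0])
    cost, qaly = cost_test, 0.0
    for t in range(cycles):
        d = 1.0 / (1 + disc) ** t
        cost += d * (x[0] * cost_state[0] + x[1] * cost_state[1] + cost_cycle * (x[0] + x[1]))
        qaly += d * (x[0] * utility[0] + x[1] * utility[1])
        x = x @ P
    return cost, qaly

# Strategy A: no biomarker test; Strategy B: test (extra upfront cost, slower progression)
c_a, q_a = cohort_run(p_progress=0.12)
c_b, q_b = cohort_run(p_progress=0.09, cost_test=1500.0)
icer = (c_b - c_a) / (q_b - q_a)
print(f"ICER = ${icer:,.0f} per QALY gained (illustrative)")
```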
Evolution of spherical cavitation bubbles: Parametric and closed-form solutions
NASA Astrophysics Data System (ADS)
Mancas, Stefan C.; Rosu, Haret C.
2016-02-01
We present an analysis of the Rayleigh-Plesset equation for a three dimensional vacuous bubble in water. In the simplest case when the effects of surface tension are neglected, the known parametric solutions for the radius and time evolution of the bubble in terms of a hypergeometric function are briefly reviewed. By including the surface tension, we show the connection between the Rayleigh-Plesset equation and Abel's equation, and obtain the parametric rational Weierstrass periodic solutions following the Abel route. In the same Abel approach, we also provide a discussion of the nonintegrable case of nonzero viscosity for which we perform a numerical integration.
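For the nonintegrable case of nonzero viscosity mentioned at the end, a standard numerical treatment is to rewrite the Rayleigh-Plesset equation as a first-order system and integrate it; the sketch below does this for a vacuous bubble with water-like constants (illustrative values, not tied to the paper's results):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water-like constants (SI units); illustrative only
rho, sigma, mu = 1000.0, 0.072, 1.0e-3      # density, surface tension, viscosity
p_inf = 101325.0                            # far-field pressure (vacuous bubble: internal pressure = 0)

def rp_rhs(t, y):
    """Rayleigh-Plesset as a first-order system, y = (R, dR/dt)."""
    R, Rdot = y
    Rddot = (-p_inf / rho - 2 * sigma / (rho * R)
             - 4 * mu * Rdot / (rho * R) - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

R0 = 1.0e-3                                  # initial radius: 1 mm, bubble initially at rest
# Integrate up to 80 microseconds; for these values the Rayleigh collapse
# time is roughly 91 microseconds, beyond which R shrinks rapidly toward zero.
sol = solve_ivp(rp_rhs, (0.0, 8.0e-5), [R0, 0.0], max_step=1e-7, rtol=1e-8)
print(f"Radius after {sol.t[-1]*1e6:.1f} microseconds: {sol.y[0, -1]*1e3:.3f} mm")
```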
Analysis of a Rocket Based Combined Cycle Engine during Rocket Only Operation
NASA Technical Reports Server (NTRS)
Smith, T. D.; Steffen, C. J., Jr.; Yungster, S.; Keller, D. J.
1998-01-01
The all rocket mode of operation is a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. However, outside of performing experiments or a full three dimensional analysis, there are no first order parametric models to estimate performance. As a result, an axisymmetric RBCC engine was used to analytically determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and statistical regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, percent of injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inject diameter ratio. A perfect gas computational fluid dynamics analysis was performed to obtain values of vacuum specific impulse. Statistical regression analysis was performed based on both full flow and gas generator engine cycles. Results were also found to be dependent upon the entire cycle assumptions. The statistical regression analysis determined that there were five significant linear effects, six interactions, and one second-order effect. Two parametric models were created to provide performance assessments of an RBCC engine in the all rocket mode of operation.
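A second-order response-surface fit of the kind described above can be sketched as follows; the three coded factors and the synthetic response values are placeholders, not the study's CFD results:

import numpy as np

rng = np.random.default_rng(5)
n = 40
X = rng.uniform(-1, 1, (n, 3))   # 3 coded DOE factors (e.g. chamber pressure, area ratio, % secondary flow)
# Synthetic specific-impulse-efficiency response (placeholder, not CFD data).
y = (0.90 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.02 * X[:, 0] * X[:, 1]
     - 0.01 * X[:, 2] ** 2 + rng.normal(0.0, 0.005, n))

# Design matrix with intercept, linear, two-factor interaction and pure quadratic terms.
cols = [np.ones(n)] + [X[:, i] for i in range(3)]
cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
cols += [X[:, i] ** 2 for i in range(3)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted response-surface coefficients:", np.round(coef, 3))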
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction and motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.
George, Brandon; Aban, Inmaculada
2015-01-15
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
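A minimal sketch of how a separable parametric spatiotemporal covariance is assembled, here as the Kronecker product of an exponential spatial correlation and an AR(1) temporal correlation with assumed locations, visit times and parameter values:

import numpy as np

# Separable spatiotemporal covariance: Sigma = sigma2 * kron(R_time, R_space),
# with an exponential spatial correlation and an AR(1) temporal correlation.
# Locations, times, and parameter values below are illustrative assumptions.
coords = np.array([0.0, 1.0, 2.5, 4.0])          # spatial measurement locations (arbitrary units)
times = np.arange(5)                              # visit indices
sigma2, phi_space, rho_time = 2.0, 1.5, 0.6

d = np.abs(coords[:, None] - coords[None, :])
R_space = np.exp(-d / phi_space)                  # exponential spatial correlation
lag = np.abs(times[:, None] - times[None, :])
R_time = rho_time ** lag                          # AR(1) temporal correlation

Sigma = sigma2 * np.kron(R_time, R_space)         # (n_time*n_space) x (n_time*n_space)
print(Sigma.shape)                                # (20, 20)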
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^a, ln a, and 1/a for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
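The Monte Carlo check described above can be sketched for a simple exponential decay model; the model form, noise level and "true" parameter values are assumptions chosen for illustration:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
model = lambda x, a, b: a * np.exp(-b * x)
x = np.linspace(0, 2, 25)
a_true, b_true, noise = 10.0, 1.3, 0.2            # assumed "true" values

# Parametric SEs from one fit (square roots of the covariance-matrix diagonal).
y = model(x, a_true, b_true) + rng.normal(0, noise, x.size)
popt, pcov = curve_fit(model, x, y, p0=[1, 1])
se_param = np.sqrt(np.diag(pcov))

# Monte Carlo check: refit many synthetic data sets and look at the spread.
fits = []
for _ in range(1000):
    y_mc = model(x, a_true, b_true) + rng.normal(0, noise, x.size)
    p_mc, _ = curve_fit(model, x, y_mc, p0=[1, 1])
    fits.append(p_mc)
se_mc = np.std(fits, axis=0)

print("parametric SEs :", se_param)
print("Monte Carlo SEs:", se_mc)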
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2013-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions.
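A drastically simplified, one-covariate, identity-link sketch of the guided-fit idea (parametric guide plus nonparametric correction) is given below; the exponential guide, the kernel smoother and all numbers are illustrative assumptions, not the paper's estimator:

import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 3, 200))
f_true = lambda x: np.exp(0.8 * x) + 0.5 * np.sin(3 * x)   # assumed "true" regression function
y = f_true(x) + rng.normal(0, 0.3, x.size)

# Step 1: parametric guide g(x; theta) = exp(theta0 + theta1*x), fitted on the log scale.
theta1, theta0 = np.polyfit(x, np.log(np.clip(y, 1e-3, None)), 1)
guide = np.exp(theta0 + theta1 * x)

# Step 2: smooth the departure from the guide with a Nadaraya-Watson kernel smoother.
def kernel_smooth(x0, x, r, h=0.15):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

residual_fit = kernel_smooth(x, x, y - guide)

# Step 3: final guided estimate = parametric guide + nonparametric correction.
f_hat = guide + residual_fit
print("max abs error:", np.max(np.abs(f_hat - f_true(x))))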
ERIC Educational Resources Information Center
Pustejovsky, James E.
2018-01-01
A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap…
OBIST methodology incorporating modified sensitivity of pulses for active analogue filter components
NASA Astrophysics Data System (ADS)
Khade, R. H.; Chaudhari, D. S.
2018-03-01
In this paper, an oscillation-based built-in self-test method is used to diagnose catastrophic and parametric faults in integrated circuits. Sallen-Key low-pass filter and high-pass filter circuits with different gains are used to investigate defects. Variations in seven parameters of the operational amplifier (OP-AMP), namely gain, input impedance, output impedance, slew rate, input bias current, input offset current and input offset voltage, as well as catastrophic and parametric defects in components outside the OP-AMP, are introduced into the circuit and the simulation results are analysed. The oscillator output signal is converted to pulses which are used to generate a signature of the circuit. The signature and pulse count change with the type of fault present in the circuit under test (CUT). The change in oscillation frequency is observed for fault detection. The designer has the flexibility to predefine the tolerance band of the cut-off frequency and the range of pulse counts for which the circuit should be accepted. The fault coverage depends upon the required tolerance band of the CUT. We propose a modification of the sensitivity of the parameter (pulses) to avoid test escapes and enhance yield. Results show that the method provides 100% fault coverage for catastrophic faults.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation in a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time-domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is also applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel, non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sudha, M.; Radha, S.; Kirubaveni, S.; Kiruthika, R.; Govindaraj, R.; Santhosh, N.
2018-04-01
Nanocrystalline undoped (1Z) Zinc Oxide (ZnO) and 5, 10 and 15 wt.% (1ZN, 2ZN and 3ZN) Nickel-doped ZnO based sensors were fabricated using the hydrothermal approach on Fluorine-doped Tin Oxide (FTO) glass substrates. X-ray diffraction (XRD) analysis confirmed the hexagonal Wurtzite structure of ZnO. Parametric variations in terms of dislocation density, bond length, lattice parameters and micro strain with respect to dopant concentration were analysed. Prominent variations in the crystallite size, optical band gap and photoluminescence peak ratio of the fabricated devices were observed. The Field Emission Scanning Electron Microscope (FESEM) images showed a change in diameter and density of the nanorods. The effects of the operating temperature, the ethanol concentration and the different doping levels on the sensitivity, response time and recovery time were investigated. It was inferred that the 3ZN sensor, with a sensitivity of 376% and very quick response and recovery times of <5 s and 10 s respectively at 150 °C, performs better than the other three sensors. The 3ZN sensor also showed an improved sensitivity of 114% even at room temperature, with response and recovery times of 35 s and 45 s respectively.
NASA Technical Reports Server (NTRS)
Malpica, Carlos; Greenwood, Eric; Sim, Ben
2016-01-01
At the most fundamental level, main rotor loading noise is caused by the harmonically-varying aerodynamic loads (acoustic pressures) exerted by the rotating blades on the air. Rotorcraft main rotor noise is therefore, in principle, a function of rotor control inputs, and thus the forces and moments required to achieve steady, or "trim", flight equilibrium. In certain flight conditions, the ensuing aerodynamic loading on the rotor(s) can result in highly obtrusive harmonic noise. The effect of the propulsive force, or X-force, on Blade-Vortex Interaction (BVI) noise is well documented. This paper presents an acoustics parametric sensitivity analysis of the effect of varying rotor aerodynamic pitch hub trim moments on BVI noise radiated by an S-70 helicopter main rotor. Results show that changing the hub pitching moment for an isolated rotor, trimmed in nominal 80 knot, 6 and 12 deg descent, flight conditions, alters the miss distance between the blades and the vortex in ways that have varied and noticeable effects on the BVI radiated-noise directionality. Peak BVI noise level is however not significantly altered. The application of hub pitching moment allows the attitude of the fuselage to be controlled; for example, to compensate for the uncomfortable change in fuselage pitch attitude introduced by a fuselage-mounted X-force controller.
Comparison of Salmonella enteritidis phage types isolated from layers and humans in Belgium in 2005.
Welby, Sarah; Imberechts, Hein; Riocreux, Flavien; Bertrand, Sophie; Dierick, Katelijne; Wildemauwe, Christa; Hooyberghs, Jozef; Van der Stede, Yves
2011-08-01
The aim of this study was to investigate the available results for Belgium of the European Union coordinated monitoring program (2004/665 EC) on Salmonella in layers in 2005, as well as the results of the monthly outbreak reports of Salmonella Enteritidis in humans in 2005, in order to identify possible statistically significant trends in both populations. Separate descriptive statistics and univariate analyses were carried out, and parametric and/or non-parametric hypothesis tests were conducted. A time cluster analysis was performed for all Salmonella Enteritidis phage types (PTs) isolated. The proportions of each Salmonella Enteritidis PT in layers and in humans were compared and the monthly distribution of the most common PT isolated in both populations was evaluated. The time cluster analysis revealed significant clusters during the months of May and June for layers and May, July, August, and September for humans. PT21, the most frequently isolated PT in both populations in 2005, seemed to be responsible for these significant clusters. PT4 was the second most frequently isolated PT. No significant difference was found in the monthly trend of either PT in the two populations based on parametric and non-parametric methods. A similar monthly trend of PT distribution in humans and layers during the year 2005 was observed. The time cluster analysis and the statistical significance testing confirmed these results. Moreover, the time cluster analysis showed significant clusters during the summer, slightly delayed in time (humans after layers). These results suggest a common link between the prevalence of Salmonella Enteritidis in layers and the occurrence of the pathogen in humans. Phage typing was confirmed to be a useful tool for identifying temporal trends.
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact with each other. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of the local sediments and the microbial and chemical processes also play important roles in the biogeochemical dynamics. Thus, a comprehensive understanding of the biogeochemical processes in the hyporheic zone requires a coupled thermo-hydro-biogeochemical model. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty in a hierarchical manner. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
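A variance-based global sensitivity analysis of the kind mentioned above can be sketched with a pick-freeze (Saltelli-style) estimator of first-order Sobol indices; the toy three-parameter function below merely stands in for the coupled thermo-hydro-biogeochemical simulator:

import numpy as np

rng = np.random.default_rng(2)

# Toy model standing in for the coupled simulator: y = f(conductivity, recharge, rate constant).
def model(theta):
    k, r, mu = theta.T
    return np.sin(k) + 0.7 * r**2 + 0.1 * mu

n = 20000
A = rng.uniform(0, 1, (n, 3))
B = rng.uniform(0, 1, (n, 3))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order Sobol indices via the pick-freeze estimator.
for i, name in enumerate(["conductivity", "recharge", "rate constant"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"first-order index for {name}: {S_i:.3f}")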
Inverse Thermal Analysis of Titanium GTA Welds Using Multiple Constraints
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Shabaev, A.; Huang, L.
2015-06-01
Inverse thermal analysis of titanium gas-tungsten-arc welds using multiple constraint conditions is presented. This analysis employs a methodology based on numerical-analytical basis functions for inverse thermal analysis of steady-state energy deposition in plate structures. The results of this type of analysis provide parametric representations of weld temperature histories that can be adopted as input data for various types of computational procedures, such as those for prediction of solid-state phase transformations. In addition, these temperature histories can be used to construct parametric function representations for inverse thermal analysis of welds corresponding to other process parameters or welding processes whose process conditions are within similar regimes. The present study applies an inverse thermal analysis procedure that provides for the inclusion of constraint conditions associated with both solidification and phase-transformation boundaries.
Sensitivity of the Lidar ratio to changes in size distribution and index of refraction
NASA Technical Reports Server (NTRS)
Evans, B. T. N.
1986-01-01
In order to invert lidar signals to obtain reliable extinction coefficients, sigma, a relationship between sigma and the backscatter coefficient, beta, must be given. These two coefficients are linearly related if the complex index of refraction, m, the particle shape, and the size distribution, N, do not change along the path illuminated by the laser beam. This, however, is generally not the case. An extensive Mie computation of the lidar ratio R = beta/sigma and the sensitivity of R to changes in a parametric space defined by N and m were examined.
NASA Astrophysics Data System (ADS)
Li, Zhen; Liu, Hongjun; Huang, Nan; Wang, Zhaolu; Han, Jing
2018-06-01
The phase-sensitive amplification process of a hybrid graphene–silicon (HyGS) slot waveguide with trilayers of graphene is investigated in this paper. Numerical simulation shows that a relatively high extinction ratio (42 dB) is achieved, because of the ultrahigh nonlinear coefficients, with a waveguide length of only 680 µm. In addition, the graphene layer provides the possibility of modulating the phase status and gain of the output signal. This study is expected to be highly beneficial to applications such as integrated optics and graphene-related active optical devices.
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to signal-location-known cases. The CHO also requires a large set of images to estimate the covariance matrix. In terms of clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects, 3 mm (14 HU), 5 mm (7 HU), 7 mm (5 HU) and 10 mm (3 HU), was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for the CHO (AUC) and EFROC (AFE) were plotted with respect to the different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
Potency control of modified live viral vaccines for veterinary use.
Terpstra, C; Kroese, A H
1996-04-01
This paper reviews various aspects of efficacy, and methods for assaying the potency of modified live viral vaccines. The pros and cons of parametric versus non-parametric methods for analysis of potency assays are discussed and critical levels of protection, as determined by the target(s) of vaccination, are exemplified. Recommendations are presented for designing potency assays on master virus seeds and vaccine batches.
NASA Astrophysics Data System (ADS)
Belkić, Dževad; Belkić, Karen
2018-01-01
This paper on molecular imaging emphasizes improving specificity of magnetic resonance spectroscopy (MRS) for early cancer diagnostics by high-resolution data analysis. Sensitivity of magnetic resonance imaging (MRI) is excellent, but specificity is insufficient. Specificity is improved with MRS by going beyond morphology to assess the biochemical content of tissue. This is contingent upon accurate data quantification of diagnostically relevant biomolecules. Quantification is spectral analysis which reconstructs chemical shifts, amplitudes and relaxation times of metabolites. Chemical shifts inform on electronic shielding of resonating nuclei bound to different molecular compounds. Oscillation amplitudes in time signals retrieve the abundance of MR sensitive nuclei whose number is proportional to metabolite concentrations. Transverse relaxation times, the reciprocal of decay probabilities of resonances, arise from spin-spin coupling and reflect local field inhomogeneities. In MRS single voxels are used. For volumetric coverage, multi-voxels are employed within a hybrid of MRS and MRI called magnetic resonance spectroscopic imaging (MRSI). Common to MRS and MRSI is encoding of time signals and subsequent spectral analysis. Encoded data do not provide direct clinical information. Spectral analysis of time signals can yield the quantitative information, of which metabolite concentrations are the most clinically important. This information is equivocal with standard data analysis through the non-parametric, low-resolution fast Fourier transform and post-processing via fitting. By applying the fast Padé transform (FPT) with high-resolution, noise suppression and exact quantification via quantum mechanical signal processing, advances are made, presented herein, focusing on four areas of critical public health importance: brain, prostate, breast and ovarian cancers.
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
The detection of pleural effusion using a parametric EIT technique.
Arad, M; Zlochiver, S; Davidson, T; Shoenfeld, Y; Adunsky, A; Abboud, S
2009-04-01
The bioimpedance technique provides a safe, low-cost and non-invasive alternative for routine monitoring of lung fluid levels in patients. In this study we have investigated the feasibility of bioimpedance measurements to monitor pleural effusion (PE) patients. The measurement system (eight-electrode thoracic belt, opposite sequential current injections, 3 mA, 20 kHz) employed a parametric reconstruction algorithm to assess the left and right lung resistivity values. Bioimpedance measurements were taken before and after the removal of pleural fluids, while the patient was sitting at rest during tidal respiration in order to minimize movements of the thoracic cavity. The mean resistivity difference between the lung on the side with PE and the lung on the other side was -48 Omega cm. A high correlation was found between the mean lung resistivity value before the removal of the fluids and the volume of pleural fluids removed, with a sensitivity of -0.17 Omega cm ml(-1) (linear regression, R=0.53). The present study further supports the feasibility and applicability of the bioimpedance technique, and specifically the approach of parametric left and right lung resistivity reconstruction, in monitoring lung patients.
Implementation of Instrumental Variable Bounds for Data Missing Not at Random.
Marden, Jessica R; Wang, Linbo; Tchetgen, Eric J Tchetgen; Walter, Stefan; Glymour, M Maria; Wirth, Kathleen E
2018-05-01
Instrumental variables are routinely used to recover a consistent estimator of an exposure causal effect in the presence of unmeasured confounding. Instrumental variable approaches to account for nonignorable missing data also exist but are less familiar to epidemiologists. Like instrumental variables for exposure causal effects, instrumental variables for missing data rely on exclusion restriction and instrumental variable relevance assumptions. Yet these two conditions alone are insufficient for point identification. For estimation, researchers have invoked a third assumption, typically involving fairly restrictive parametric constraints. Inferences can be sensitive to these parametric assumptions, which are typically not empirically testable. The purpose of our article is to discuss another approach for leveraging a valid instrumental variable. Although the approach is insufficient for nonparametric identification, it can nonetheless provide informative inferences about the presence, direction, and magnitude of selection bias, without invoking a third untestable parametric assumption. An important contribution of this article is an Excel spreadsheet tool that can be used to obtain empirical evidence of selection bias and calculate bounds and corresponding Bayesian 95% credible intervals for a nonidentifiable population proportion. For illustrative purposes, we used the spreadsheet tool to analyze HIV prevalence data collected by the 2007 Zambia Demographic and Health Survey (DHS).
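A minimal sketch of the bounding logic for a non-identifiable proportion under outcome missingness is given below; the counts are hypothetical, and the intersection-of-strata step relies on the exclusion restriction described above (the article's spreadsheet additionally provides Bayesian credible intervals, which are not reproduced here):

# Hypothetical counts by instrument level Z (e.g., interviewer assignment):
# n_pos = observed positive outcomes, n_neg = observed negative outcomes, n_mis = missing outcomes.
data = {0: dict(n_pos=120, n_neg=780, n_mis=100),
        1: dict(n_pos=150, n_neg=800, n_mis=50)}

bounds = []
for z, d in data.items():
    n = d["n_pos"] + d["n_neg"] + d["n_mis"]
    lower = d["n_pos"] / n                       # worst case: every missing outcome is negative
    upper = (d["n_pos"] + d["n_mis"]) / n        # worst case: every missing outcome is positive
    bounds.append((lower, upper))
    print(f"Z={z}: no-assumption bounds [{lower:.3f}, {upper:.3f}]")

# Under the exclusion restriction, the true proportion is the same in every
# instrument stratum, so the stratum-specific bounds can be intersected.
lo = max(b[0] for b in bounds)
hi = min(b[1] for b in bounds)
print(f"instrumental-variable (intersection) bounds: [{lo:.3f}, {hi:.3f}]")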
Bearing tester data compilation, analysis, and reporting and bearing math modeling
NASA Technical Reports Server (NTRS)
1986-01-01
A test condition database was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space Shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real-time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.
NASA Technical Reports Server (NTRS)
Yang, Y. L.; Tan, C. S.; Hawthorne, W. R.
1992-01-01
A computational method, based on a theory for turbomachinery blading design in three-dimensional inviscid flow, is applied to a parametric design study of a radial inflow turbine wheel. As the method requires the specification of a swirl distribution, a technique for its smooth generation within the blade region is proposed. Excellent agreement has been obtained between the computed results from this design method and those from direct Euler computations, demonstrating the correspondence and consistency between the two. The computed results indicate the sensitivity of the pressure distribution to a lean in the stacking axis and to a minor alteration in the hub/shroud profiles. Analysis based on a Navier-Stokes solver shows no breakdown of flow within the designed blade passage and agreement with the design calculation; thus the flow in the designed turbine rotor closely approximates that of an inviscid one. These calculations illustrate the use of a design method coupled to an analysis tool for establishing guidelines and criteria for designing turbomachinery blading.
Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.
2005-01-01
NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next generation reusable launch vehicles. As a part of this activity, ConSep simulation tool is being developed which is a MATLAB-based front-and-back-end to the commercially available ADAMS(registered Trademark) solver, an industry standard package for solving multi-body dynamic problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic database were assembled using the data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.
Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.
2007-01-01
NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next generation reusable launch vehicles. As a part of this activity, ConSep simulation tool is being developed which is a MATLAB-based front-and-back-end to the commercially available ADAMS(registered Trademark) solver, an industry standard package for solving multi-body dynamic problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic database were assembled using the data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.
Parametric study of supersonic STOVL flight characteristics
NASA Technical Reports Server (NTRS)
Rapp, David C.
1985-01-01
A number of different control devices and techniques are evaluated to determine their suitability for increasing the short takeoff performance of a supersonic short-takeoff/vertical landing (STOVL) aircraft. Analysis was based on a rigid-body mathematical model of the General Dynamics E-7, a single engine configuration that utilizes ejectors and thrust deflection for propulsive lift. Alternatives investigated include increased static pitch, the addition of a close-coupled canard, use of boundary layer control to increase the takeoff lift coefficient, and the addition of a vectorable aft fan air nozzle. Other performance studies included the impact of individual E-7 features, the sensitivity to ejector performance, the effect of removing the afterburners, and a determination of optimal takeoff and landing transition methods. The results pertain to both the E-7 and other configurations. Several alternatives were not as well suited to the E-7 characteristics as they would be to an alternative configuration, and vice versa. A large amount of supporting data for each analysis is included.
Development of a solar-powered residential air conditioner: Screening analysis
NASA Technical Reports Server (NTRS)
1975-01-01
Screening analyses aimed at the definition of an optimum configuration of a Rankine-cycle solar-powered air conditioner designed for residential application were conducted. Initial studies revealed that system performance and cost were extremely sensitive to the condensing temperature and to the type of condenser used in the system. Consequently, the screening analyses were concerned with the generation of parametric design data for different condenser approaches, i.e., (1) an ambient air condenser, (2) a humidified ambient air condenser, (3) an evaporative condenser, and (4) a water condenser (with a cooling tower). All systems feature a high-performance turbocompressor and a single refrigerant (R-11) for the power and refrigeration loops. Data were obtained by computerized methods developed to permit system characterization over a broad range of operating and design conditions. The criteria used for comparison of the candidate system approaches were (1) overall system COP (refrigeration effect/solar heat input), (2) auxiliary electric power for fans and pumps, and (3) system installed cost or cost to the user.
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The simulated numerical experiments performed are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The parametrization of earth rotation chosen is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
[Spatiotemporal pattern analysis of event-related potentials elicited by emotional Stroop task].
Liu, Qi; Liu, Ling; He, Hui; Zhou, Shu
2007-05-01
To investigate the spatiotemporal pattern of event-related potentials (ERPs) induced by emotional Stroop task. The ERPs of 19 channels were recorded from 13 healthy subjects while performing emotional Stroop task by pressing the buttons representing the colors in which the words denoting different emotions were displayed. A repeated-measures factorial design was adopted with three levels (word valence: positive, neutral and negative). The result of ERP analysis was presented in the form of statistical parametric mapping (SPM) of F value. No significant difference was found in either reaction time or accuracy. The SPM of ERPs suggested significant emotional valence effects in the occipital region (200-220 ms), the left and central frontal regions (270-300 ms), and the bilateral temporal and parietal cortex (560-580 and 620-630 ms, respectively). Processing of task-irrelevant emotional valence information involves the dynamic operation of extensive brain regions. The ERPs are more sensitive than the behavioral indices in emotional evaluation.
NASA Astrophysics Data System (ADS)
Contreras-Reyes, Eduardo; Garay, Jeremías
2018-01-01
The outer rise is a topographic bulge seaward of the trench at a subduction zone that is caused by bending and flexure of the oceanic lithosphere as subduction commences. The classic model of the flexure of oceanic lithosphere w(x) is a hydrostatic restoring force acting upon an elastic plate at the trench axis. The governing parameters are elastic thickness Te, shear force V0, and bending moment M0. V0 and M0 are unknown variables that are typically replaced by other quantities such as the height of the fore-bulge, wb, and the half-width of the fore-bulge, (xb - xo). However, this method is difficult to implement with the presence of excessive topographic noise around the bulge of the outer rise. Here, we present an alternative method to the classic model, in which lithospheric flexure w(x) is a function of the flexure at the trench axis w0, the initial dip angle of subduction β0, and the elastic thickness Te. In this investigation, we apply a sensitivity analysis to both methods in order to determine the impact of the differing parameters on the solution, w(x). The parametric sensitivity analysis suggests that stable solutions for the alternative approach require relatively low β0 values (<15°), which are consistent with the initial dip angles observed in seismic velocity-depth models across convergent margins worldwide. The predicted flexure for both methods is compared with observed bathymetric profiles across the Izu-Mariana trench, where the old and cold Pacific plate is characterized by a pronounced outer rise bulge. The alternative method is a more suitable approach, assuming that accurate geometric information at the trench axis (i.e., w0 and β0) is available.
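A sketch of the alternative (w0, beta0, Te) parameterization applied to the standard decaying flexure solution for a thin elastic plate on a fluid half-space is given below; the material constants, sign convention and trench-axis values are assumptions for illustration:

import numpy as np

# Flexure of a thin elastic plate on a fluid half-space (one common convention):
#   w(x) = exp(-x/alpha) * (C1*cos(x/alpha) + C2*sin(x/alpha)),
#   alpha = [4*D / ((rho_m - rho_w)*g)]**0.25,  D = E*Te**3 / (12*(1 - nu**2)).
# The constants are fixed by the trench-axis conditions w(0) = w0 and w'(0) = -tan(beta0).
E, nu, g = 70e9, 0.25, 9.81           # Pa, -, m/s^2 (assumed)
rho_m, rho_w = 3300.0, 1030.0         # mantle and water densities, kg/m^3 (assumed)
Te = 30e3                             # elastic thickness, m (assumed)
w0, beta0 = 3000.0, np.radians(5.0)   # trench-axis deflection (m) and dip angle (assumed)

D = E * Te**3 / (12 * (1 - nu**2))
alpha = (4 * D / ((rho_m - rho_w) * g)) ** 0.25

C1 = w0
C2 = w0 - alpha * np.tan(beta0)
x = np.linspace(0, 600e3, 601)        # distance seaward of the trench, m
w = np.exp(-x / alpha) * (C1 * np.cos(x / alpha) + C2 * np.sin(x / alpha))

print(f"flexural parameter alpha ~ {alpha / 1e3:.0f} km")
print(f"outer-rise bulge height ~ {-w.min():.0f} m at x ~ {x[np.argmin(w)] / 1e3:.0f} km")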
NASA Astrophysics Data System (ADS)
Prawin, J.; Rama Mohan Rao, A.
2018-01-01
The knowledge of dynamic loads acting on a structure is always required for many practical engineering problems, such as structural strength analysis, health monitoring and fault diagnosis, and vibration isolation. In this paper, we present an online input force time history reconstruction algorithm using Dynamic Principal Component Analysis (DPCA) from the acceleration time history response measurements using moving windows. We also present an optimal sensor placement algorithm to place limited sensors at dynamically sensitive spatial locations. The major advantage of the proposed input force identification algorithm is that it does not require finite element idealization of structure unlike the earlier formulations and therefore free from physical modelling errors. We have considered three numerical examples to validate the accuracy of the proposed DPCA based method. Effects of measurement noise, multiple force identification, different kinds of loading, incomplete measurements, and high noise levels are investigated in detail. Parametric studies have been carried out to arrive at optimal window size and also the percentage of window overlap. Studies presented in this paper clearly establish the merits of the proposed algorithm for online load identification.
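The Dynamic PCA building block, lag-augmenting the multi-channel acceleration records within a moving window and extracting leading components, can be sketched as below; this is only the decomposition step, not the full force-reconstruction algorithm, and the window size, lags and simulated records are assumptions:

import numpy as np

def dpca_window(acc, n_lags, n_components):
    """Dynamic PCA on one moving window of multi-channel acceleration data.

    acc: array (n_samples, n_channels) for the current window.
    Returns the leading principal-component scores of the lag-augmented matrix.
    """
    n_samples, n_channels = acc.shape
    rows = n_samples - n_lags
    # Stack time-lagged copies of every channel (the "dynamic" part of DPCA).
    X = np.hstack([acc[lag:lag + rows, :] for lag in range(n_lags + 1)])
    X = X - X.mean(axis=0)
    # SVD gives the principal directions; scores are the projections onto them.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# Hypothetical usage with simulated 4-channel acceleration records and 50%-overlapping windows.
rng = np.random.default_rng(3)
acc = rng.normal(size=(5000, 4))
win, step = 512, 256
scores = [dpca_window(acc[s:s + win], n_lags=5, n_components=2)
          for s in range(0, acc.shape[0] - win, step)]
print(len(scores), scores[0].shape)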
Ebner, Jacqueline H; Labatut, Rodrigo A; Rankin, Matthew J; Pronto, Jennifer L; Gooch, Curt A; Williamson, Anahita A; Trabold, Thomas A
2015-09-15
Anaerobic codigestion (AcoD) can address food waste disposal and manure management issues while delivering clean, renewable energy. Quantifying greenhouse gas (GHG) emissions due to implementation of AcoD is important to achieve this goal. A lifecycle analysis was performed on the basis of data from an on-farm AcoD in New York, resulting in a 71% reduction in GHG, or net reduction of 37.5 kg CO2e/t influent relative to conventional treatment of manure and food waste. Displacement of grid electricity provided the largest reduction, followed by avoidance of alternative food waste disposal options and reduced impacts associated with storage of digestate vs undigested manure. These reductions offset digester emissions and the net increase in emissions associated with land application in the AcoD case relative to the reference case. Sensitivity analysis showed that using feedstock diverted from high impact disposal pathways, control of digester emissions, and managing digestate storage emissions were opportunities to improve the AcoD GHG benefits. Regional and parametrized emissions factors for the storage emissions and land application phases would reduce uncertainty.
Parametric study of microwave-powered high-altitude airplane platforms designed for linear flight
NASA Technical Reports Server (NTRS)
Morris, C. E. K., Jr.
1981-01-01
The performance of a class of remotely piloted, microwave powered, high altitude airplane platforms is studied. The first part of each cycle of the flight profile consists of climb while the vehicle is tracked and powered by a microwave beam; this is followed by gliding flight back to a minimum altitude above a microwave station and initiation of another cycle. Parametric variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the energy transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.
NASA Technical Reports Server (NTRS)
Randall, D. A.; Abeles, J. A.; Corsetti, T. G.
1985-01-01
The formulation of the planetary boundary layer (PBL) and stratocumulus parametrizations in the UCLA general circulation model (GCM) are briefly summarized, and extensive new results are presented illustrating some aspects of the simulated seasonal changes of the global distributions of PBL depth, stratocumulus cloudiness, cloud-top entrainment instability, the cumulus mass flux, and related fields. Results from three experiments designed to reveal the sensitivity of the GCM results to aspects of the PBL and stratocumulus parametrizations are presented. The GCM results show that the layer cloud instability appears to limit the extent of the marine subtropical stratocumulus regimes, and that instability frequently occurs in association with cumulus convection over land. Cumulus convection acts as a very significant sink of PBL mass throughout the tropics and over the midlatitude continents in winter.
Photon statistics of shot noise measured using a Josephson parametric amplifier
NASA Astrophysics Data System (ADS)
Simoneau, Jean Olivier; Virally, Stéphane; Lupien, Christian; Reulet, Bertrand
2015-03-01
Quantum measurements are very sensitive to external noise sources. Such measurements require careful amplification chain design so as not to overwhelm the signal with extraneous noise. A quantum-limited amplifier, like the Josephson parametric amplifier (paramp), is thus an ideal candidate for this purpose. We used a paramp to investigate the quantum noise of a tunnel junction. This measurement scheme allowed us to improve upon previous observations of shot noise by an order of magnitude in terms of noise temperature. With this setup, we have measured the second and fourth cumulants of current fluctuations generated by the tunnel junction within a 40 MHz bandwidth around 6 GHz. From these measurements, we deduce the variance of the photon number fluctuations for various bias schemes of the junction. In particular, we investigate the regime where the junction emits pairs of photons.
Cooling optically levitated dielectric nanoparticles via parametric feedback
NASA Astrophysics Data System (ADS)
Neukirch, Levi; Rodenburg, Brandon; Bhattacharya, Mishkatul; Vamivakas, Nick
2015-05-01
The inability to leverage resonant scattering processes involving internal degrees of freedom differentiates optical cooling experiments performed with levitated dielectric nanoparticles from similar atomic and molecular traps. Trapping in optical cavities or the application of active feedback techniques have proven to be effective ways to circumvent this limitation. We present our nanoparticle optical cooling apparatus, which is based on parametric feedback modulation of a single-beam gradient-force optical trap. This scheme allows us to achieve effective center-of-mass temperatures well below 1 kelvin for our ~1×10^-18 kg particles, at modest vacuum pressures. The method provides a versatile platform, with parameter tunability not found in conventional tethered nanomechanical systems. Potential applications include investigations of nonequilibrium nanoscale thermodynamics, ultra-sensitive force metrology, and mesoscale quantum mechanics and hybrid systems. Supported by the Office of Naval Research award number N000141410442.
Design constraints of the LST fine guidance sensor
NASA Technical Reports Server (NTRS)
Wissinger, A. B.
1975-01-01
The LST Fine Guidance Sensor design is shaped by the rate of occurrence of suitable guide stars, the competition for telescope focal plane space with the Science Instruments, and the sensitivity of candidate image motion sensors. The relationship between these parameters is presented, and sensitivity to faint stars is shown to be of prime importance. An interferometric technique of image motion sensing is shown to have improved sensitivity and, therefore, a reduced focal plane area requirement in comparison with other candidate techniques (image-splitting prism and image dissector tube techniques). Another design requirement is speed in acquiring the guide star in order to maximize the time available for science observations. The design constraints are shown parametrically, and modelling results are presented.
NASA Astrophysics Data System (ADS)
Hendikawati, P.; Arifudin, R.; Zahid, M. Z.
2018-03-01
This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making it easier for users to access. The statistics data analysis application covers various topics of basic statistics along with parametric statistical data analysis. The output of this application is a parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language. The server-side programming language is PHP with the Code Igniter framework, and the database used is MySQL. The system development methodology used is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation and system maintenance. This statistical data analysis application is expected to support statistics lectures and make it easier for students to understand statistical analysis on mobile devices.
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on 12 3 Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
Rayleigh-type parametric chemical oscillation.
Ghosh, Shyamolina; Ray, Deb Shankar
2015-09-28
We consider a nonlinear chemical dynamical system of two phase space variables in a stable steady state. When the system is driven by a time-dependent sinusoidal forcing of a suitable scaling parameter at a frequency twice the output frequency and the strength of the perturbation exceeds a threshold, the system undergoes sustained Rayleigh-type periodic oscillation, well known for parametric oscillation in pipe organs and distinct from the usual forced quasiperiodic oscillation of a damped nonlinear system, where the system is oscillatory even in the absence of any external forcing. Our theoretical analysis of the parametric chemical oscillation is corroborated by full numerical simulation of two well-known models of chemical dynamics, the chlorite-iodine-malonic acid and iodine-clock reactions.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D
2016-10-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.
A Non-Parametric Probability Density Estimator and Some Applications.
1984-05-01
distributions, which are assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. [...] Consider, for example, the uniform distribution shown in Figure 4 ("Sensitivity to Support Estimation"). [...] The results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best...
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier, while Reference Plant 3 used a "state of the art" atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations include the type of coal (Montana Rosebud or Illinois No. 6), clean-up systems (hot or cold gas clean-up), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
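As a minimal sketch of the count-data modelling framework described above, the fragment below fits a negative binomial rate model with a log follow-up-time offset and reports the treatment rate ratio. The column names ('episodes', 'treat', 'followup_days') and the single-covariate formula are hypothetical placeholders, not the variables of the solifenacin/mirabegron trials, and in practice the dispersion parameter would also be estimated rather than left at the statsmodels default.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_nb_rate_model(df: pd.DataFrame):
    """Negative binomial GLM (log link) for incontinence-episode counts.

    Hypothetical columns: 'episodes' (count), 'treat' (0/1),
    'followup_days' (exposure time). The offset converts counts to rates.
    """
    model = smf.glm(
        "episodes ~ treat",
        data=df,
        family=sm.families.NegativeBinomial(),  # dispersion alpha fixed at the default here
        offset=np.log(df["followup_days"]),
    )
    result = model.fit()
    rate_ratio = np.exp(result.params["treat"])  # exp(coef) = rate ratio vs control
    return result, rate_ratio
```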
NASA Astrophysics Data System (ADS)
Vidya Sagar, R.; Raghu Prasad, B. K.
2012-03-01
This article presents a review of recent developments in parametric based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parametric based AE techniques of concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published and considerable attention has been given to those publications. Some recent studies such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams have also been discussed. The formation of fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parametric based AE techniques applied to concrete structures may be helpful to the concerned researchers and engineers to better understand the failure mechanism of concrete and evolve more useful methods and approaches for diagnostic inspection of structural elements and failure prediction/prevention of concrete structures.
The sensitivities of in cloud and cloud top phase distributions to primary ice formation in ICON-LEM
NASA Astrophysics Data System (ADS)
Beydoun, H.; Karrer, M.; Tonttila, J.; Hoose, C.
2017-12-01
Mixed phase clouds remain a leading source of uncertainty in our attempt to quantify cloud-climate and aerosol-cloud climate interactions. Nevertheless, recent advances in parametrizing the primary ice formation process, high resolution cloud modelling, and retrievals of cloud phase distributions from satellite data offer an excellent opportunity to conduct closure studies on the sensitivity of the cloud phase to microphysical and dynamical processes. Particularly, the reliability of satellite data to resolve the phase at the top of the cloud provides a promising benchmark to compare model output to. We run large eddy simulations with the new ICOsahedral Non-hydrostatic atmosphere model (ICON) to place bounds on the sensitivity of in cloud and cloud top phase to the primary ice formation process. State of the art primary ice formation parametrizations in the form of the cumulative ice active site density ns are implemented in idealized deep convective cloud simulations. We exploit the ability of ICON-LEM to switch between a two moment microphysics scheme and the newly developed Predicted Particle Properties (P3) scheme by running our simulations in both configurations for comparison. To quantify the sensitivity of cloud phase to primary ice formation, cloud ice content is evaluated against order of magnitude changes in ns at variable convective strengths. Furthermore, we assess differences between in cloud and cloud top phase distributions as well as the potential impact of updraft velocity on the suppression of the Wegener-Bergeron-Findeisen process. The study aims to evaluate our practical understanding of primary ice formation in the context of predicting the structure and evolution of mixed phase clouds.
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both the spatial distribution and quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, dynamic PET data are first properly pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot itself separate signal from noise. Second, the volume-wise PCA is applied to the PET data. The signal can be mostly represented by the first few principal components (PCs), and the noise is left in the subsequent PCs. The noise-reduced data are then obtained from the first few PCs by applying 'inverse PCA'. The data should also be transformed back according to the pre-transformation method used in the first step, to maintain the scale of the original data set. Finally, the obtained new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
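A minimal NumPy sketch of the volume-wise PCA denoising idea (not the authors' exact pre-transformation or component-selection rule): reshape the dynamic data to voxels x frames, centre it, keep the first few principal components, and reconstruct before the LLS kinetic fitting. The number of retained components is an illustrative choice.

```python
import numpy as np

def pca_denoise_dynamic(data: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Denoise a dynamic PET volume (x, y, z, t) by keeping the first few PCs."""
    x, y, z, t = data.shape
    frames = data.reshape(-1, t).astype(float)       # voxels x time frames
    mean = frames.mean(axis=0)
    centred = frames - mean
    # SVD of the voxel-by-time matrix; rows of vt are the temporal PCs.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    keep = vt[:n_components]                          # signal assumed in first PCs
    denoised = centred @ keep.T @ keep + mean         # 'inverse PCA' reconstruction
    return denoised.reshape(x, y, z, t)
```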
NASA Astrophysics Data System (ADS)
Korobko, M.; Kleybolte, L.; Ast, S.; Miao, H.; Chen, Y.; Schnabel, R.
2017-04-01
The shot-noise limited peak sensitivity of cavity-enhanced interferometric measurement devices, such as gravitational-wave detectors, can be improved by increasing the cavity finesse, even when comparing fixed intracavity light powers. For a fixed light power inside the detector, this comes at the price of a proportional reduction in the detection bandwidth. High sensitivity over a large span of signal frequencies, however, is essential for astronomical observations. It is possible to overcome this standard sensitivity-bandwidth limit using nonclassical correlations in the light field. Here, we investigate the internal squeezing approach, where the parametric amplification process creates a nonclassical correlation directly inside the interferometer cavity. We theoretically analyze the limits of the approach and measure a 36% increase in the sensitivity-bandwidth product compared to the classical case. To our knowledge, this is the first experimental demonstration of an improvement in the sensitivity-bandwidth product using internal squeezing, opening the way for a new class of optomechanical force sensing devices.
Outcome of temporal lobe epilepsy surgery predicted by statistical parametric PET imaging.
Wong, C Y; Geller, E B; Chen, E Q; MacIntyre, W J; Morris, H H; Raja, S; Saha, G B; Lüders, H O; Cook, S A; Go, R T
1996-07-01
PET is useful in the presurgical evaluation of temporal lobe epilepsy. The purpose of this retrospective study is to assess the clinical use of statistical parametric imaging in predicting surgical outcome. Interictal ¹⁸FDG-PET scans in 17 patients with surgically treated temporal lobe epilepsy (Group A: 13 seizure-free; Group B: 4 not seizure-free at 6 mo) were transformed into statistical parametric images, with each pixel representing a z-score value computed using the mean and s.d. of the count distribution in each individual patient, for both visual and quantitative analysis. Mean z-scores were significantly more negative in the anterolateral (AL) and mesial (M) regions on the operated side than on the nonoperated side in Group A (AL: p < 0.00005, M: p = 0.0097), but not in Group B (AL: p = 0.46, M: p = 0.08). Statistical parametric imaging correctly lateralized 16 out of 17 patients. Only the AL region, however, was significant in predicting surgical outcome (F = 29.03, p < 0.00005). Using a cut-off z-score value of -1.5, statistical parametric imaging correctly classified 92% of temporal lobes from Group A and 88% of those from Group B. The preliminary results indicate that statistical parametric imaging provides both clinically useful information for lateralization in temporal lobe epilepsy and a reliable predictive indicator of clinical outcome following surgical treatment.
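The pixel-wise z-score transformation described above is straightforward to sketch; the brain-masking strategy here is an assumption for illustration, not the study's exact procedure.

```python
import numpy as np

def zscore_image(counts: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Convert an interictal PET count image into a z-score map.

    Each in-mask pixel is expressed relative to the mean and standard
    deviation of the patient's own count distribution.
    """
    values = counts[brain_mask]
    z = np.zeros_like(counts, dtype=float)
    z[brain_mask] = (counts[brain_mask] - values.mean()) / values.std()
    return z

# A temporal-lobe region could then be flagged as hypometabolic when its mean
# z-score falls below the -1.5 cut-off used in the study.
```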
Lautenschlager, Karin; Hwang, Chiachi; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Vrouwenvelder, Hans; Egli, Thomas; Hammes, Frederik
2013-06-01
Biological stability of drinking water implies that the concentration of bacterial cells and the composition of the microbial community should not change during distribution. In this study, we used a multi-parametric approach that encompasses different aspects of microbial water quality, including microbial growth potential, microbial abundance, and microbial community composition, to monitor biological stability in drinking water of the non-chlorinated distribution system of Zürich. Drinking water was collected directly after treatment from the reservoir and in the network at several locations with varied average hydraulic retention times (6-52 h) over a period of four months, with a single repetition two years later. Total cell concentrations (TCC) measured with flow cytometry remained remarkably stable at 9.5 (± 0.6) × 10⁴ cells/ml from water in the reservoir throughout most of the distribution network, and during the whole time period. Conventional microbial methods like heterotrophic plate counts, the concentration of adenosine tri-phosphate, total organic carbon and assimilable organic carbon also remained constant. Samples taken two years apart showed more than 80% similarity for the microbial communities analysed with denaturing gradient gel electrophoresis and 454 pyrosequencing. Only the two sampling locations with the longest water retention times were the exceptions and, for as yet unknown reasons, recorded a slight but significantly higher TCC (1.3 (± 0.1) × 10⁵ cells/ml) compared to the other locations. This small change in microbial abundance detected by flow cytometry was also clearly observed as a shift in the microbial community profiles to a higher abundance of members of the Comamonadaceae (60% vs. 2% at other locations). Conventional microbial detection methods were not able to detect changes as observed with flow cytometric cell counts and microbial community analysis. Our findings demonstrate that the multi-parametric approach used provides a powerful and sensitive tool to assess and evaluate biological stability and microbial processes in drinking water distribution systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
The linear transformation model with frailties for the analysis of item response times.
Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A
2013-02-01
The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.
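In generic notation (a sketch of the model class, not necessarily the authors' exact parameterization), the semi-parametric response-time model described above can be written as follows, with the transformation left unspecified and the error distribution selecting the member of the family.

```latex
% Linear transformation model for item response times (illustrative notation):
% H is an unspecified monotone increasing transformation, \tau_i the latent
% speed of examinee i, \beta_j a time-intensity parameter of item j; a
% standard normal \epsilon gives a Box-Cox-normal-type model, while an
% extreme value \epsilon gives a Cox proportional hazards model.
\[
  H\!\left(T_{ij}\right) \;=\; \beta_j \;-\; \tau_i \;+\; \epsilon_{ij},
  \qquad \epsilon_{ij} \sim F_{\epsilon}.
\]
```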
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including the perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
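For reference, the conventional voxel-wise NLLS baseline against which NLME is compared can be sketched with the standard bi-exponential IVIM signal equation; the starting values and bounds below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, s0, f, d_star, d):
    """Bi-exponential IVIM model: perfusion fraction f, pseudo-diffusion
    coefficient d_star and true diffusion coefficient d (mm^2/s)."""
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

def fit_ivim_voxel(bvals, signal):
    """Voxel-wise non-linear least squares fit of the IVIM model."""
    p0 = [signal[0], 0.1, 0.01, 0.001]                             # illustrative start values
    bounds = ([0.0, 0.0, 0.003, 0.0], [np.inf, 1.0, 0.5, 0.003])   # illustrative bounds
    popt, _ = curve_fit(ivim_signal, bvals, signal, p0=p0, bounds=bounds)
    return dict(zip(["s0", "f", "d_star", "d"], popt))
```

An NLME approach would instead fit all voxels jointly, treating the voxel-level parameters as random effects around population means, which is where the reported gains in precision come from.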
Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.
2016-01-01
Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these results, the combination of naturalistic movie stimuli and classification analysis in fMRI experiments may prove to be a sensitive tool for the assessment of changes in natural cognitive processes under experimental manipulation. PMID:27065832
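A minimal scikit-learn sketch of the PCA-regularized LDA classifier that performed best in the comparison above; the feature matrix, labels, and number of retained components are placeholders rather than the study's pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

def pca_lda_accuracy(X: np.ndarray, y: np.ndarray, n_components: int = 100) -> float:
    """X: (n_volumes, n_voxels) fMRI response patterns; y: stimulus labels.

    PCA reduces the voxel space before LDA and acts as the regularization
    step; n_components = 100 is an illustrative choice.
    """
    clf = make_pipeline(PCA(n_components=n_components),
                        LinearDiscriminantAnalysis())
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv).mean()
```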
Kero, Tanja; Lindsjö, Lars; Sörensen, Jens; Lubberink, Mark
2016-08-01
¹¹C-PIB PET is a promising non-invasive diagnostic tool for cardiac amyloidosis. Semiautomatic analysis of PET data is now available, but it is not known how accurate these methods are for amyloid imaging. The aim of this study was to evaluate the feasibility of one semiautomatic software tool for analysis and visualization of the ¹¹C-PIB left ventricular retention index (RI) in cardiac amyloidosis. Patients with systemic amyloidosis and cardiac involvement (n = 10) and healthy controls (n = 5) were investigated with dynamic ¹¹C-PIB PET. Two observers analyzed the PET studies with semiautomatic software to calculate the left ventricular RI of ¹¹C-PIB and to create parametric images. The mean RI at 15-25 min from the semiautomatic analysis was compared with the RI based on manual analysis and showed comparable values (0.056 vs 0.054 min⁻¹ for amyloidosis patients and 0.024 vs 0.025 min⁻¹ in healthy controls; P = .78), and the correlation was excellent (r = 0.98). Inter-reader reproducibility also was excellent (intraclass correlation coefficient, ICC > 0.98). Parametric polar maps and histograms made visual separation of amyloidosis patients and healthy controls fast and simple. Accurate semiautomatic analysis of cardiac ¹¹C-PIB RI in amyloidosis patients is feasible. Parametric polar maps and histograms make visual interpretation fast and simple.
A general framework for parametric survival analysis.
Crowther, Michael J; Lambert, Paul C
2014-12-30
Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions of the baseline hazard function, such as monotonicity, which is often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013), to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular, illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
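Schematically (in generic notation, not the exact software parameterization), the flexible parametric models described above place a restricted cubic spline on the log-hazard scale:

```latex
% Flexible parametric survival model with a restricted cubic spline on the
% log-hazard scale (illustrative notation):
\[
  \log h(t \mid \mathbf{x}) \;=\; s\!\left(\log t;\, \boldsymbol{\gamma}\right)
  \;+\; \mathbf{x}^{\mathsf{T}} \boldsymbol{\beta}
\]
% Because s(.;\gamma) is linear beyond its boundary knots, the cumulative
% hazard H(t|x) = \int_0^t h(u|x) du can be obtained analytically in the
% tails and by numerical integration between the knots, which is the
% combined analytic/numerical approach referred to above.
```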
Parametric-Studies and Data-Plotting Modules for the SOAP
NASA Technical Reports Server (NTRS)
2008-01-01
"Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensionalappearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
NASA Technical Reports Server (NTRS)
Towner, Robert L.; Band, Jonathan L.
2012-01-01
An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
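The Modal Assurance Criterion mentioned above is one of the simpler indicators; a minimal sketch for real-valued mode shapes sampled on a common degree-of-freedom set (the common-DOF reduction is assumed here) is:

```python
import numpy as np

def mac_matrix(phi_a: np.ndarray, phi_b: np.ndarray) -> np.ndarray:
    """Modal Assurance Criterion between two mode-shape sets.

    phi_a, phi_b: (n_dof, n_modes) matrices on the same DOF set. Entries
    near 1 indicate the same physical mode, which is the basis for pairing
    modes between two models.
    """
    numerator = np.abs(phi_a.T @ phi_b) ** 2
    denominator = np.outer((phi_a * phi_a).sum(axis=0),
                           (phi_b * phi_b).sum(axis=0))
    return numerator / denominator

# Simple tracking: pair each mode of model A with the model-B mode giving the
# largest MAC value, e.g. pairing = np.argmax(mac_matrix(phi_a, phi_b), axis=1).
```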
Khan, Anzalee; Lindenmayer, Jean-Pierre; Opler, Mark; Yavorsky, Christian; Rothman, Brian; Lucic, Luka
2013-10-01
Debate persists with regard to how best to categorize the syndromal dimension of negative symptoms in schizophrenia. The aim was to first review published Principal Components Analyses (PCA) of the PANSS and extract the items most frequently included in the negative domain, and secondly, to examine the quality of items using Item Response Theory (IRT) to select items that best represent a measurable dimension (or dimensions) of negative symptoms. First, 22 factor analyses and PCAs met the inclusion criteria and were included. Second, using a large dataset (n=7187) of participants in clinical trials with chronic schizophrenia, we extracted items loading on one or more PCAs. Third, items not loading with a value of ≥ 0.5, or loading on more than one component with values of ≥ 0.5, were discarded. Fourth, the resulting items were included in a non-parametric IRT and retained based on Option Characteristic Curves (OCCs) and Item Characteristic Curves (ICCs). Fifteen items loaded on a negative domain in at least one study, with Emotional Withdrawal loading in all studies. Non-parametric IRT retained nine items as an Integrated Negative Factor: Emotional Withdrawal, Blunted Affect, Passive/Apathetic Social Withdrawal, Poor Rapport, Lack of Spontaneity/Conversation Flow, Active Social Avoidance, Disturbance of Volition, Stereotyped Thinking and Difficulty in Abstract Thinking. This is the first study to use a psychometric IRT process to arrive at a set of negative symptom items. Future steps will include further examination of these nine items in terms of their stability, sensitivity to change, and correlations with functional and cognitive outcomes. © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Scradeanu, D.; Pagnejer, M.
2012-04-01
The purpose of this work is to evaluate the uncertainty of the hydrodynamic model for a multilayered geological structure, a potential trap for carbon dioxide storage. The hydrodynamic model is based on a conceptual model of the multilayered hydrostructure with three components: 1) a spatial model; 2) a parametric model; and 3) an energy model. The data needed for the three components of the conceptual model are obtained from 240 boreholes explored by geophysical logging and seismic investigation, for the first two components, and from an experimental water injection test for the last one. The hydrodynamic model is a finite difference numerical model based on a 3D stratigraphic model with nine stratigraphic units (Badenian and Oligocene) and a 3D multiparameter model (porosity, permeability, hydraulic conductivity, storage coefficient, leakage, etc.). The uncertainty of the two 3D models was evaluated using multivariate geostatistical tools: a) the cross-semivariogram for structural analysis, especially the study of anisotropy, and b) cokriging to reduce estimation variances in the specific situation where there is a cross-correlation between a variable and one or more undersampled variables. Important differences between univariate and bivariate anisotropy have been identified. The minimised uncertainty of the parametric model (by cokriging) was transferred to the hydrodynamic model. The uncertainty distribution of the pressures generated by the water injection test has been additionally filtered by the sensitivity of the numerical model. The resulting relative errors of the pressure distribution in the hydrodynamic model are 15-20%. The scientific research was performed in the frame of the European FP7 project "A multiple space and time scale approach for the quantification of deep saline formations for CO2 storage (MUSTANG)".
MANGALATHU-ARUMANA, J.; BEARDSLEY, S. A.; LIEBENTHAL, E.
2012-01-01
The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within-subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging 1–8 μV in the time window of the P300 (350–700ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, was observed with the increase in deviant frequency difference. JICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post- central gyri, likely associated with right hand somato-motor response, was observed only with the jICA approach. Within-subject, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that with the incorporation of spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI. PMID:22377443
Oddo, Perry C; Lee, Ben S; Garner, Gregory G; Srikrishnan, Vivek; Reed, Patrick M; Forest, Chris E; Keller, Klaus
2017-09-05
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructures. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space, and highlights the need for compromise solutions to address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook, and that could have critical implications for flood adaptation strategies. © 2017 Society for Risk Analysis.
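As an illustration of the variance-based global sensitivity analysis referred to above, the sketch below computes first-order and total-order Sobol indices with the SALib package; the parameter names, ranges, and the stand-in objective function are hypothetical and do not reproduce the study's coupled sea-level-rise, storm-surge, and cost model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical parameters and bounds (placeholders for the real model inputs).
problem = {
    "num_vars": 3,
    "names": ["surge_scale", "slr_rate", "discount_rate"],
    "bounds": [[0.5, 2.0], [0.001, 0.02], [0.01, 0.07]],
}

def objective(params: np.ndarray) -> float:
    surge_scale, slr_rate, discount_rate = params
    # Stand-in for an expected-damage objective; the real simulator goes here.
    return surge_scale * np.exp(50.0 * slr_rate) / (1.0 + discount_rate) ** 30

X = saltelli.sample(problem, 1024)                # Saltelli sampling design
Y = np.array([objective(row) for row in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                         # first-order and total-order indices
```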
Danaei, Goodarz; Robins, James M; Young, Jessica G; Hu, Frank B; Manson, JoAnn E; Hernán, Miguel A
2016-03-01
Evidence for the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded due to undiagnosed diseases that may affect weight loss. We used data from the Nurses' Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss strategies. We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Several sensitivity analyses were conducted. The estimated 26-year risk of CHD did not change under weight loss strategies using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no weight loss ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss with and without restricting the weight loss strategy to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. We estimated that maintaining or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-age US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations but these did not prevent us from estimating a beneficial effect of weight loss on diabetes.
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L shaped beam for which an available solution was already obtained. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
Linear and nonlinear analysis of fluid slosh dampers
NASA Astrophysics Data System (ADS)
Sayar, B. A.; Baumgarten, J. R.
1982-11-01
A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.
Supercritical nonlinear parametric dynamics of Timoshenko microbeams
NASA Astrophysics Data System (ADS)
Farokhi, Hamed; Ghayesh, Mergen H.
2018-06-01
The nonlinear supercritical parametric dynamics of a Timoshenko microbeam subject to an axial harmonic excitation force is examined theoretically, by means of different numerical techniques and employing a high-dimensional analysis. The time-variant axial load is assumed to consist of a mean value along with harmonic fluctuations. In terms of modelling, a continuous expression for the elastic potential energy of the system is developed based on the modified couple stress theory, taking into account small-size effects; the kinetic energy of the system is also modelled as a continuous function of the displacement field. Hamilton's principle is employed to balance the energies and to obtain the continuous model of the system. Employing the Galerkin scheme along with an assumed-mode technique, the energy terms are reduced, yielding a second-order reduced-order model with a finite number of degrees of freedom. A transformation is carried out to convert the second-order reduced-order model into a double-dimensional first-order one. A bifurcation analysis is performed for the system in the absence of the axial load fluctuations. Moreover, a mean value for the axial load is selected in the supercritical range, and the principal parametric resonant response, due to the time-variant component of the axial load, is obtained; as opposed to transversely excited systems, for parametrically excited systems (such as the present problem) the nonlinear resonance occurs in the vicinity of twice any natural frequency of the linear system. This is accomplished via use of the pseudo-arclength continuation technique, direct time integration, an eigenvalue analysis, and Floquet theory for stability. The natural frequencies of the system prior to and beyond buckling are also determined. Moreover, the effect of different system parameters on the nonlinear supercritical parametric dynamics of the system is analysed, with special consideration given to the effect of the length-scale parameter.
Kim, Da-Eun; Yang, Hyeri; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Choi, Jin Kyu; Jung, Mi-Sook; Jeon, Eun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Park, Jung Eun; Sohn, Soo Jung; Kim, Tae Sung; Ahn, Il Young; Jeong, Tae-Cheon; Lim, Kyung-Min; Bae, SeungJin
2016-01-01
In order for a novel test method to be applied for regulatory purposes, its reliability and relevance, i.e., reproducibility and predictive capacity, must be demonstrated. Here, we examine the predictive capacity of a novel non-radioisotopic local lymph node assay, LLNA:BrdU-FCM (5-bromo-2'-deoxyuridine-flow cytometry), with a cutoff approach and inferential statistics as a prediction model. 22 reference substances in OECD TG429 were tested with a concurrent positive control, hexylcinnamaldehyde 25%(PC), and the stimulation index (SI) representing the fold increase in lymph node cells over the vehicle control was obtained. The optimal cutoff SI (2.7≤cutoff <3.5), with respect to predictive capacity, was obtained by a receiver operating characteristic curve, which produced 90.9% accuracy for the 22 substances. To address the inter-test variability in responsiveness, SI values standardized with PC were employed to obtain the optimal percentage cutoff (42.6≤cutoff <57.3% of PC), which produced 86.4% accuracy. A test substance may be diagnosed as a sensitizer if a statistically significant increase in SI is elicited. The parametric one-sided t-test and non-parametric Wilcoxon rank-sum test produced 77.3% accuracy. Similarly, a test substance could be defined as a sensitizer if the SI means of the vehicle control, and of the low, middle, and high concentrations were statistically significantly different, which was tested using ANOVA or Kruskal-Wallis, with post hoc analysis, Dunnett, or DSCF (Dwass-Steel-Critchlow-Fligner), respectively, depending on the equal variance test, producing 81.8% accuracy. The absolute SI-based cutoff approach produced the best predictive capacity, however the discordant decisions between prediction models need to be examined further. Copyright © 2015 Elsevier Inc. All rights reserved.
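The ROC-based choice of an optimal SI cut-off can be sketched as below; maximizing Youden's J is one common criterion and is an assumption here, not necessarily the authors' exact selection rule.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_curve

def optimal_si_cutoff(si_values: np.ndarray, is_sensitizer: np.ndarray):
    """si_values: stimulation indices; is_sensitizer: 1/0 reference labels."""
    fpr, tpr, thresholds = roc_curve(is_sensitizer, si_values)
    best = np.argmax(tpr - fpr)                    # Youden's J = sensitivity + specificity - 1
    cutoff = thresholds[best]
    accuracy = accuracy_score(is_sensitizer, si_values >= cutoff)
    return cutoff, accuracy
```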
NASA Astrophysics Data System (ADS)
Baudrenghien, P.; Mastoridis, T.
2017-01-01
The interaction between beam dynamics and the radio frequency (rf) station in circular colliders is complex and can lead to longitudinal coupled-bunch instabilities at high beam currents. The excitation of the cavity higher order modes is traditionally damped using passive devices. But the wakefield developed at the cavity fundamental frequency falls in the frequency range of the rf power system and can, in theory, be compensated by modulating the generator drive. Such a regulation is the responsibility of the low-level rf (llrf) system that measures the cavity field (or beam current) and generates the rf power drive. The Large Hadron Collider (LHC) rf was designed for the nominal LHC parameter of 0.55 A DC beam current. At 7 TeV the synchrotron radiation damping time is 13 hours. Damping of the instability growth rates due to the cavity fundamental (400.789 MHz) can only come from the synchrotron tune spread (Landau damping) and will be very small (time constant in the order of 0.1 s). In this work, the ability of the present llrf compensation to prevent coupled-bunch instabilities with the planned high luminosity LHC (HiLumi LHC) doubling of the beam current to 1.1 A DC is investigated. The paper conclusions are based on the measured performances of the present llrf system. Models of the rf and llrf systems were developed at the LHC start-up. Following comparisons with measurements, the system was parametrized using these models. The parametric model then provides a more realistic estimation of the instability growth rates than an ideal model of the rf blocks. With this modeling approach, the key rf settings can be varied around their set value allowing for a sensitivity analysis (growth rate sensitivity to rf and llrf parameters). Finally, preliminary measurements from the LHC at 0.44 A DC are presented to support the conclusions of this work.
Three-dimensional MRI perfusion maps: a step beyond volumetric analysis in mental disorders
Fabene, Paolo F; Farace, Paolo; Brambilla, Paolo; Andreone, Nicola; Cerini, Roberto; Pelizza, Luisa; Versace, Amelia; Rambaldelli, Gianluca; Birbaumer, Niels; Tansella, Michele; Sbarbati, Andrea
2007-01-01
A new type of magnetic resonance imaging analysis, based on fusion of three-dimensional reconstructions of time-to-peak parametric maps and high-resolution T1-weighted images, is proposed in order to evaluate the perfusion of selected volumes of interest. Because in recent years a wealth of data have suggested the crucial involvement of vascular alterations in mental diseases, we tested our new method on a restricted sample of schizophrenic patients and matched healthy controls. The perfusion of the whole brain was compared with that of the caudate nucleus by means of intrasubject analysis. As expected, owing to the encephalic vascular pattern, a significantly lower time-to-peak was observed in the caudate nucleus than in the whole brain in all healthy controls, indicating that the suggested method has enough sensitivity to detect subtle perfusion changes even in small volumes of interest. Interestingly, a less uniform pattern was observed in the schizophrenic patients. The latter finding needs to be replicated in an adequate number of subjects. In summary, the three-dimensional analysis method we propose has been shown to be a feasible tool for revealing subtle vascular changes both in normal subjects and in pathological conditions. PMID:17229290
Analysis of Lateral Rail Restraint.
DOT National Transportation Integrated Search
1983-09-01
This report deals with the analysis of lateral rail strength using the results of experimental investigations and a nonlinear rail response model. Part of the analysis involves the parametric study of the influence of track parameters on lateral rail...
Parametrically excited oscillation of stay cable and its control in cable-stayed bridges.
Sun, Bing-nan; Wang, Zhi-gang; Ko, J M; Ni, Y Q
2003-01-01
This paper presents a nonlinear dynamic model for the simulation and analysis of a kind of parametrically excited vibration of stay cables caused by support motion in cable-stayed bridges. The sag and inclination angle of the stay cable are considered in the model, based on which the oscillation mechanism and dynamic response characteristics of this kind of vibration are analyzed through numerical calculation. It is noted that parametrically excited oscillation of a stay cable with a certain sag, inclination angle and initial static tension force may occur in cable-stayed bridges due to deck vibration, under the condition that the natural frequency of the cable approaches about half of the first modal frequency of the bridge deck system. A new vibration control system installed at the cable anchorage is proposed as a possible damping system to suppress the cable parametric oscillation. The numerical calculation results show that with the use of this damping system, the cable oscillation due to the vibration of the deck and/or towers will be considerably reduced.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
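A single-variable CER of the kind described, with cost as a power law in aperture diameter, can be estimated by ordinary least squares in log-log space, as sketched below; the input arrays would come from the historical space-telescope database and no values are assumed here. A multivariable CER would add further regressors (for example log mass or a technology-year term) to the same log-linear fit.

```python
import numpy as np

def fit_power_law_cer(diameter_m: np.ndarray, cost: np.ndarray):
    """Fit cost = a * D**b by linear regression of log(cost) on log(D).

    diameter_m, cost: arrays of historical telescope apertures and costs
    (not supplied here). Returns the scale a and the diameter exponent b.
    """
    b, log_a = np.polyfit(np.log(diameter_m), np.log(cost), 1)
    return np.exp(log_a), b
```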
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easily identify a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point ID's from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
NASA Technical Reports Server (NTRS)
1975-01-01
Transportation mass requirements are developed for various mission and transportation modes based on vehicle systems sized to fit the exact needs of each mission. The parametric data used to derive the mass requirements for each mission and transportation mode are presented to enable accommodation of possible changes in mode options or payload definitions. The vehicle sizing and functional requirements used to derive the parametric data are described.
Parametrically excited multidegree-of-freedom systems with repeated frequencies
NASA Astrophysics Data System (ADS)
Nayfeh, A. H.
1983-05-01
An analysis is presented of the linear response of multidegree-of-freedom systems with a repeated frequency of order three to a harmonic parametric excitation. The method of multiple scales is used to determine the modulation of the amplitudes and phases for two cases: fundamental resonance of the modes with the repeated frequency and combination resonance involving these modes and another mode. Conditions are then derived for determining the stability of the motion.
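In generic notation (a sketch of the equation class, not the paper's specific system), a linear multidegree-of-freedom system under harmonic parametric excitation can be written as follows; the resonance conditions noted in the comments are the standard ones for this class.

```latex
% Linear MDOF system under harmonic parametric excitation (illustrative):
\[
  \mathbf{M}\,\ddot{\mathbf{q}}
  + \mathbf{C}\,\dot{\mathbf{q}}
  + \left[\,\mathbf{K} + \varepsilon\,\mathbf{P}\cos(\Omega t)\,\right]\mathbf{q}
  = \mathbf{0}.
\]
% Fundamental (principal) parametric resonance of a mode with natural
% frequency \omega_k occurs near \Omega \approx 2\omega_k, and combination
% resonances near \Omega \approx \omega_k \pm \omega_l; the method of
% multiple scales expands \mathbf{q} in powers of \varepsilon to derive the
% slow modulation equations for the amplitudes and phases.
```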
Advanced oxygen-hydrocarbon rocket engine study
NASA Technical Reports Server (NTRS)
Obrien, C. J.; Salkeld, R.
1980-01-01
The advantages and disadvantages, system performance and operating limits, engine parametric data, and technology requirements for candidate high pressure LO2/Hydrocarbon engine systems are summarized. These summaries of parametric analysis and design provide a consistent engine system data base. Power balance data were generated for the eleven engine cycles. Engine cycle rating parameters were established and the desired condition and the effect of the parameter on the engine and/or vehicle are described.
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the necessary automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Data analysis of raw data is performed on the Princeton IBM 360-91 computer.
CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions
Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.
Engineering Novel Detectors and Sensors for MRI
Qian, Chunqi; Zabow, Gary; Koretsky, Alan
2013-01-01
Increasing detection sensitivity and image contrast have always been major topics of research in MRI. In this perspective, we summarize two engineering approaches to make detectors and sensors that have potential to extend the capability of MRI. The first approach is to integrate miniaturized detectors with a wireless powered parametric amplifier to enhance the detection sensitivity of remotely coupled detectors. The second approach is to microfabricate contrast agents with encoded multispectral frequency shifts, whose properties can be specified and fine-tuned by geometry. These two complementary approaches will benefit from the rapid development in nanotechnology and microfabrication which should enable new opportunities for MRI. PMID:23245489
NASA Astrophysics Data System (ADS)
Zhen, Xing-wei; Huang, Yi
2017-10-01
This study focuses on a new technology, the Subsurface Tension Leg Platform (STLP), which utilizes shallow-water-rated well completion equipment and technology for the development of large oil and gas fields in ultra-deep water (UDW). Thus, the STLP concept offers attractive advantages over conventional field development concepts. The STLP is basically a pre-installed Subsurface Sea-star Platform (SSP), which supports rigid risers and shallow-water-rated well completion equipment. The paper details the results of a parametric study on the behavior of the STLP at a water depth of 3000 m. First, a general description of the STLP configuration and working principle is given. Then, the numerical models for the global analysis of the STLP in waves and current are presented. After that, extensive parametric studies are carried out with regard to SSP/tether system analysis, global dynamic analysis and riser interference analysis. Critical points are addressed concerning the mooring pattern and riser arrangement under the influence of ocean current, to ensure that the requirements on SSP stability and riser interference are well satisfied. Finally, conclusions and discussions are given. The results indicate that the STLP is a competitive well and riser solution in up to 3000 m water depth for offshore petroleum production.
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
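The classical parametric baseline described above, fitting a GEV distribution to block maxima and computing a return level, can be sketched with SciPy; the block-maxima input and the return period are placeholders.

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_level(block_maxima: np.ndarray, return_period: float) -> float:
    """Fit a GEV to block maxima (e.g. annual ozone maxima) and return the
    T-year return level, i.e. the (1 - 1/T) quantile of the fitted GEV."""
    shape, loc, scale = genextreme.fit(block_maxima)
    return genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
```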
An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1996-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing when well-defined structural and aerodynamic data are not available, and then to study the effect of changes in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh and IBM-compatible personal computers, using the MathCad application with integrated documentation, graphics, database, and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, the Regier number and the Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location, and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast database of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept proposed by McDonnell Douglas Corp. Using a set of assumed data, the preliminary flutter boundary and the variation of flutter dynamic pressure with altitude, Mach number, and torsional stiffness were determined.
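The sketch below only illustrates how such a conceptual-design parametric sweep might be organized in code. The flutter-boundary function is a hypothetical placeholder and does not reproduce the Regier-number/Flutter-number correlations of the Vought report; the flight dynamic pressure uses the standard ISA troposphere model.

```python
# Illustrative only: a parametric sweep of a *hypothetical* flutter margin.
import numpy as np

def flight_dynamic_pressure(mach, altitude_m):
    """q = 0.5*rho*V^2 from the ISA troposphere model (valid below 11 km)."""
    T = 288.15 - 0.0065 * altitude_m          # temperature (K)
    rho = 1.225 * (T / 288.15) ** 4.2559      # density (kg/m^3)
    a = np.sqrt(1.4 * 287.05 * T)             # speed of sound (m/s)
    return 0.5 * rho * (mach * a) ** 2

def flutter_dynamic_pressure(torsional_freq_hz, mass_ratio, sweep_deg):
    """Hypothetical boundary: stiffer, heavier-relative-to-air wings flutter at
    higher q, and sweep is assumed to relieve it. Placeholder numbers only."""
    return 4.0e3 * (torsional_freq_hz / 10.0) ** 2 * np.sqrt(mass_ratio) \
           * np.cos(np.radians(sweep_deg))

q_flutter = flutter_dynamic_pressure(torsional_freq_hz=10.0, mass_ratio=40.0,
                                     sweep_deg=30.0)
for altitude in (0.0, 5000.0, 10000.0):
    q_flight = flight_dynamic_pressure(mach=0.8, altitude_m=altitude)
    print(f"alt {altitude:7.0f} m: q_flight = {q_flight:8.0f} Pa, "
          f"flutter margin = {q_flutter / q_flight:5.2f}")
```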
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
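A minimal sketch of the general idea (not the authors' implementation): fit a monotone dose-response curve to quantal data with isotonic regression, invert it at a 10% extra-risk benchmark to obtain the BMD, and use a percentile bootstrap for a lower confidence limit (BMDL). The dose-response counts below are illustrative.

```python
# Sketch: isotonic-regression BMD with a bootstrap lower limit (BMDL).
import numpy as np
from sklearn.isotonic import IsotonicRegression

doses  = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # administered doses
n_subj = np.array([50,  50,  50,  50,  50])    # subjects per dose group
n_resp = np.array([2,   4,   9,   18,  34])    # responders per dose group
BMR = 0.10                                     # benchmark response (extra risk)

def bmd_isotonic(doses, n_subj, n_resp, grid):
    """Fit a monotone dose-response curve and invert it at the BMR."""
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    iso.fit(doses, n_resp / n_subj, sample_weight=n_subj)
    p = iso.predict(grid)
    extra_risk = (p - p[0]) / (1.0 - p[0])     # extra risk over background
    hit = np.nonzero(extra_risk >= BMR)[0]
    return grid[hit[0]] if hit.size else np.nan

grid = np.linspace(doses.min(), doses.max(), 400)
bmd = bmd_isotonic(doses, n_subj, n_resp, grid)

# Nonparametric bootstrap: resample responders per dose group and refit.
rng = np.random.default_rng(1)
boot = [bmd_isotonic(doses, n_subj, rng.binomial(n_subj, n_resp / n_subj), grid)
        for _ in range(500)]
bmdl = np.nanpercentile(boot, 5)               # one-sided 95% lower limit
print(f"BMD = {bmd:.2f}, BMDL = {bmdl:.2f}")
```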
Parametric interactions in presence of different size colloids in semiconductor quantum plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanshpal, R., E-mail: ravivanshpal@gmail.com; Sharma, Uttam; Dubey, Swati
2015-07-31
The present work investigates the effect of colloids of different sizes on parametric interaction in a semiconductor quantum plasma. Quantum effects are included in the analysis through a quantum correction term in the classical hydrodynamic model of a homogeneous semiconductor plasma. The correction is of purely quantum origin, arising from the quantum Bohm potential and quantum statistics. The colloidal size and the quantum correction term modify the parametric dispersion characteristics of the ion-implanted semiconductor plasma medium. It is found that the quantum effect on colloids is inversely proportional to their size. Moreover, the critical size of implanted colloids for an effective quantum correction is determined and found to be equal to the lattice spacing of the crystal.
Multi-Response Optimization of Laser Micro Marking Process: A Grey-Fuzzy Approach
NASA Astrophysics Data System (ADS)
Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.
2017-07-01
The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination yields better machining, which improves productivity and product quality and consequently reduces production cost and time. This paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for laser beam micro marking of Gallium Nitride (GaN) work material. Response surface methodology has been used for the design of experiments, considering three parameters at five levels each. Current, frequency, and scanning speed have been taken as the process parameters, and mark width, mark depth, and mark intensity as the process responses.
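The following is a compact sketch of the grey relational step in such a grey-fuzzy approach, under assumed optimization directions for the three responses. The fuzzy-inference stage that would combine the grey relational coefficients into a single grade is simplified here to a plain average, and all response values are invented for illustration.

```python
# Sketch of grey relational analysis for three responses over several runs.
import numpy as np

# Rows: experimental runs; columns: mark width, mark depth, mark intensity.
responses = np.array([[42.0, 8.1, 0.62],
                      [39.5, 9.4, 0.70],
                      [45.2, 7.6, 0.55],
                      [40.8, 9.0, 0.74]])
# Assumed directions: smaller-is-better width, larger-is-better depth/intensity.
larger_is_better = np.array([False, True, True])

# Step 1: grey relational normalization to [0, 1].
lo, hi = responses.min(axis=0), responses.max(axis=0)
norm = np.where(larger_is_better,
                (responses - lo) / (hi - lo),
                (hi - responses) / (hi - lo))

# Step 2: deviation from the ideal sequence (all ones) and grey relational
# coefficient with distinguishing coefficient zeta = 0.5.
delta = 1.0 - norm
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade per run (plain mean stands in for the fuzzy step).
grade = grc.mean(axis=1)
best_run = int(np.argmax(grade))
print("grades:", np.round(grade, 3), "-> best parametric combination: run", best_run + 1)
```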
Propulsion Study for Small Transport Aircraft Technology (STAT)
NASA Technical Reports Server (NTRS)
Gill, J. C.; Earle, R. V.; Staton, D. V.; Stolp, P. C.; Huelster, D. S.; Zolezzi, B. A.
1980-01-01
Propulsion requirements were determined for 0.5 and 0.7 Mach aircraft. Sensitivity studies were conducted on both of these aircraft to determine parametrically the influence of propulsion characteristics on aircraft size and direct operating cost (DOC). Candidate technology elements and design features were identified, and parametric studies were conducted to select the STAT advanced engine cycle. Trade-off studies were conducted to determine those advanced technologies and design features that would offer a reduction in DOC for operation of the STAT engines; these features were incorporated in the two STAT engines. A benefit assessment was conducted comparing the STAT engines to current-technology engines of the same power and to 1985 derivatives of the current-technology engines. Research and development programs were recommended as part of an overall technology development plan to ensure that full commercial development of the STAT engines could be initiated in 1988.
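As a hedged illustration of the kind of sensitivity study described, the sketch below computes normalized finite-difference sensitivities of DOC with respect to a few engine parameters using a toy DOC surrogate. The actual STAT cost model, parameters, and coefficients are not reproduced; everything numeric here is an assumption.

```python
# Sketch: finite-difference sensitivities of DOC to engine parameters,
# using a *hypothetical* DOC surrogate model.
def doc_model(sfc, engine_weight, maintenance_factor):
    """Toy DOC surrogate: fuel cost scales with SFC, ownership with weight."""
    fuel = 120.0 * sfc                 # illustrative $/block-hour contributions
    ownership = 0.8 * engine_weight
    maintenance = 45.0 * maintenance_factor
    return fuel + ownership + maintenance

base = {"sfc": 0.45, "engine_weight": 350.0, "maintenance_factor": 1.0}
doc0 = doc_model(**base)

# Normalized sensitivities: percent change in DOC per percent change in each parameter.
for name, value in base.items():
    perturbed = dict(base, **{name: value * 1.01})
    s = (doc_model(**perturbed) - doc0) / doc0 / 0.01
    print(f"d(DOC)/d({name}) ~ {s:+.2f} %/%")
```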