Sample records for sensitivity analysis distributed

  1. Probabilistic Sensitivity Analysis with Respect to Bounds of Truncated Distributions (PREPRINT)

    DTIC Science & Technology

    2010-04-01

    AFRL-RX-WP-TP-2010-4147. Probabilistic Sensitivity Analysis with Respect to Bounds of Truncated Distributions (Preprint). H. Millwater and Y. Feng, Department of Mechanical...

  2. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity analysis and relative sensitivity analysis of the system reliability and the mean time to failure with respect to system parameters are also investigated.
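
    The flavor of computation involved can be illustrated with a much smaller example than the paper's M-primary/W-standby/R-station model: a hedged sketch of MTTF and its relative sensitivities for a two-unit warm-standby system with repair, where all rates are illustrative and the MTTF follows from the absorbing-CTMC relation Q t = -1.

```python
# Minimal sketch: relative sensitivity of MTTF for a 2-unit warm-standby
# system with repair (NOT the paper's M/W/R model; rates are illustrative).
import numpy as np

def mttf(lam, alpha, mu):
    """MTTF from the 'both units good' state of an absorbing CTMC.

    Transient states: 0 = both good, 1 = one good (one in repair).
    The absorbing (system-failure) state is dropped; Q is the transient
    sub-generator, and the vector of MTTFs solves Q t = -1.
    """
    Q = np.array([
        [-(lam + alpha), lam + alpha],   # primary (lam) or standby (alpha) fails
        [mu, -(lam + mu)],               # repair (mu) or remaining unit fails (lam)
    ])
    t = np.linalg.solve(Q, -np.ones(2))
    return t[0]

def relative_sensitivity(f, params, name, h=1e-6):
    """Central-difference relative sensitivity (df/dp) * (p / f)."""
    p = dict(params)
    base = f(**p)
    p[name] = params[name] * (1 + h)
    up = f(**p)
    p[name] = params[name] * (1 - h)
    down = f(**p)
    dfdp = (up - down) / (2 * h * params[name])
    return dfdp * params[name] / base

params = {"lam": 0.01, "alpha": 0.002, "mu": 0.5}  # illustrative rates
for name in params:
    print(name, round(relative_sensitivity(mttf, params, name), 3))
```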

  3. Testing of stack-unit/aquifer sensitivity analysis using contaminant plume distribution in the subsurface of Savannah River Site, South Carolina, USA

    USGS Publications Warehouse

    Rine, J.M.; Shafer, J.M.; Covington, E.; Berg, R.C.

    2006-01-01

    Published information on the correlation and field-testing of the technique of stack-unit/aquifer sensitivity mapping with documented subsurface contaminant plumes is rare. The inherent characteristic of stack-unit mapping, which makes it a superior technique to other analyses that amalgamate data, is the ability to deconstruct the sensitivity analysis on a unit-by-unit basis. An aquifer sensitivity map, delineating the relative sensitivity of the Crouch Branch aquifer of the Administrative/Manufacturing Area (A/M) at the Savannah River Site (SRS) in South Carolina, USA, incorporates six hydrostratigraphic units, surface soil units, and relevant hydrologic data. When this sensitivity map is compared with the distribution of the contaminant tetrachloroethylene (PCE), PCE is present within the Crouch Branch aquifer within an area classified as highly sensitive, even though the PCE was primarily released on the ground surface within areas classified with low aquifer sensitivity. This phenomenon is explained through analysis of the aquifer sensitivity map, the groundwater potentiometric surface maps, and the plume distributions within the area on a unit-by-unit basis. The results of this correlation show how the paths of the PCE plume are influenced by both the geology and the groundwater flow. © Springer-Verlag 2006.

  4. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new equations for the stability and performance analysis of this type of system that are of lower dimension than those currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
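
    For context, a minimal sketch of the standard second-moment stability test whose dimension the paper's symmetric Kronecker formulation reduces: a discrete-time i.i.d. jump linear system is mean-square stable iff the spectral radius of E[A ⊗ A] is below one. The modes and probabilities below are assumptions for illustration.

```python
# Standard mean-square stability (MSS) test for a discrete-time i.i.d.
# jump linear system x_{k+1} = A_{r_k} x_k with P(r_k = i) = p_i.
# This is the full n^2-dimensional Kronecker test; the paper's contribution
# is a lower-dimensional symmetric-Kronecker variant, not reproduced here.
import numpy as np

def is_mean_square_stable(modes, probs):
    """MSS iff spectral radius of sum_i p_i (A_i kron A_i) is < 1."""
    n = modes[0].shape[0]
    Lam = np.zeros((n * n, n * n))
    for A, p in zip(modes, probs):
        Lam += p * np.kron(A, A)
    return max(abs(np.linalg.eigvals(Lam))) < 1.0

# Two illustrative modes: a nominal dynamic and a degraded (link-failure) one.
A_ok = np.array([[0.9, 0.1], [0.0, 0.8]])
A_fail = np.array([[1.1, 0.0], [0.2, 0.7]])
print(is_mean_square_stable([A_ok, A_fail], [0.95, 0.05]))
```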

  5. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
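
    For readers unfamiliar with the variance-based indices the abstract builds on, here is a hedged from-scratch sketch of the classical Sobol' first-order and total-effect estimators (Saltelli/Jansen pick-freeze forms) on the standard Ishigami test function; this is the baseline analysis, not the paper's DSA extension.

```python
# Classical variance-based (Sobol') sensitivity indices via pick-freeze
# sampling; the Ishigami function stands in for a design model.
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

d, N = 3, 100_000
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([fA, fB]))   # total output variance

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # "freeze" all inputs except x_i
    fABi = ishigami(ABi)
    S1 = np.mean(fB * (fABi - fA)) / V          # first-order index (Saltelli)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / V    # total-effect index (Jansen)
    print(f"x{i+1}: S1={S1:.3f}  ST={ST:.3f}")
```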

  6. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  7. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Hutchins, D. A.

    2013-01-01

    Previous work has indicated that the capacitive imaging (CI) technique is a useful NDE tool that can be used on a wide range of materials, including metals, glass/carbon fibre composites and concrete. The imaging performance of the CI technique for a given application is determined by the design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole-probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained, and factors influencing the sensitivity distribution were studied. Comparisons between CI probes with point-to-point and back-to-back triangular electrode pairs were made based on analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting imaging performance.

  8. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, rather than simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of the parameters enabled us to identify 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher values were observed for at least one of the corresponding quantiles. Analysis of modifications in morphologic parameters via distribution analysis proved promising for screening for bronchopulmonary cancer from sputum.

  9. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distributions of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of the sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
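
    A hedged sketch of one common way to realize Iman-style rank-correlated inputs, via a Gaussian copula: draw correlated standard normals and push them through each input's marginal inverse CDF. The marginals below are illustrative stand-ins for JCA-type parameters, and the full Iman-Conover procedure (which reorders an existing sample) is not reproduced.

```python
# Gaussian-copula shortcut for generating rank-correlated inputs in the
# spirit of Iman's transform (marginals are illustrative assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

target_corr = np.array([[1.0, 0.6],
                        [0.6, 1.0]])
marginals = [stats.lognorm(s=0.5, scale=1e-10),   # e.g. viscous permeability (m^2)
             stats.uniform(loc=0.3, scale=0.6)]   # e.g. porosity in [0.3, 0.9]

L = np.linalg.cholesky(target_corr)
Z = rng.standard_normal((10_000, 2)) @ L.T        # correlated standard normals
U = stats.norm.cdf(Z)                             # uniform copula samples
X = np.column_stack([m.ppf(U[:, j]) for j, m in enumerate(marginals)])

rho, _ = stats.spearmanr(X)                       # achieved rank correlation
print("achieved Spearman correlation:", round(float(rho), 3))
```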

  10. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes probabilistic sensitivity analysis, so a method is required that places distributions over multiple branch probabilities in a way that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
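
    The core sampling step is a one-liner in most environments. A minimal sketch with illustrative transition counts and a uniform Dirichlet prior, so that a branch observed zero times still receives nonzero sampled probability:

```python
# Draw mutually consistent branch probabilities for a multi-branch chance
# node from a Dirichlet posterior (counts below are illustrative).
import numpy as np

rng = np.random.default_rng(0)

counts = np.array([250, 30, 4, 0])      # observed transitions; note the zero
alpha_prior = np.ones_like(counts)      # uniform Dirichlet(1,...,1) prior
samples = rng.dirichlet(counts + alpha_prior, size=5000)

assert np.allclose(samples.sum(axis=1), 1.0)   # rows sum to 1 by construction
print("posterior means:", samples.mean(axis=0).round(4))
print("P(branch 4) 95% interval:",
      np.percentile(samples[:, 3], [2.5, 97.5]).round(4))
```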

  11. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
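
    For reference, the composite scaled sensitivity (css) used here is commonly defined in this regression literature as the weighted, parameter-scaled root-mean-square of local sensitivities over the ND observations (y'_i a simulated value, b_j a parameter, ω_i an observation weight):

```latex
% Composite scaled sensitivity for parameter b_j, as commonly defined in
% the nonlinear-regression calibration literature this abstract draws on:
\mathrm{css}_j \;=\; \left[ \frac{1}{ND} \sum_{i=1}^{ND}
  \left( \frac{\partial y_i'}{\partial b_j}\, b_j\, \omega_i^{1/2} \right)^{2}
\right]^{1/2}
```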

  12. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain a preliminary set of influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not guarantee good reproduction of the hydrological signatures. For most samples from the Sobol' analysis, water yield was simulated very well, but minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated; nevertheless, a number of samples still reproduced these three signatures well. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work helps to inform further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
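
    A hedged sketch of the second, variance-based step using the SALib library; the parameter names and bounds are placeholders rather than the study's DHSVM values, and a toy function stands in for the model.

```python
# Sobol' analysis of a stand-in model with SALib (names/bounds assumed).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-2], [0.3, 0.6], [0.1, 0.4]],
}

X = saltelli.sample(problem, 1024)        # Saltelli design, N*(2D+2) rows

def toy_model(X):                         # placeholder for a DHSVM run
    return np.log(X[:, 0]) + 5.0 * X[:, 1] + 0.1 * X[:, 2]

Si = sobol.analyze(problem, toy_model(X))
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:22s} S1={s1:.3f}  ST={st:.3f}")
```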

  13. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  14. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
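
    A minimal sketch of the bootstrap-within-PSA idea on synthetic two-arm patient-level data (not the H. pylori trial values): resample patients, form incremental cost and effect, and evaluate the probability of cost-effectiveness across willingness-to-pay thresholds.

```python
# Bootstrap-based probabilistic sensitivity analysis (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n = 200
cost = {"eradication": rng.gamma(2.0, 400.0, n),    # synthetic per-patient costs
        "control":     rng.gamma(2.0, 300.0, n)}
effect = {"eradication": rng.binomial(1, 0.85, n),  # synthetic cure indicator
          "control":     rng.binomial(1, 0.60, n)}

B = 2000
d_cost, d_eff = np.empty(B), np.empty(B)
for b in range(B):
    i1 = rng.integers(0, n, n)                      # resample treatment arm
    i2 = rng.integers(0, n, n)                      # resample control arm
    d_cost[b] = cost["eradication"][i1].mean() - cost["control"][i2].mean()
    d_eff[b] = effect["eradication"][i1].mean() - effect["control"][i2].mean()

for wtp in (500, 1000, 2000, 5000):                 # willingness to pay per cure
    p_ce = np.mean(wtp * d_eff - d_cost > 0)        # P(net monetary benefit > 0)
    print(f"WTP={wtp}: P(cost-effective)={p_ce:.2f}")
```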

  16. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  17. Analysis of the NAEG model of transuranic radionuclide transport and dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercher, J.R.; Anspaugh, L.R.

    We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to the lung, upper respiratory tract, and thoracic lymph nodes; the GI tract receives 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%, accidental soil ingestion 5%, ingestion of beef liver 4%, and beef muscle 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.

  18. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.

  19. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in engineering practice based on finite element modeling.

  20. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar configuration, as well as eight- and ten-electrode measurement configurations, are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  1. Elemental Analysis in Biological Matrices Using ICP-MS.

    PubMed

    Hansen, Matthew N; Clogston, Jeffrey D

    2018-01-01

    The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.

  2. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best values for model parameters, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Using two species, we show that even when the Ecoclimatic Index changes substantially under upward or downward parameter alterations, the effect on the species model depends on the selection of suitability categories and modelling regions. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had the greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the models are most sensitive.

  3. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  4. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  5. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability

    EPA Pesticide Factsheets

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.
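
    As a concrete illustration of one listed method, a hedged species sensitivity distribution (SSD) sketch: fit a log-normal to illustrative species-level toxicity endpoints and read off the HC5, the concentration expected to protect 95% of species.

```python
# Species sensitivity distribution (SSD) with a log-normal fit; the
# endpoint values below are illustrative, not from any specific dataset.
import numpy as np
from scipy import stats

ec50 = np.array([1.2, 3.5, 4.1, 8.0, 12.0, 15.5, 30.0, 55.0])  # mg/L, one per species
log_x = np.log10(ec50)

mu, sigma = log_x.mean(), log_x.std(ddof=1)   # normal fit in log10 space
hc5 = 10 ** stats.norm.ppf(0.05, mu, sigma)   # 5th percentile of the SSD
print(f"HC5 = {hc5:.2f} mg/L")

# Empirical check: fraction of species with endpoints below the HC5.
print("empirical fraction affected:", np.mean(ec50 < hc5))
```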

  6. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
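
    A hedged sketch of the DELSA first-order measure as the abstract describes it: at each sampled parameter set, local derivatives are combined with prior parameter variances into a fraction of a locally linearized output variance. The model and prior variances below are illustrative.

```python
# DELSA-style first-order local sensitivity fractions across parameter space.
import numpy as np

rng = np.random.default_rng(7)

def model(theta):                       # stand-in for a hydrologic model
    k, n = theta
    return k ** 0.5 * np.exp(-n) + n ** 2

def delsa_first_order(theta, s2, h=1e-6):
    """Fraction of locally linearized output variance per parameter."""
    grad = np.empty(len(theta))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h * theta[j]
        tm[j] -= h * theta[j]
        grad[j] = (model(tp) - model(tm)) / (2 * h * theta[j])
    contrib = grad ** 2 * s2            # per-parameter variance contribution
    return contrib / contrib.sum()

s2 = np.array([0.05, 0.2])              # assumed prior parameter variances
for theta in rng.uniform([0.1, 0.1], [2.0, 2.0], size=(5, 2)):
    print(theta.round(2), delsa_first_order(theta, s2).round(3))
```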

  7. Using solute and heat tracers for aquifer characterization in a strongly heterogeneous alluvial aquifer

    NASA Astrophysics Data System (ADS)

    Sarris, Theo S.; Close, Murray; Abraham, Phillip

    2018-03-01

    A test using Rhodamine WT and heat as tracers, conducted over a 78 day period in a strongly heterogeneous alluvial aquifer, was used to evaluate the utility of the combined observation dataset for aquifer characterization. A highly parameterized model was inverted, with concentration and temperature time-series as calibration targets. Groundwater heads recorded during the experiment were boundary dependent and were ignored during the inversion process. The inverted model produced a high-resolution depiction of the hydraulic conductivity and porosity fields. Statistical properties of these fields are in very good agreement with estimates from previous studies at the site. Spatially distributed sensitivity analysis suggests that both solute and heat transport were most sensitive to the hydraulic conductivity and porosity fields and less sensitive to dispersivity and the thermal distribution factor, with sensitivity to porosity greatly reduced outside the monitored area. The issues of model over-parameterization and non-uniqueness are addressed through identifiability analysis. Longitudinal dispersivity and the thermal distribution factor are highly identifiable; however, spatially distributed parameters are only identifiable near the injection point. Temperature-related density effects became observable for both heat and solute as the temperature anomaly increased above 12 degrees centigrade, and affected down-gradient propagation. Finally, we demonstrate that high-frequency and spatially dense temperature data cannot inform a dual-porosity model in the absence of frequent solute concentration measurements.

  8. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using the Monte Carlo simulation based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.

  9. Wide Area Recovery and Resiliency Program (WARRP) Biological Attack Response and Recovery: End to End Medical Countermeasure Distribution and Dispensing Processes

    DTIC Science & Technology

    2012-04-24

    compliance (Figure 3). This sensitivity analysis shows that public compliance is likely the most important consideration in saving lives. If the... public complies, medication effectiveness and POD throughput are the next two most important factors. ...government decision makers to the public. The number of gaps identified in this analysis is overwhelming, and improving the outcomes of the end-to

  10. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability: SSD Plot Diagrams

    EPA Pesticide Factsheets

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.

  11. Analysis of electrical tomography sensitive field based on multi-terminal network and electric field

    NASA Astrophysics Data System (ADS)

    He, Yongbo; Su, Xingguo; Xu, Meng; Wang, Huaxiang

    2010-08-01

    Electrical tomography (ET) aims to study the conductivity/permittivity distribution of the field of interest non-intrusively via boundary voltage/current measurements. The sensor is usually regarded as an electric field, and the finite element method (FEM) is commonly used to calculate the sensitivity matrix and to optimize the sensor architecture. However, since only lumped circuit parameters can be measured by the data acquisition electronics, it is very meaningful to treat the sensor as a multi-terminal network. Two types of multi-terminal network, with common-node and common-loop topologies, are introduced. Obtaining more independent measurements and making the current distribution more uniform are the two main ways to minimize the inherent ill-posedness. By exploring the relationships among the network matrices, a general formula is proposed for the first time to calculate the number of independent measurements. Additionally, the sensitivity distribution is analyzed with FEM. As a result, the quasi-opposite mode, an optimal single-source excitation mode that has the advantages of a more uniform sensitivity distribution and more independent measurements, is proposed.

  12. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  13. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as inversion parameters, the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and the frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret, and enhances the results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  14. Constraining the Sea Quark Distributions Through W+/- Cross Section Ratio Measurements at STAR

    NASA Astrophysics Data System (ADS)

    Posik, Matthew; STAR Collaboration

    2017-09-01

    Over the years, extractions of parton distribution functions (PDFs) have become more precise; however, there are still regions where more data are needed to improve constraints. One such distribution is the sea quark distribution near the valence region, in particular the d/u distribution. Currently, measurements in the high-x region still have large uncertainties and suggest different trends for this distribution. The charged W cross section ratio is sensitive to the unpolarized sea quark distributions and could be used to help constrain the d/u distribution. Through pp collisions, the STAR experiment at RHIC is well equipped to measure the e± leptonic decays of W± bosons in the mid-rapidity range |η| ≤ 1 at √s = 500/510 GeV. At these kinematics STAR is sensitive to quark distributions near Bjorken-x of 0.16. STAR can also extend the sea quark sensitivity to higher x by measuring the leptonic decays in the forward rapidity range 1.1 < η < 2.0. STAR runs from 2011 through 2013 have collected about 350 pb⁻¹ of data. Presented here are preliminary results for the 2011-2012 W cross section ratios (~100 pb⁻¹) and an update on the 2013 W cross section analysis (~250 pb⁻¹).

  15. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in a microstructure. However, very little attention has been given in the past to experimental measurement of the descriptors of microstructural distance distributions, due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate microstructural distance distributions. The application of the technique is demonstrated via estimation of the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed on a metallographic section.
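
    A hedged sketch of two of the named descriptors for a 2-D point pattern, using simulated particle centers: the nearest-neighbor distance distribution and a Ripley K estimate without edge correction (real measurements would need one).

```python
# Nearest-neighbor distances and an edge-effect-ignoring Ripley K estimate
# for a simulated 2-D point pattern (coordinates are synthetic).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100.0, (500, 2))        # particle centers in a 100x100 window
area, n = 100.0 ** 2, len(pts)

tree = cKDTree(pts)
d_nn, _ = tree.query(pts, k=2)               # k=2: nearest neighbor besides the point itself
nn = d_nn[:, 1]
print("mean nearest-neighbor distance:", nn.mean().round(2))

def ripley_k(r):
    # ordered pairs within r, excluding self-pairs; no edge correction
    pairs = tree.query_pairs(r)
    return area * 2 * len(pairs) / (n * (n - 1))

for r in (2.0, 5.0, 10.0):
    # under complete spatial randomness, K(r) ~ pi r^2
    print(f"K({r}) = {ripley_k(r):.1f}  (CSR reference: {np.pi * r * r:.1f})")
```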

  16. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
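
    One simple, widely used way to produce such an importance ranking from an existing Monte Carlo sample (not necessarily the exact measure used in this paper) is to rank inputs by the magnitude of their Spearman rank correlation with the output:

```python
# Rank Monte Carlo inputs by |Spearman rho| with the output
# (inputs and the stand-in fate model below are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
N = 5000
X = np.column_stack([
    rng.lognormal(0.0, 1.0, N),     # e.g. partition coefficient
    rng.uniform(0.5, 2.0, N),       # e.g. intake rate
    rng.normal(10.0, 0.1, N),       # nearly fixed input
])
y = X[:, 0] * X[:, 1] + 0.01 * X[:, 2]   # stand-in fate-and-transport model

scores = []
for j in range(X.shape[1]):
    rho, _ = stats.spearmanr(X[:, j], y)
    scores.append(abs(rho))
for j in np.argsort(scores)[::-1]:
    print(f"input {j}: |rho| = {scores[j]:.3f}")
```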

  17. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  18. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than the simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
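
    To make the uniformity intuition concrete, a deliberately minimal sketch: Shannon entropy of normalized pipe flows, which rises as flow is spread evenly across pipes. This is neither the full Tanyimboh-Templeman nodal formulation nor the paper's diameter-sensitive extension; the flows are illustrative.

```python
# Shannon entropy of normalized pipe flows as a toy uniformity measure.
import numpy as np

def flow_entropy(pipe_flows):
    q = np.asarray(pipe_flows, dtype=float)
    p = q / q.sum()                      # flow fractions across pipes
    return -np.sum(p * np.log(p))

uniform_layout = [25.0, 25.0, 25.0, 25.0]   # evenly loaded pipes
skewed_layout = [85.0, 5.0, 5.0, 5.0]       # one dominant path

print("uniform:", round(flow_entropy(uniform_layout), 3))   # higher entropy
print("skewed: ", round(flow_entropy(skewed_layout), 3))    # lower entropy
```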

  19. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared-memory computers and distributed-memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared-memory computers and many are adapted to distributed-memory computers. The capability and numerical performance of these algorithms will be addressed.

  20. Application and Optimization of Stiffness Abruption Structures for Pressure Sensors with High Sensitivity and Anti-Overload Ability

    PubMed Central

    Xu, Tingzhong; Lu, Dejiang; Zhao, Libo; Jiang, Zhuangde; Wang, Hongyan; Guo, Xin; Li, Zhikang; Zhou, Xiangyang; Zhao, Yulong

    2017-01-01

    The influence of the diaphragm bending stiffness distribution on the stress concentration characteristics of a pressure sensing chip was analyzed and discussed systematically. Based on this analysis, a novel peninsula-island-based diaphragm structure was presented and applied to two different diaphragm shapes as sensing chips for pressure sensors. Through a well-designed bending stiffness distribution of the diaphragm, the elastic potential energy induced by diaphragm deformation was concentrated above the gap position, which remarkably increased the sensitivity of the sensing chip. An optimization method and the distribution pattern of the peninsula-island-based diaphragm structure were also discussed. Two kinds of sensing chips, combining peninsula-island structures distributed along the side-edge and diagonal directions of a rectangular diaphragm, were fabricated and analyzed. By bonding the sensing chips with anti-overload glass bases, these two sensing chips were demonstrated by testing to achieve not only high sensitivity but also good anti-overload ability. The experimental results showed that the proposed structures have the potential to measure ultra-low absolute pressures with high sensitivity and good anti-overload ability in an atmospheric environment. PMID:28846599

  1. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of underlying geological and environmental processes, it is desirable to identify the individual components so as to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture of a certain end-member model distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian, and skewed Gaussian distributions have been used as end-member model distributions, with the fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves, thereby avoiding differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses of model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is over two. It is, therefore, recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of component analysis.
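
    A hedged sketch of the general unmixing idea on a synthetic coercivity spectrum: fit a two-component mixture with Gaussian end-members in log10(field) space to the gradient curve via least squares. The paper's contribution, a new end-member distribution fitted without differentiation, is not reproduced here.

```python
# Two-component unmixing of a synthetic coercivity spectrum by least squares.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_component(x, a1, m1, s1, a2, m2, s2):
    return gaussian(x, a1, m1, s1) + gaussian(x, a2, m2, s2)

# Synthetic gradient curve: soft component near 10^1.5 mT, hard near 10^2.5 mT.
rng = np.random.default_rng(5)
logB = np.linspace(0.5, 3.5, 120)
truth = two_component(logB, 1.0, 1.5, 0.25, 0.6, 2.5, 0.3)
data = truth + rng.normal(0, 0.02, logB.size)     # added measurement noise

p0 = [0.8, 1.3, 0.3, 0.5, 2.7, 0.3]               # initial guesses
popt, _ = curve_fit(two_component, logB, data, p0=p0)
print("component 1 (amp, log10 B, width):", popt[:3].round(2))
print("component 2 (amp, log10 B, width):", popt[3:].round(2))
```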

  2. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    ERIC Educational Resources Information Center

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…
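
    For readers unfamiliar with the approach, a minimal Monte Carlo sketch of utility-estimate variability is given below, using the standard Brogden-Cronbach-Gleser utility formula; the parameter values and distributions are invented, not those of the study.

        # Monte Carlo distribution of selection utility:
        # U = N_hired * tenure * r * SDy * z_bar - N_applicants * cost_per_test
        import numpy as np

        rng = np.random.default_rng(0)
        n_sims = 100_000
        n_hired, tenure_yrs, n_applicants, cost_per_test = 50, 3.0, 500, 25.0
        z_bar = 1.09                          # mean standard score of selectees

        r = rng.normal(0.45, 0.05, n_sims)    # test validity, uncertain
        sd_y = rng.normal(10_000, 2_000, n_sims)   # SD of job performance in $
        utility = (n_hired * tenure_yrs * r * sd_y * z_bar
                   - n_applicants * cost_per_test)

        print(f"mean utility: ${utility.mean():,.0f}")
        print(f"5th-95th percentile: ${np.percentile(utility, 5):,.0f} "
              f"to ${np.percentile(utility, 95):,.0f}")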

  3. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository, to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed, with a range spanning two orders of magnitude. The plutonium-239 results, however, were not lognormally distributed and exhibited a range of less than one order of magnitude. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited ranges of less than one order of magnitude. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations, and showed little sensitivity to parameters related to cask inspections.
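
    A minimal sketch of how a complementary cumulative distribution function (CCDF) is built from Monte Carlo dose output follows; the lognormal dose samples are synthetic stand-ins for actual code output.

        # Empirical CCDF: P(dose > d) from Monte Carlo samples
        import numpy as np

        rng = np.random.default_rng(1)
        dose = rng.lognormal(mean=-2.0, sigma=1.5, size=10_000)  # illustrative units

        dose_sorted = np.sort(dose)
        ccdf = 1.0 - np.arange(1, dose.size + 1) / dose.size     # exceedance prob.

        for p in (0.5, 0.1, 0.01):
            idx = np.argmax(ccdf <= p)   # first dose whose exceedance prob. falls to p
            print(f"dose exceeded with probability {p}: {dose_sorted[idx]:.3g}")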

  4. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    PubMed

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, (1) non-parametric bootstrapping and (2) multivariate Normal distributions, were applied in a simulation study and a case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
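
    A compact sketch of approach (1) on synthetic data follows: patient-level times are bootstrapped and a Weibull is refitted on each replicate, so that parameter uncertainty propagates into the modeled mean time-to-event. The Weibull choice and all numbers are assumptions for illustration.

        # Non-parametric bootstrap of a fitted time-to-event distribution
        import numpy as np
        from math import gamma
        from scipy import stats

        rng = np.random.default_rng(7)
        times = stats.weibull_min.rvs(1.4, scale=20.0, size=25, random_state=rng)

        n_boot = 1000
        mean_tte = np.empty(n_boot)
        for b in range(n_boot):
            resampled = rng.choice(times, size=times.size, replace=True)
            shape, _, scale = stats.weibull_min.fit(resampled, floc=0)
            mean_tte[b] = scale * gamma(1.0 + 1.0 / shape)   # Weibull mean

        lo, hi = np.percentile(mean_tte, [2.5, 97.5])
        print(f"mean time-to-event: {mean_tte.mean():.2f} (95% CI {lo:.2f}-{hi:.2f})")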

  5. Sensitivity Analysis of Flux Determination in Heart by H₂¹⁸O-Provided Labeling Using a Dynamic Isotopologue Model of Energy Transfer Pathways

    PubMed Central

    Schryer, David W.; Peterson, Pearu; Illaste, Ardo; Vendelin, Marko

    2012-01-01

    To characterize intracellular energy transfer in the heart, two organ-level methods have frequently been employed: inversion and saturation transfer, and dynamic labeling. Creatine kinase (CK) fluxes obtained by following oxygen labeling have been considerably smaller than the fluxes determined by saturation transfer. It has been proposed that dynamic labeling determines the net flux through the CK shuttle, whereas saturation transfer measures the total unidirectional flux. However, to our knowledge, no sensitivity analysis of flux determination by oxygen labeling has been performed, limiting our ability to compare flux distributions predicted by different methods. Here we analyze oxygen labeling in a physiological heart phosphotransfer network with active CK and adenylate kinase (AdK) shuttles and establish which fluxes determine the labeling state. A mathematical model consisting of a system of ordinary differential equations was composed, describing ¹⁸O enrichment in each phosphoryl group and in inorganic phosphate. By varying flux distributions in the model and calculating the labeling, we analyzed the sensitivity of the labeling to different fluxes in the heart. We observed that the labeling state is predominantly sensitive to the total unidirectional CK and AdK fluxes and not to the net fluxes. We conclude that measuring the dynamic incorporation of ¹⁸O into the high-energy phosphotransfer network in heart does not permit unambiguous determination of energetic fluxes with a magnitude higher than the ATP synthase rate when the bidirectionality of fluxes is taken into account. Our analysis suggests that the flux distributions obtained using dynamic labeling, after removing the net-flux assumption, are comparable with those from inversion and saturation transfer. PMID:23236266
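
    The abstract's central point can be reproduced in a toy two-pool sketch (below; not the authors' full isotopologue model): the transient labeling of a PCr-like pool depends on the total unidirectional exchange flux, and hardly at all on the net flux. All pool sizes and fluxes are invented.

        # Two-pool 18O labeling toy model: ATP pool (a) fed by labeled phosphate
        # via a synthase flux, exchanging with a PCr pool (b) at unidirectional
        # fluxes J_f (forward) and J_r (reverse).
        import numpy as np
        from scipy.integrate import solve_ivp

        A_POOL, B_POOL, J_SYN = 5.0, 15.0, 1.0   # invented pool sizes and synthase flux

        def rhs(t, y, J_f, J_r):
            a, b = y                              # label enrichment of ATP (a), PCr (b)
            da = (J_SYN * (1.0 - a) + J_r * (b - a)) / A_POOL
            db = J_f * (a - b) / B_POOL
            return [da, db]

        def pcr_labeling(J_f, J_net, t_end=10.0):
            J_r = J_f - J_net                     # reverse flux set by total and net flux
            sol = solve_ivp(rhs, [0.0, t_end], [0.0, 0.0], args=(J_f, J_r))
            return sol.y[1, -1]

        # varying net flux at fixed total unidirectional flux barely changes labeling...
        print([round(pcr_labeling(10.0, jn), 3) for jn in (0.0, 0.5, 1.0)])
        # ...while varying total flux at fixed net flux changes it clearly
        print([round(pcr_labeling(jf, 0.5), 3) for jf in (2.0, 10.0, 50.0)])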

  6. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations on statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable, not-missing-at-random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  7. Distribution Analysis of Anthocyanins, Sugars, and Organic Acids in Strawberry Fruits Using Matrix-Assisted Laser Desorption/Ionization-Imaging Mass Spectrometry.

    PubMed

    Enomoto, Hirofumi; Sato, Kei; Miyamoto, Koji; Ohtsuka, Akira; Yamane, Hisakazu

    2018-05-16

    Anthocyanins, sugars, and organic acids contribute to the appearance, health benefits, and taste of strawberries. However, their spatial distribution in the ripe fruit has not been fully revealed. Therefore, we performed matrix-assisted laser desorption/ionization-imaging mass spectrometry (MALDI-IMS) analysis to investigate their spatial distribution in ripe strawberries. Detection sensitivity was improved by using the TM-Sprayer for matrix application. In the receptacle, pelargonidins were distributed in the skin, cortical, and pith tissues, whereas cyanidins and delphinidins were slightly localized in the skin. In the achene, cyanidins were mainly localized on the outer side of the skin. Citric acid was mainly distributed in the upper and lower parts of the cortical tissue. Although hexose was distributed almost uniformly throughout the fruit, sucrose was mainly distributed in the upper parts of the cortical and pith tissues. These results suggest that using the TM-Sprayer for MALDI-IMS is useful for microscopic distribution analysis of anthocyanins, sugars, and organic acids in strawberries.

  8. Sensitivity of allowable cuts to intensive management.

    Treesearch

    Roger D. Fight; Dennis L. Schweitzer

    1974-01-01

    A sensitivity analysis of allowable cuts on two BLM master units shows that even-flow allowable cuts depend primarily on: (1) assumed long-term growth potential, (2) period that growth increases must be cumulated before they can be removed from the stands on which they occur, and (3) amount and age-class distribution of the initial inventory. Current allowable cut...

  9. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which yields a representative value for each scenario. The approach provided insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. Multiple discrete solutions can exist for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not be easily represented by a single distribution or by a mean and variance, and care should be taken in the interpretation and reporting of results.
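
    The driver-loop pattern described here is easy to sketch. Since NETPATH is an external program, the run_inverse_model function below is a hypothetical stand-in for writing its input files and parsing its output; distributions and numbers are invented.

        # Monte Carlo driver pattern for an external inverse model
        import numpy as np

        rng = np.random.default_rng(3)

        def run_inverse_model(inputs):
            # Hypothetical placeholder: returns (source-water fraction, C-14 age).
            frac = np.clip(0.6 + 0.5 * (inputs["ca_mg_l"] - 40.0) / 40.0, 0.0, 1.0)
            age = 12_000.0 * frac + 80.0 * inputs["dic_mg_l"]
            return frac, age

        # one entry per uncertain constraint: (mean, coefficient of variation)
        constraints = {"ca_mg_l": (40.0, 0.10), "dic_mg_l": (25.0, 0.15)}

        results = []
        for _ in range(5000):
            sample = {k: rng.normal(mu, cv * mu) for k, (mu, cv) in constraints.items()}
            results.append(run_inverse_model(sample))
        frac, age = np.array(results).T
        print(f"source fraction: {frac.mean():.2f} +/- {frac.std():.2f}")
        print(f"C-14 travel time: {age.mean():,.0f} +/- {age.std():,.0f} yr")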

  10. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
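
    On synthetic data, the joint-testing idea can be sketched with statsmodels' MANOVA (assuming statsmodels and pandas are available), which reports Pillai's trace, Wilks' lambda, and Hotelling-Lawley trace, the statistics underlying the approximate F-tests; the genotype-trait model below is invented.

        # Joint test of one variant against two traits sharing a pleiotropic effect
        import numpy as np
        import pandas as pd
        from statsmodels.multivariate.manova import MANOVA

        rng = np.random.default_rng(11)
        n = 400
        geno = rng.integers(0, 3, n).astype(float)   # additive genotype coding 0/1/2
        shared = 0.25 * geno                          # pleiotropic genetic effect
        df = pd.DataFrame({
            "g": geno,
            "trait1": shared + rng.normal(0, 1, n),
            "trait2": 0.8 * shared + rng.normal(0, 1, n),
        })

        fit = MANOVA.from_formula("trait1 + trait2 ~ g", data=df)
        print(fit.mv_test())                          # Pillai, Wilks, Hotelling-Lawley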

  11. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  12. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
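
    The grouping idea can be illustrated with a plain pick-freeze estimator that computes closed first-order Sobol' indices for groups of inputs rather than individual parameters; the toy function below stands in for the groundwater model, and the group labels are invented.

        # Grouped first-order Sobol' indices via the pick-freeze estimator
        import numpy as np

        rng = np.random.default_rng(5)

        def model(x):
            # toy response: group A = x0, x1 (e.g., boundary conditions),
            # group B = x2, x3, x4 (e.g., permeability-field coefficients)
            return 2.0 * x[:, 0] + x[:, 1] + x[:, 2] * x[:, 3] + 0.5 * x[:, 4]

        groups = {"boundary": [0, 1], "permeability": [2, 3, 4]}
        n, d = 200_000, 5
        A = rng.uniform(0, 1, (n, d))
        B = rng.uniform(0, 1, (n, d))
        fA = model(A)
        var_y = fA.var()

        for name, cols in groups.items():
            AB = B.copy()
            AB[:, cols] = A[:, cols]          # freeze the group's columns from A
            fAB = model(AB)
            S = (np.mean(fA * fAB) - fA.mean() * fAB.mean()) / var_y
            print(f"closed first-order index S[{name}] = {S:.3f}")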

  13. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  14. Mathematical 3D modelling and sensitivity analysis of multipolar radiofrequency ablation in the spine.

    PubMed

    Matschek, Janine; Bullinger, Eric; von Haeseler, Friedrich; Skalej, Martin; Findeisen, Rolf

    2017-02-01

    Radiofrequency ablation is a valuable tool in the treatment of many diseases, especially cancer. However, controlled heating up to apoptosis of the desired target tissue in complex situations, e.g. in the spine, is challenging and requires experienced interventionalists. For such challenging situations a mathematical model of radiofrequency ablation allows to understand, improve and optimise the outcome of the medical therapy. The main contribution of this work is the derivation of a tailored, yet expandable mathematical model, for the simulation, analysis, planning and control of radiofrequency ablation in complex situations. The dynamic model consists of partial differential equations that describe the potential and temperature distribution during intervention. To account for multipolar operation, time-dependent boundary conditions are introduced. Spatially distributed parameters, like tissue conductivity and blood perfusion, allow to describe the complex 3D environment representing diverse involved tissue types in the spine. To identify the key parameters affecting the prediction quality of the model, the influence of the parameters on the temperature distribution is investigated via a sensitivity analysis. Simulations underpin the quality of the derived model and the analysis approach. The proposed modelling and analysis schemes set the basis for intervention planning, state- and parameter estimation, and control. Copyright © 2016. Published by Elsevier Inc.

  15. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.

  16. Synthesis of MSnO₃ (M = Ba, Sr) nanoparticles by reverse micelle method and particle size distribution analysis by whole powder pattern modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Jahangeer; Blakely, Colin K.; Bruno, Shaun R.

    2012-09-15

    Highlights: ► BaSnO₃ and SrSnO₃ nanoparticles synthesized using the reverse micelle method. ► Particle size and size distribution studied by whole powder pattern modeling. ► Nanoparticles are of optimal size for investigation in dye-sensitized solar cells. -- Abstract: Light-to-electricity conversion efficiency in dye-sensitized solar cells critically depends not only on the dye molecule, semiconducting material, and redox shuttle selection, but also on the particle size and particle size distribution of the semiconducting photoanode. In this study, nanocrystalline BaSnO₃ and SrSnO₃ particles have been synthesized using the microemulsion method. Particle size distribution was studied by whole powder pattern modeling, which confirmed a narrow particle size distribution with an average size of 18.4 ± 8.3 nm for SrSnO₃ and 15.8 ± 4.2 nm for BaSnO₃. These values are in close agreement with results of transmission electron microscopy. The prepared materials have an optimal microstructure for subsequent investigation in dye-sensitized solar cells.

  17. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis, perhaps due to the complexity of current fiber alignment methods. The speed and sensitivity of edge detection and the fast Fourier transform (FFT) were compared for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100-fold compared to the matrix-multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-resolved fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
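
    A simplified Python analogue of the gradient-based angle measurement is sketched below (the paper's implementation is in MATLAB): Sobel filters give a gradient orientation per pixel, fiber orientation is taken perpendicular to it, and strong edges are histogrammed. The test image is synthetic.

        # Gradient-orientation fiber-angle histogram on a synthetic striped image
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)
        yy, xx = np.mgrid[0:256, 0:256]
        normal = np.deg2rad(120.0)            # stripe normal at 120 deg, fibers at 30 deg
        img = np.sin((xx * np.cos(normal) + yy * np.sin(normal)) / 3.0)
        img += 0.2 * rng.normal(size=img.shape)

        gx = ndimage.sobel(img, axis=1)       # derivative along x
        gy = ndimage.sobel(img, axis=0)       # derivative along y
        mag = np.hypot(gx, gy)
        fiber_angle = (np.rad2deg(np.arctan2(gy, gx)) + 90.0) % 180.0  # perpendicular

        strong = mag > np.percentile(mag, 90) # keep only strong edges
        hist, edges = np.histogram(fiber_angle[strong], bins=18, range=(0, 180),
                                   weights=mag[strong])
        print(f"dominant fiber-angle bin starts at {edges[np.argmax(hist)]:.0f} deg "
              "(true orientation 30 deg)")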

  18. Fast calculation of the sensitivity matrix in magnetic induction tomography by tetrahedral edge finite elements and the reciprocity theorem.

    PubMed

    Hollaus, K; Magele, C; Merwa, R; Scharfetter, H

    2004-02-01

    Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational costs of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method various simulations were carried out and discussed.

  19. [A cross-racial analysis on the susceptible gene polymorphisms of salt-sensitive hypertension].

    PubMed

    Lu, Jia-peng; Zhang, Ling; Wang, Wei

    2010-10-01

    To compare the genetic distributions of salt sensitivity among four ethnic populations in the HapMap database. The frequency data (395 subjects) for salt-sensitivity polymorphisms (AGT/M235T, ACE/ID, CYP11B2/C-344T, ADD1/Gly460Trp, GNB3/C825T and CYP3A5/A6986G) of Utah residents with ancestry from northern and western Europe (CEU), Han Chinese in Beijing (CHB), Japanese in Tokyo (JPT) and Yoruba mother-father-child trios in Ibadan, Nigeria (YRI) were obtained from the International HapMap Project. The goodness-of-fit χ² test was performed to test whether the frequencies of each genotype conformed to Hardy-Weinberg equilibrium. Differences in genotype and allele distributions and trend analyses were assessed via the χ² test. Furthermore, multiple comparisons between pairs of populations were analyzed by Lancaster's partition of chi-squares. There were significant differences in each genotype distribution among the four ethnic populations (P < 0.05). The distributions of genotype frequencies and susceptible allele frequencies of salt-sensitivity candidate genes were similar between CHB and JPT. Except for the GNB3/825T allele (38.8% vs. 34.4%, P = 0.521), susceptible allele frequencies of AGT/235T (79.2% vs. 41.2%, P < 0.001), ACE/I (56.5% vs. 43.5%, P < 0.001), CYP11B2/-344T (74.1% vs. 56.7%, P = 0.001), ADD1/460Trp (51.8% vs. 20.4%, P < 0.001) and CYP3A5/6986A (30.1% vs. 3.6%, P < 0.001) were significantly higher in CHB than in CEU. The ADD1/460Trp allele frequency was significantly lower in YRI (4%) than in CHB (51.8%, P < 0.001). However, the frequencies of AGT/235T, CYP11B2/-344T, GNB3/825T and CYP3A5/6986A in CHB were significantly lower than those in YRI (P < 0.05). Trend analyses showed significantly increasing trends for AGT/235T (41.2% < 79.2% < 92.0%, P < 0.001), CYP11B2/-344T (56.7% < 74.1% < 84.8%, P < 0.001) and CYP3A5/6986A (3.6% < 30.1% < 84.5%, P < 0.001) across CEU, CHB and YRI. There were significant discrepancies in the distributions of salt-sensitivity variants among the four ethnic populations in the HapMap database. The frequencies of susceptible polymorphisms related to salt sensitivity in the Beijing Han population were similar to those in JPT, higher than those in CEU, but lower than those in YRI, suggesting high salt sensitivity and hypertension risk in the Beijing Han population. Prevention and individualized therapy for the high-risk population will help to reduce the prevalence of salt-sensitive hypertension and cardiovascular diseases.

  20. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
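
    The variance-reduction ingredient of the second step can be sketched on a toy birth-death process: re-using random-number seeds couples the perturbed and nominal simulations, shrinking the variance of a finite-difference sensitivity estimate. This illustrates the general idea only, not the paper's specific coupling construction.

        # Common-random-numbers (CRN) finite-difference sensitivity, Gillespie SSA
        import numpy as np

        def birth_death(k_birth, seed, k_death=0.1, t_end=20.0):
            # SSA for a birth-death process; returns the state at t_end
            rng = np.random.default_rng(seed)
            x, t = 0, 0.0
            while True:
                birth, death = k_birth, k_death * x
                total = birth + death
                t += rng.exponential(1.0 / total)
                if t > t_end:
                    return x
                x += 1 if rng.random() < birth / total else -1

        theta, h, n = 2.0, 0.05, 400
        crn = np.array([(birth_death(theta + h, s) - birth_death(theta, s)) / h
                        for s in range(n)])                       # shared seeds
        ind = np.array([(birth_death(theta + h, 10_000 + s)
                         - birth_death(theta, 20_000 + s)) / h
                        for s in range(n)])                       # independent seeds

        exact = (1.0 - np.exp(-0.1 * 20.0)) / 0.1                 # exact for this toy
        print(f"dE[X(T)]/dtheta ~ {crn.mean():.1f} (exact {exact:.1f})")
        print(f"std. error: CRN {crn.std(ddof=1)/np.sqrt(n):.2f} vs "
              f"independent {ind.std(ddof=1)/np.sqrt(n):.2f}")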

  1. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening-design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. Exact values of the sensitivities to these physical and modeling parameters were then obtained using direct recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the spacing between spent-fuel canisters, and the canister radius. Other important parameters were those related to salt properties at the point of interest in the repository.

  2. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of one another. However, correlated parameters are often encountered in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
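
    For a linear model with Gaussian inputs the analytic decomposition is one line, Var(y) = aᵀCa, and the correlation contribution is simply the off-diagonal part of that quadratic form. The sketch below checks the split against Monte Carlo with invented coefficients.

        # Analytic variance of y = a.x with correlated inputs, checked by Monte Carlo
        import numpy as np

        a = np.array([1.0, 2.0, -1.5])              # model coefficients
        sd = np.array([0.5, 1.0, 0.8])
        rho = np.array([[1.0, 0.6, 0.0],
                        [0.6, 1.0, -0.3],
                        [0.0, -0.3, 1.0]])          # input correlation matrix
        C = rho * np.outer(sd, sd)                  # covariance matrix

        var_total = a @ C @ a
        var_indep = np.sum((a * sd) ** 2)           # if inputs were independent
        print(f"analytic Var(y) = {var_total:.3f} (independent part {var_indep:.3f}, "
              f"correlation part {var_total - var_indep:.3f})")

        rng = np.random.default_rng(9)
        x = rng.multivariate_normal(np.zeros(3), C, size=200_000)
        print(f"Monte Carlo Var(y) = {(x @ a).var():.3f}")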

  3. Experimental evaluation of the sensitivity to fuel utilization and air management on a 100 kW SOFC system

    NASA Astrophysics Data System (ADS)

    Santarelli, M.; Leone, P.; Calì, M.; Orsello, G.

    The tubular SOFC generator CHP-100, built by Siemens Power Generation (SPG) Stationary Fuel Cells (SFC), is running at Gas Turbine Technologies (GTT) in Torino, Italy, in the framework of the EOS Project. At nominal load the generator produces around 105 kWe AC of electric power and around 60 kWt of thermal power at 250 °C, the latter used by the custom-tailored HVAC system. Several experimental sessions have been scheduled on the generator, with the aim of characterizing its operation through global performance indices and detailed monitoring of the individual bundles of the whole stack. All scheduled tests were performed using design-of-experiments methodology; the main results show the effect of changes in the analyzed operating factors on the distribution of voltage and temperature over the stack. Fuel consumption tests give information about the sensitivity of the voltage and temperature distributions along the individual bundles. Since the generator is an air-cooled system, the tests on the air stoichiometry were used to analyze the generator's thermal management (temperature distribution and profiles) and its effect on polarization. The sensitivity of the local voltage to overall fuel consumption changes can be used as a powerful procedure to deduce the local distribution of fuel utilization (FU) along the individual bundles: through a model obtained by differentiating the polarization curve with respect to FU, the distribution of voltage sensitivities to FU can be linked to the distribution of local FU. The FU distribution is shown to be non-uniform, which affects the local voltages and temperatures and causes strong warming in some rows of the generator. The effectiveness of thermal regulation by the air stoichiometry in reducing the non-uniform temperature distribution and the overheating (thereby improving the voltage behavior along the generator) is therefore discussed. It is demonstrated that the use of a single air plenum is not effective for thermal regulation of the whole generator, in particular for reducing the temperature gradients linked to the non-uniform fuel distribution.

  4. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
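
    A hedged sketch of the probabilistic-sensitivity-analysis approach follows: each cost input is given a distribution built from the collected data (gamma distributions are a common choice for right-skewed costs) and the incremental cost is summarized over draws. The figures are invented, not the trial's data.

        # Probabilistic sensitivity analysis over per-parameter cost distributions
        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000

        def gamma_from_mean_sd(mean, sd, size):
            shape = (mean / sd) ** 2            # gamma parameterized by mean and SD
            return rng.gamma(shape, scale=mean / shape, size=size)

        cost_interv = (gamma_from_mean_sd(1800.0, 500.0, n)    # staffing
                       + gamma_from_mean_sd(900.0, 300.0, n))  # materials, site costs
        cost_control = gamma_from_mean_sd(300.0, 150.0, n)

        incremental = cost_interv - cost_control
        lo, hi = np.percentile(incremental, [2.5, 97.5])
        print(f"incremental cost: ${incremental.mean():,.0f} "
              f"(95% interval ${lo:,.0f} to ${hi:,.0f})")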

  5. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95 % confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.

  6. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.

  7. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    PubMed

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.
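
    Outside the FE package, the response-surface step can be sketched as fitting a quadratic to a handful of (depth, pitch, peak stress) design points and minimizing it within the design bounds; the stress function below is an invented surrogate for the FEA runs, not the study's model.

        # Response-surface optimization of micro-thread depth and pitch
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)

        def fea_peak_stress(depth, pitch):
            # hypothetical surrogate for one finite element run (MPa)
            return 120 + 400*(depth - 0.30)**2 + 250*(pitch - 0.29)**2 + rng.normal(0, 0.5)

        # design points over the micro-thread ranges quoted in the abstract
        depths = np.linspace(0.25, 0.30, 5)
        pitches = np.linspace(0.27, 0.33, 5)
        D, P = [g.ravel() for g in np.meshgrid(depths, pitches)]
        y = np.array([fea_peak_stress(d, p) for d, p in zip(D, P)])

        # full quadratic basis: 1, d, p, d^2, p^2, d*p
        X = np.column_stack([np.ones_like(D), D, P, D**2, P**2, D*P])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        surface = lambda v: coef @ np.array([1.0, v[0], v[1], v[0]**2, v[1]**2, v[0]*v[1]])
        res = minimize(surface, x0=[0.27, 0.30], bounds=[(0.25, 0.30), (0.27, 0.33)])
        print(f"optimal depth {res.x[0]:.3f} mm, pitch {res.x[1]:.3f} mm, "
              f"predicted peak stress {res.fun:.1f} MPa")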

  8. Sensitivity analysis of urban flood flows to hydraulic controls

    NASA Astrophysics Data System (ADS)

    Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah

    2017-04-01

    Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of flood flow spatiotemporal dynamics, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at 1/200 scale, able to reproduce urban flooding with flood flows corresponding to frequent-to-rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge), and spatially uniform or distributed friction coefficients and porosity, respectively, each tested over ranges centered on nominal values calibrated against accurate experimental data and their uncertainties. For various experimental configurations, a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on the two main streets of the experimental device is presented here. Results show that the closer a location is to the downstream water-height boundary condition, the higher the corresponding Sobol' index, as predicted by hydraulic theory for subcritical flow, while, interestingly, the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study, along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. For simulations with distributed friction coefficients, crossroad friction is shown to have a much greater influence on the upstream water depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration to better understand flow dynamics and distribution, and also to guide model calibration in light of flow controls.

  9. Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution

    NASA Astrophysics Data System (ADS)

    Álvarez, Ezequiel

    2012-08-01

    We propose selection cuts on the LHC tt¯ production sample which should enhance the sensitivity to new physics signals in the study of the tt¯ invariant mass distribution. We show that selecting events in which the tt¯ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role and assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.

  10. Development of a highly sensitive and specific ELISA method for the determination of l-corydalmine in SD rats with monoclonal antibody.

    PubMed

    Zhang, Hongwei; Gao, Lan; Shu, Menglin; Liu, Jihua; Yu, Boyang

    2018-01-15

    l-Corydalmine (l-CDL) is a potent analgesic constituent of the traditional Chinese medicine Rhizoma Corydalis. However, the pharmacokinetics and tissue distribution of l-CDL in vivo are still unknown. It is therefore necessary to establish a simple and sensitive method to detect l-CDL, which will be helpful for studying its distribution and pharmacokinetics. To determine this compound in biological samples, a monoclonal antibody (mAb) against l-CDL was produced and a fast, highly sensitive indirect competitive enzyme-linked immunosorbent assay (icELISA) was developed in this study. The icELISA was applied to determine l-CDL in biological samples. The limit of detection (LOD) of the method was 0.015 ng/mL, with a linear range of 1-1000 ng/mL (R² = 0.9912). The intra- and inter-day precisions were below 15% and the recoveries were within 80-117%. Finally, the developed immunoassay was successfully applied to the analysis of the distribution of l-CDL in SD rats. In conclusion, the icELISA based on the anti-l-CDL mAb can be considered a highly sensitive and rapid method for the determination of l-CDL in biological samples. The ELISA approach may provide a valuable tool for the analysis of small molecules in biological samples. Copyright © 2017. Published by Elsevier B.V.
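
    The standard-curve step behind any such immunoassay can be sketched with a four-parameter logistic (4PL) fit that is then inverted to quantify unknowns; the numbers below are illustrative, not the paper's calibration data.

        # 4PL calibration curve for a competitive ELISA, with inversion
        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(conc, top, bottom, ic50, hill):
            # absorbance decreases with analyte in a competitive format
            return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

        conc = np.array([1, 3, 10, 30, 100, 300, 1000], float)   # ng/mL standards
        rng = np.random.default_rng(6)
        absorb = four_pl(conc, 1.8, 0.1, 40.0, 1.0) + rng.normal(0, 0.02, conc.size)

        popt, _ = curve_fit(four_pl, conc, absorb, p0=[2.0, 0.0, 50.0, 1.0])
        top, bottom, ic50, hill = popt

        def invert(a):
            # concentration giving absorbance a under the fitted 4PL
            return ic50 * ((top - bottom) / (a - bottom) - 1.0) ** (1.0 / hill)

        print(f"fitted IC50 = {ic50:.1f} ng/mL; sample at A=0.9 -> {invert(0.9):.1f} ng/mL")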

  11. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of each uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.

  12. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
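
    The simulation design can be reproduced in miniature: with known Weibull shape k, t^k is exponentially distributed, so an inverted-gamma prior is conjugate for the scale-like parameter, and the simulated mean squared errors of the Bayes and unbiased estimators can be compared while the true parameter is drawn from generating distributions of varying shape. Everything below is an invented illustration of that design, not the paper's code.

        # Robustness of a conjugate Bayes estimator to the assigned prior
        import numpy as np

        rng = np.random.default_rng(12)
        k, n, alpha, beta = 2.0, 10, 3.0, 4.0   # Weibull shape; sample size; IG prior

        def mse(theta_draws):
            err_bayes, err_mvu = [], []
            for theta in theta_draws:
                t = rng.weibull(k, n) * theta ** (1.0 / k)   # t**k ~ Exp(mean theta)
                s = np.sum(t ** k)
                err_bayes.append(((beta + s) / (alpha + n - 1) - theta) ** 2)
                err_mvu.append((s / n - theta) ** 2)
            return np.mean(err_bayes), np.mean(err_mvu)

        generators = {
            "inverted gamma (matched)": beta / rng.gamma(alpha, size=2000),
            "uniform(0.5, 3.5)": rng.uniform(0.5, 3.5, 2000),
            "lognormal": rng.lognormal(0.3, 0.5, 2000),
        }
        for name, draws in generators.items():
            b, m = mse(draws)
            print(f"{name:28s} MSE Bayes {b:.3f} vs unbiased {m:.3f}")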

  13. Effects of habitat map generalization in biodiversity assessment

    NASA Technical Reports Server (NTRS)

    Stoms, David M.

    1992-01-01

    Species richness is being mapped as part of an inventory of biological diversity in California (i.e., gap analysis). Species distributions are modeled with a GIS on the basis of maps of each species' preferred habitats. Species richness is then tallied in equal-area sampling units. A GIS sensitivity analysis examined the effects of the level of generalization of the habitat map on the predicted distribution of species richness in the southern Sierra Nevada. As the habitat map was generalized, the number of habitat types mapped within grid cells tended to decrease with a corresponding decline in numbers of species predicted. Further, the ranking of grid cells in order of predicted numbers of species changed dramatically between levels of generalization. Areas predicted to be of greatest conservation value on the basis of species richness may therefore be sensitive to GIS data resolution.

  14. An Earthquake Source Sensitivity Analysis for Tsunami Propagation in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Necmioglu, Ocal; Meral Ozel, Nurcan

    2013-04-01

    An earthquake source parameter sensitivity analysis for tsunami propagation in the Eastern Mediterranean has been performed, based on the 8 August 1303 Crete and Dodecanese Islands earthquake, which caused destructive inundation in the Eastern Mediterranean. The analysis involves 23 cases describing different sets of strike, dip, rake and focal depth, while keeping the fault area and displacement, and thus the magnitude, the same. The main conclusions of the evaluation are drawn from the wave height distributions at Tsunami Forecast Points (TFP). The comparison of earthquake and initial tsunami source parameters indicated that the maximum initial wave height values correspond in general to changes in the rake angle. No clear depth dependency is observed within the depth range considered, and no strike-angle dependency is observed in terms of amplitude change. The directivity sensitivity analysis indicated that, for the same strike and dip, a 180° shift in rake may lead to a 20% change in the calculated tsunami wave height. Moreover, a difference of approximately 10 min in the arrival time of the initial wave has been observed. These differences are, however, greatly reduced in the far field. The dip sensitivity analysis, performed separately for thrust and normal faulting, indicated in both cases that an increase in the dip angle results in a decrease of the tsunami wave amplitude of approximately 40% in the near field. While a positive phase shift is observed, the period and shape of the initial wave remain nearly the same for all dip angles at the respective TFPs. These effects are, however, not observed in the far field. The resolution of the bathymetry, on the other hand, is a limiting factor for further evaluation. Four different cases were considered for the depth sensitivity, indicating that within the depth range considered (15-60 km), increasing the depth has only a smoothing effect on the synthetic tsunami wave height measurements at the selected TFPs. The strike sensitivity analysis showed a clear phase shift with respect to the variation of the strike angle, without leading to severe variation of the initial and maximum waves at the locations considered. Travel time maps for two cases differing in strike value (60° vs. 150°) showed a more complex wave propagation for the 60° strike case, because the normal of the fault plane is then orthogonal to the main bathymetric structure in the region, namely the eastern section of the Hellenic Arc between the Crete and Rhodes Islands. For a given set of strike, dip and focal depth parameters, the effect of variation in the rake angle has been evaluated in the rake sensitivity analysis. A waveform envelope composed of symmetric synthetic recordings at one TFP could be clearly observed as a result of rake angle variations in the 0-180° range. This also leads to the conclusion that, for a given magnitude (fault size and displacement), the expected maximum and minimum tsunami wave amplitudes can be evaluated as a waveform envelope rather than being limited to a single point in time or amplitude. The initial wave arrival times follow an expected pattern controlled by distance, whereas the maximum wave arrival time distribution presents no clear pattern. Nevertheless, the distribution is rather concentrated in the time domain for some TFPs.
Maximum positive and minimum negative wave amplitude distributions indicates a broader range for a subgroup of TFPs, wheras for the remaining TFPs the distributions are narrow. Any deviation from the expected trend of calculating narrower ranges of amplitude distributions could be interpreted as the result o the bathymetry and focusing effects. As similar studies conducted in the different parts of the globe indicated, the main characteristics of the tsunami propagation are unique for each basin. It should be noted, however, that the synthetic measurements obtained at the TFPs in the absence of high-resolution bathymetric data, should be considered only an overall guidance. The results indicate the importance of the accuracy of earthquake source parameters for reliable tsunami predictions and the need for high-resolution bathymetric data to be able to perform calculations with higher accuracy. On the other hand, this study did not address other parameters, such as heterogeneous slip distribution and rupture duration, which affect the tsunami initiation and propagation process.

  15. Quantitative method to determine the regional drinking water odorant regulation goals based on odor sensitivity distribution: illustrated using 2-MIB.

    PubMed

    Yu, Jianwei; An, Wei; Cao, Nan; Yang, Min; Gu, Junong; Zhang, Dong; Lu, Ning

    2014-07-01

    Taste and odor (T/O) in drinking water often cause consumer complaints and are thus regulated in many countries. However, people in different regions may exhibit different sensitivities toward T/O. This study proposed a method to determine regional drinking water odorant regulation goals (ORGs) based on the odor sensitivity distribution of the local population. The distribution of odor sensitivity to 2-methylisoborneol (2-MIB) in the local population of Beijing, China was revealed by using a normal distribution model to describe the odor complaint response to a 2-MIB episode in 2005; a 2-MIB concentration of 12.9 ng/L, corresponding to an FPA (flavor profile analysis) intensity of 2.5, was found to be the critical point for causing odor complaints. Thus the Beijing ORG for 2-MIB was determined to be 12.9 ng/L. Based on the assumptions that the local FPA panel can represent the local population in terms of sensitivity to odor, and that the critical FPA intensity causing odor complaints is 2.5, this study then determined the ORGs for seven other cities of China by performing FPA tests with an FPA panel from the corresponding city. ORG values between 12.9 and 31.6 ng/L were determined, showing that a unified ORG may not be suitable for drinking water odor regulations. This study presents a novel approach for setting drinking water odor regulations. Copyright © 2014. Published by Elsevier B.V.
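    A minimal sketch of the fitting step this method describes: the complaint response is modeled as a normal CDF in log concentration and inverted at a chosen critical complaint level. The data points and the 50% critical level below are illustrative assumptions, not the Beijing records.

    ```python
    # Fit a normal CDF to complaint response vs. odorant concentration and
    # invert it at a critical complaint level. Data are illustrative placeholders.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])             # 2-MIB, ng/L (illustrative)
    complaint_frac = np.array([0.01, 0.05, 0.20, 0.55, 0.85, 0.98])  # fraction complaining

    def response(logc, mu, sigma):
        # Normal distribution of odor sensitivity in log10(concentration)
        return norm.cdf(logc, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(response, np.log10(conc), complaint_frac, p0=(1.0, 0.5))

    critical_frac = 0.5  # complaint level deemed critical (assumption)
    org = 10 ** norm.ppf(critical_frac, loc=mu, scale=sigma)
    print(f"fitted mu={mu:.3f}, sigma={sigma:.3f}, ORG ~ {org:.1f} ng/L")
    ```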

  16. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier-Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
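    The core computation is the linear sensitivity system (∂R/∂Q)(dQ/dβ) = −∂R/∂β for each design variable β. Below is a serial SciPy sketch of that single solve; the sparse matrix is a diagonally dominant stand-in for the flow Jacobian, whereas the paper's code solved the real system in parallel with SPMD domain decomposition.

    ```python
    # Solve (dR/dQ) dQ/dbeta = -dR/dbeta with ILU-preconditioned GMRES.
    # The matrix here is a random, diagonally dominant stand-in, not a flow Jacobian.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    n = 2000
    A = (sp.random(n, n, density=5e-3, random_state=0) + 10.0 * sp.eye(n)).tocsc()
    dR_dbeta = rng.standard_normal(n)   # explicit derivative w.r.t. one design variable

    ilu = spla.spilu(A)                           # incomplete-LU preconditioner
    M = spla.LinearOperator((n, n), ilu.solve)

    dQ_dbeta, info = spla.gmres(A, -dR_dbeta, M=M)
    print("GMRES converged" if info == 0 else f"info={info}",
          "| residual:", np.linalg.norm(A @ dQ_dbeta + dR_dbeta))
    ```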

  17. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Treesearch

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  18. Space station electrical power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.; Twombly, Mark A.

    1988-01-01

    ARINC Research Corporation performed a preliminary reliability and maintainability (RAM) analysis of the NASA space station Electrical Power System (EPS). The analysis was performed using the ARINC Research-developed UNIRAM RAM assessment methodology and software program. The analysis was performed in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs, providing an indication of which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.

  19. The relevance of the slope for concentration-effect relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schobben, H.P.M.; Smit, M.; Schobben, J.H.M.

    1995-12-31

    Risk analysis is mostly based on a comparison of one value for the exposure to a chemical (PEC) and one value for the sensitivity of biota (NEC). This method enables the determination of whether an effect is to be expected, but it is not possible to quantify the magnitude of that effect. Moreover, it is impossible to estimate the effect of a combination of chemicals. Therefore, it is necessary to use a mathematical function to describe the relation between a concentration and the subsequent effect. These relations are typically based on a log-normal or log-logistic distribution of the sensitivity of individuals of a species. This distribution is characterized by the median sensitivity (EC50) and the variation between the sensitivities of individuals (a measure of the slope of the relation). At present, attention is focused on the median, while the slope might be even more important: relevant exposure concentrations are typically in the range found in the left tail of the sensitivity distribution. In this study the slope was determined for 250 chemical-species combinations. The data were derived from original experiments and from the literature. The slope is highly dependent on the exposure time; the shorter the exposure time, the steeper the slope. If data for a standard exposure time (96 hours) are considered, the total variation in slope can partly be explained by the groups of organisms and chemicals. The slope for heavy metals tends to be less steep than that for narcotic organic compounds. The slope for fish and molluscs is steeper than for crustaceans. The results of this study are presently being applied in a number of risk analysis studies.
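    The point about the left tail can be made concrete with the log-logistic model mentioned above; the EC50 and slope values below are illustrative, not from the study's 250 chemical-species combinations.

    ```python
    # Log-logistic concentration-effect relation: the slope dominates predictions
    # in the low-concentration tail, the range relevant for environmental exposure.
    def effect(conc, ec50, slope):
        """Fraction of individuals affected (log-logistic model)."""
        return 1.0 / (1.0 + (ec50 / conc) ** slope)

    conc = 1.0       # exposure concentration, well below the median sensitivity
    ec50 = 100.0
    for slope in (1.0, 2.0, 4.0):   # shallow vs. steep relations
        print(f"slope={slope}: affected fraction at C={conc} -> {effect(conc, ec50, slope):.2e}")
    # A steeper slope predicts orders-of-magnitude smaller effects in the left tail,
    # even though all three curves share the same EC50.
    ```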

  20. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  1. Optimal cure cycle design for autoclave processing of thick composite laminates: A feasibility study

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.

    1985-01-01

    The thermal analysis and the calculation of thermal sensitivity of a cure cycle in autoclave processing of thick composite laminates were studied. A finite element program for the thermal analysis and design derivatives calculation for temperature distribution and the degree of cure was developed and verified. It was found that the direct differentiation was the best approach for the thermal design sensitivity analysis. In addition, the approach of the direct differentiation provided time histories of design derivatives which are of great value to the cure cycle designers. The approach of direct differentiation is to be used for further study, i.e., the optimal cycle design.

  2. A sensitivity analysis of a surface energy balance model to LAI (Leaf Area Index)

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Cannarozzo, M.; Capodici, F.; La Loggia, G.; Santangelo, T.

    2008-10-01

    The LAI is a key parameter in hydrological processes, especially in physically based distributed models. It is a critical ecosystem attribute, since physiological processes such as photosynthesis, transpiration and evaporation depend on it. The diffusion of water vapor, momentum, heat and light through the canopy is regulated by the distribution and density of the leaves, branches, twigs and stems. The LAI influences the sensible heat flux H in single-source surface energy balance models through the calculation of the roughness length and of the displacement height. The aerodynamic resistance between the soil and the within-canopy source height is a function of the LAI through the roughness length. This research carried out a sensitivity analysis of some of the most important parameters of surface energy balance models to the time variation of the LAI, in order to take into account the effects of LAI variation with the phenological period. Finally, empirically retrieved relationships between field spectroradiometric data and field LAI measured with a light-sensitive instrument are presented for a cereal field.

  3. Some Sensitivity Studies of Chemical Transport Simulated in Models of the Soil-Plant-Litter System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begovich, C.L.

    2002-10-28

    Fifteen parameters in a set of five coupled models describing carbon, water, and chemical dynamics in the soil-plant-litter system were varied in a sensitivity analysis of model response. Results are presented for chemical distribution in the components of soil, plants, and litter, along with selected responses of biomass, internal chemical transport (xylem and phloem pathways), and chemical uptake. Response and sensitivity coefficients are presented for up to 102 model outputs in an appendix. Two soil properties (chemical distribution coefficient and chemical solubility) and three plant properties (leaf chemical permeability, cuticle thickness, and root chemical conductivity) had the greatest influence on chemical transport in the soil-plant-litter system under the conditions examined. Pollutant gas uptake (SO2) increased with changes in plant properties that increased plant growth. Heavy metal dynamics in litter responded to plant properties (phloem resistance, respiration characteristics) which induced changes in chemical cycling to the litter system. Some of the SO2 and heavy metal responses were not expected but became apparent through the modeling analysis.

  4. Land quality, sustainable development and environmental degradation in agricultural districts: A computational approach based on entropy indexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zambon, Ilaria, E-mail: ilaria.zambon@unitus.it; Colantoni, Andrea; Carlucci, Margherita

    Land Degradation (LD) in socio-environmental systems negatively impacts sustainable development paths. This study proposes a framework for LD evaluation based on indicators of diversification in the spatial distribution of sensitive land. We hypothesize that spatial heterogeneity in a composite index of land sensitivity is more frequently associated with areas prone to LD than spatial homogeneity. Spatial heterogeneity is supposed to be associated with degraded areas that act as hotspots for future degradation processes. A diachronic analysis (1960-2010) was performed at the Italian agricultural district scale to identify environmental factors associated with spatial heterogeneity in the degree of land sensitivity to degradation, based on the Environmentally Sensitive Area Index (ESAI). In 1960, diversification in the level of land sensitivity, measured using two common indexes of entropy (Shannon's diversity and Pielou's evenness), increased significantly with the ESAI, indicating a high level of land sensitivity to degradation. In 2010, the surface area classified as “critical” to LD was highest in districts with diversification in the spatial distribution of ESAI values, confirming the hypothesis formulated above. Entropy indexes, based on observed alignment with the concept of LD, constitute a valuable base to inform mitigation strategies against desertification. - Highlights: • Spatial heterogeneity is supposed to be associated with degraded areas. • Entropy indexes can inform mitigation strategies against desertification. • Assessing spatial diversification in the degree of land sensitivity to degradation. • Mediterranean rural areas have an evident diversity in agricultural systems. • A diachronic analysis carried out at the Italian agricultural district scale.
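    The two entropy indexes used here have standard forms, sketched below for the shares of land-sensitivity classes in a single district; the class shares are illustrative.

    ```python
    # Shannon diversity H' and Pielou evenness J over the shares of ESAI
    # land-sensitivity classes in a district (shares are illustrative).
    import numpy as np

    def shannon(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                     # 0 * log(0) contributes nothing
        return -np.sum(p * np.log(p))

    def pielou(p):
        """Evenness: Shannon diversity normalized by its maximum, log(S)."""
        p = np.asarray(p, dtype=float)
        s = np.count_nonzero(p)
        return shannon(p) / np.log(s) if s > 1 else 0.0

    # Shares of ESAI classes (e.g. non-affected / potential / fragile / critical)
    district = [0.10, 0.25, 0.40, 0.25]
    print(f"H' = {shannon(district):.3f}, J = {pielou(district):.3f}")
    ```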

  5. Ecological Sensitivity Evaluation of Tourist Region Based on Remote Sensing Image - Taking Chaohu Lake Area as a Case Study

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.

    2018-04-01

    Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. By using automatic extraction algorithms, various environmental resource information for a tourist region can be obtained from remote sensing imagery. Combined with GIS spatial analysis and landscape pattern analysis, the relevant environmental information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope information, the factors of human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index were used to construct an eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of the eco-sensitivity evaluation index, using the GIS technique of equal-interval reclassification, the Chaohu Lake area was divided into four grades: very sensitive areas, sensitive areas, sub-sensitive areas and insensitive areas. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2, accounting for about 33.12 %; the sensitive area covered 5130.0522 km2, accounting for about 37.12 %; the sub-sensitive area covered 3729.9312 km2, accounting for 26.99 %; and the insensitive area covered 382.4399 km2, accounting for about 2.77 %. At the same time, spatial differences in the ecological sensitivity of the Chaohu Lake basin were found. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed in gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and regional sustainable development of the Chaohu Lake tourist area.
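    A minimal sketch of the AHP-weighted overlay and equal-interval reclassification steps described above; the factor rasters and weights are illustrative stand-ins, not the study's data.

    ```python
    # AHP-weighted overlay of normalized factor rasters, then equal-interval
    # reclassification into four sensitivity grades.
    import numpy as np

    rng = np.random.default_rng(1)
    shape = (200, 200)
    # Normalized factor rasters in [0, 1] (e.g. human disturbance, land use degree, ...)
    factors = [rng.random(shape) for _ in range(5)]
    weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # AHP-derived (illustrative)

    index = sum(w * f for w, f in zip(weights, factors))  # overlay analysis

    # Equal-interval reclassification into grades 1..4 (4 = most sensitive)
    lo, hi = index.min(), index.max()
    grades = np.digitize(index, np.linspace(lo, hi, 5)[1:-1]) + 1
    areas = {g: int((grades == g).sum()) for g in (1, 2, 3, 4)}
    print("cell counts per grade:", areas)
    ```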

  6. Comparison of the Sensitivity of Surface Downward Longwave Radiation to Changes in Water Vapor at Two High Elevation Sites

    NASA Technical Reports Server (NTRS)

    Chen, Yonghua; Naud, Catherine M.; Rangwala, Imtiaz; Landry, Christopher C.; Miller, James R.

    2014-01-01

    Among the potential reasons for enhanced warming rates in many high elevation regions is the nonlinear relationship between surface downward longwave radiation (DLR) and specific humidity (q). In this study we use ground-based observations at two neighboring high elevation sites in Southwestern Colorado that have different local topography and are 1.3 kilometers apart horizontally and 348 meters vertically. We examine the spatial consistency of the sensitivities (partial derivatives) of DLR with respect to changes in q, and the sensitivities are obtained from the Jacobian matrix of a neural network analysis. Although the relationship between DLR and q is the same at both sites, the sensitivities are higher when q is smaller, which occurs more frequently at the higher elevation site. There is a distinct hourly distribution in the sensitivities at both sites especially for high sensitivity cases, although the range is greater at the lower elevation site. The hourly distribution of the sensitivities relates to that of q. Under clear skies during daytime, q is similar between the two sites, however under cloudy skies or at night, it is not. This means that the DLR-q sensitivities are similar at the two sites during daytime but not at night, and care must be exercised when using data from one site to infer the impact of water vapor feedbacks at another site, particularly at night. Our analysis suggests that care should be exercised when using the lapse rate adjustment to infill high frequency data in a complex topographical region, particularly when one of the stations is subject to cold air pooling as found here.

  7. Development and Sensitivity Analysis of a Frost Risk model based primarily on freely distributed Earth Observation data

    NASA Astrophysics Data System (ADS)

    Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis

    2015-04-01

    The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from the MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of the model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained by the MODIS sensor was evaluated thoroughly. Also, detailed comparisons of the model predictions were performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, the results evidenced the ability of the model to reproduce frost conditions reasonably well, following largely explainable patterns with respect to the study site and local weather characteristics. Implementation of the proposed frost risk model is based primarily on satellite imagery that is nowadays provided globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort in comparison to, for example, field surveying. Finally, the method is adjustable and can potentially be integrated with other high-resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece

  8. Design tradeoff studies and sensitivity analysis, appendices B1 - B4. [hybrid electric vehicles

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Documentation is presented for a program which separately computes fuel and energy consumption for the two modes of operation of a hybrid electric vehicle. The distribution of daily travel is specified as input data as well as the weights which the component driving cycles are given in each of the composite cycles. The possibility of weight reduction through the substitution of various materials is considered as well as the market potential for hybrid vehicles. Data relating to battery compartment weight distribution and vehicle handling analysis is tabulated.

  9. Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.

    PubMed

    Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T

    2007-03-30

    Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than it is to calculate the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration- and molar-mass-sensitive detector signals are examined.

  10. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  11. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    The high sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on phase-sensitive optical time-domain reflectometry (Φ-OTDR) also brings high nuisance alarm rates (NARs) in real applications. In this paper, the feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field testing signals, and an artificial neural network (ANN) is built for event identification. The comparison results show that WPD performs slightly better than WD for DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be effectively controlled as low as 5.6% for the identification network with wavelet packet energy distribution features.

  12. Distributional Cost-Effectiveness Analysis

    PubMed Central

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2015-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564
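    Stage 2 of a DCEA can be illustrated with an Atkinson-type equally-distributed-equivalent (EDE) measure, one common way of trading off total health against inequality; the group health values and inequality-aversion parameters below are illustrative, not the BCSP analysis.

    ```python
    # Score a social distribution of health with an Atkinson-type EDE:
    # higher inequality aversion (epsilon) penalizes unequal distributions more.
    import numpy as np

    def atkinson_ede(h, epsilon):
        """Equally distributed equivalent of health vector h (equal group sizes)."""
        h = np.asarray(h, dtype=float)
        if epsilon == 1.0:
            return np.exp(np.mean(np.log(h)))          # limiting (geometric-mean) case
        return np.mean(h ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))

    # Quality-adjusted life expectancy by socioeconomic group under two strategies
    baseline = np.array([62.0, 66.0, 69.0, 71.0, 74.0])
    targeted = np.array([64.0, 67.0, 69.5, 71.0, 73.5])

    for eps in (0.0, 1.0, 2.0):   # 0 = pure health maximization; higher = more aversion
        print(f"eps={eps}: baseline EDE={atkinson_ede(baseline, eps):.2f}, "
              f"targeted EDE={atkinson_ede(targeted, eps):.2f}")
    ```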

  13. Pain sensitivity profiles in patients with advanced knee osteoarthritis

    PubMed Central

    Frey-Law, Laura A.; Bohr, Nicole L.; Sluka, Kathleen A.; Herr, Keela; Clark, Charles R.; Noiseux, Nicolas O.; Callaghan, John J; Zimmerman, M Bridget; Rakel, Barbara A.

    2016-01-01

    The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision-making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined whether these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed prior to total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals that exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in four pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in five pain sensitivity profiles: a “low pressure pain” group, an “average pain” group, and three “high pain” sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in OA grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Further, these profiles are surprisingly similar to those reported in healthy populations, suggesting that individual differences in pain sensitivity are a robust finding even in an older population with significant disease. PMID:27152688

  14. Analysis of Mesh Distribution Systems Considering Load Models and Load Growth Impact with Loops on System Performance

    NASA Astrophysics Data System (ADS)

    Kumar Sharma, A.; Murty, V. V. S. N.

    2014-12-01

    The distribution system is the final link between the bulk power system and the consumer end. A distinctive load flow solution method, based on Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), is used for load flow analysis of radial and weakly meshed networks. This method has excellent convergence characteristics for both radial and weakly meshed structures, and is based on the bus-injection-to-branch-current and branch-current-to-bus-voltage matrices. The main contributions of the paper are: (i) an analysis of a weakly meshed network considering the addition of a number of loops and its impact on the losses, the kW and kVAr requirements from the system, and the voltage profile; (ii) the impact of different load models, a realistic ZIP load model, and load growth on losses, voltage profile, and kVA and kVAr requirements; (iii) the impact of the addition of loops on losses, voltage profile, and kVA and kVAr requirements from the substation; and (iv) a comparison of system performance with a radial distribution system. Voltage stability is a major concern in the planning and operation of power systems. This paper also identifies the critical bus, the one most sensitive to voltage collapse, in radial distribution networks. The node with the minimum value of the voltage stability index is the most sensitive node. Voltage stability index values are computed for the meshed network with a number of loops added to the system. Results have been obtained for the IEEE 33- and 69-bus test systems, as well as for the radial distribution system for comparison.
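    One widely used voltage stability index for radial feeders takes the Chakravorty-Das form sketched below; the node with the minimum index value is the most sensitive to collapse. Whether the paper uses exactly this formulation is an assumption, and the branch values are illustrative.

    ```python
    # Voltage stability index for the receiving node of a radial branch
    # (Chakravorty-Das form); lower values indicate proximity to voltage collapse.
    def stability_index(v_send, p_recv, q_recv, r, x):
        """SI at the receiving node of a branch.

        v_send : sending-end voltage magnitude (p.u.)
        p_recv, q_recv : total real/reactive load fed through the branch (p.u.)
        r, x : branch resistance and reactance (p.u.)
        """
        return (v_send ** 4
                - 4.0 * (p_recv * x - q_recv * r) ** 2
                - 4.0 * (p_recv * r + q_recv * x) * v_send ** 2)

    # Illustrative branch: 1.0 p.u. sending voltage, modest load
    print(stability_index(v_send=1.0, p_recv=0.1, q_recv=0.06, r=0.05, x=0.08))
    ```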

  15. Submillimeter-scale heterogeneity of labile phosphorus in sediments characterized by diffusive gradients in thin films and spatial analysis.

    PubMed

    Meng, Yuting; Ding, Shiming; Gong, Mengdan; Chen, Musong; Wang, Yan; Fan, Xianfang; Shi, Lei; Zhang, Chaosheng

    2018-03-01

    Sediments have a heterogeneous distribution of labile redox-sensitive elements due to a drastic downward transition from oxic to anoxic condition as a result of organic matter degradation. Characterization of the heterogeneous nature of sediments is vital for understanding of small-scale biogeochemical processes. However, there are limited reports on the related specialized methodology. In this study, the monthly distributions of labile phosphorus (P), a redox-sensitive limiting nutrient, were measured in the eutrophic Lake Taihu by Zr-oxide diffusive gradients in thin films (Zr-oxide DGT) on a two-dimensional (2D) submillimeter level. Geographical information system (GIS) techniques were used to visualize the labile P distribution at such a micro-scale, showing that the DGT-labile P was low in winter and high in summer. Spatial analysis methods, including semivariogram and Moran's I, were used to quantify the spatial variation of DGT-labile P. The distribution of DGT-labile P had clear submillimeter-scale spatial patterns with significant spatial autocorrelation during the whole year and displayed seasonal changes. High values of labile P with strong spatial variation were observed in summer, while low values of labile P with relatively uniform spatial patterns were detected in winter, demonstrating the strong influences of temperature on the mobility and spatial distribution of P in sediment profiles. Copyright © 2017 Elsevier Ltd. All rights reserved.
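    Global Moran's I, the spatial autocorrelation statistic used here, has a compact direct implementation for gridded DGT data; the random field below is a placeholder for a measured labile-P image, and rook (4-neighbour) contiguity weights are an assumption.

    ```python
    # Global Moran's I for a 2D grid, with binary rook-contiguity weights.
    import numpy as np

    def morans_i(z):
        z = np.asarray(z, dtype=float)
        dev = z - z.mean()
        num = 0.0
        w_sum = 0.0
        rows, cols = z.shape
        for i in range(rows):
            for j in range(cols):
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        num += dev[i, j] * dev[ni, nj]   # cross-product with neighbours
                        w_sum += 1.0
        n = z.size
        return (n / w_sum) * (num / np.sum(dev ** 2))

    rng = np.random.default_rng(2)
    field = rng.random((20, 20))        # placeholder for a DGT-labile P image
    print(f"Moran's I (random field, expect ~0): {morans_i(field):.3f}")
    ```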

  16. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants

    PubMed Central

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants. PMID:27304876

  18. Temperature-strain discrimination in distributed optical fiber sensing using phase-sensitive optical time-domain reflectometry.

    PubMed

    Lu, Xin; Soto, Marcelo A; Thévenaz, Luc

    2017-07-10

    A method based on coherent Rayleigh scattering that evaluates temperature and strain separately is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the proposed ϕOTDR-based system requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
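    The discrimination step reduces to inverting a 2x2 sensitivity matrix relating the two measured shifts to temperature and strain; the matrix entries below are hypothetical placeholders, not the paper's calibration coefficients, and the condition-number check mirrors the paper's conditioning argument.

    ```python
    # Two measured spectral shifts respond linearly to (dT, de):
    # [s1, s2]^T = K [dT, de]^T, so the measurands follow by inverting K.
    import numpy as np

    K = np.array([[1.30, 0.95],     # sensitivity of measurement 1 to (dT, de) -- placeholder
                  [0.05, 1.10]])    # sensitivity of measurement 2 to (dT, de) -- placeholder

    shifts = np.array([1.5, 0.8])   # measured shifts (arbitrary units)
    dT, de = np.linalg.solve(K, shifts)
    print(f"dT = {dT:.3f}, strain = {de:.3f}, cond(K) = {np.linalg.cond(K):.2f}")
    # Measurement noise maps to measurand uncertainty through K^-1; a smaller
    # condition number keeps that amplification, and hence the uncertainties, low.
    ```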

  19. A Process Improvement Study on a Military System of Clinics to Manage Patient Demand and Resource Utilization Using Discrete-Event Simulation, Sensitivity Analysis, and Cost-Benefit Analysis

    DTIC Science & Technology

    2015-03-12

    (Front-matter table-of-contents fragments omitted: frequency counts for the Optometry Clinic and a probability distribution summary table.) The clinics studied include the Audiology Clinic and the Optometry Clinic. Methodology Overview: The overarching research goal is to identify feasible solutions to

  20. WebDISCO: a web service for distributed cox model learning without patient-level data sharing.

    PubMed

    Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2015-11-01

    The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, it usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on proof-of-concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally, and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
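    The idea of exchanging only intermediate statistics can be sketched as follows: at each Newton iteration, every site evaluates per-event-time risk-set aggregates at the shared coefficient vector, and a coordinating server assembles the global Cox gradient and Hessian. This is a didactic reconstruction under synthetic data, not the WebDISCO implementation itself.

    ```python
    # Distributed Cox regression via shared risk-set aggregates (Breslow form).
    import numpy as np

    rng = np.random.default_rng(3)

    def make_site(n, p=2):
        x = rng.standard_normal((n, p))
        t = rng.exponential(scale=np.exp(-x @ np.array([0.5, -0.3])))  # true beta
        d = rng.random(n) < 0.7                  # event indicator (some censoring)
        return x, t, d

    sites = [make_site(150), make_site(200), make_site(120)]
    event_times = np.unique(np.concatenate([t[d] for _, t, d in sites]))

    def site_aggregates(x, t, d, beta, times):
        """Non-patient-level summaries a site would share with the server."""
        eta = np.exp(x @ beta)
        out = []
        for tk in times:
            at_risk = t >= tk
            ev = (t == tk) & d
            s0 = eta[at_risk].sum()
            s1 = (eta[at_risk, None] * x[at_risk]).sum(axis=0)
            s2 = (eta[at_risk, None, None] * x[at_risk, :, None] * x[at_risk, None, :]).sum(axis=0)
            out.append((s0, s1, s2, ev.sum(), x[ev].sum(axis=0)))
        return out

    beta = np.zeros(2)
    for _ in range(8):                           # Newton-Raphson on the global likelihood
        grad, hess = np.zeros(2), np.zeros((2, 2))
        aggs = [site_aggregates(x, t, d, beta, event_times) for x, t, d in sites]
        for k in range(len(event_times)):
            s0 = sum(a[k][0] for a in aggs)      # server-side aggregation across sites
            s1 = sum(a[k][1] for a in aggs)
            s2 = sum(a[k][2] for a in aggs)
            dk = sum(a[k][3] for a in aggs)
            xe = sum(a[k][4] for a in aggs)
            if dk == 0:
                continue
            grad += xe - dk * s1 / s0
            hess -= dk * (s2 / s0 - np.outer(s1 / s0, s1 / s0))
        beta = beta - np.linalg.solve(hess, grad)

    print("distributed Cox estimate:", beta)     # should be near [0.5, -0.3]
    ```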

  1. Sobol's sensitivity analysis for a fuel cell stack assembly model with the aid of structure-selection techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Cho, Chongdu; Piao, Changhao; Choi, Hojoon

    2016-01-01

    This paper presents a novel method for identifying the main parameters affecting the stress distribution of the components used in assembly modeling of a proton exchange membrane fuel cell (PEMFC) stack. This method is a combination of an approximation model and Sobol's method, which allows a fast global sensitivity analysis for a set of uncertain parameters using only a limited number of calculations. Seven major parameters, i.e., the Young's modulus of the end plate and of the membrane electrode assembly (MEA), the contact stiffness between the MEA and the bipolar plate (BPP), the X and Y positions of the bolts, the pressure of each bolt, and the thickness of the end plate, are investigated regarding their effect on four metrics, i.e., the maximum stresses of the MEA, BPP, and end plate, and the stress distribution percentage of the MEA. The analysis reveals the individual effects of each parameter and its interactions with the other parameters. The results show that the X position of a bolt has a major influence on the maximum stresses of the BPP and end plate, whereas the thickness of the end plate has the strongest effect on both the maximum stress and the stress distribution percentage of the MEA.
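    A minimal sketch of the Saltelli-sampling/Sobol-index workflow using the SALib Python package (the paper's own tooling is not specified); the analytic stand-in plays the role of the fitted approximation model, and the parameter bounds are illustrative assumptions.

    ```python
    # Saltelli sampling + Sobol indices over the seven stack-assembly parameters.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 7,
        "names": ["E_endplate", "E_MEA", "k_contact",
                  "bolt_x", "bolt_y", "bolt_pressure", "t_endplate"],
        "bounds": [[150e9, 220e9], [5e6, 20e6], [1e9, 5e9],
                   [0.02, 0.10], [0.02, 0.10], [1e6, 3e6], [0.01, 0.03]],
    }

    X = saltelli.sample(problem, 1024)           # N * (2D + 2) parameter sets

    def surrogate_max_stress(x):
        """Stand-in for the fitted approximation model (illustrative only)."""
        E_ep, E_mea, k, bx, by, p, t = x.T
        return (p / t * (1.0 + 5.0 * bx)         # bolt pressure spread by the end plate
                + 1e-4 * np.sqrt(E_ep)           # stiffer end plate -> higher peak
                - 0.5 * (k / 1e9) * by           # contact stiffness / bolt-y interaction
                + 2e-6 * E_mea)                  # weak direct MEA-modulus effect

    Si = sobol.analyze(problem, surrogate_max_stress(X))
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:14s} S1={s1:6.3f}  ST={st:6.3f}")
    ```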

  2. Spatially Resolved Chemical Imaging for Biosignature Analysis: Terrestrial and Extraterrestrial Examples

    NASA Astrophysics Data System (ADS)

    Bhartia, R.; Wanger, G.; Orphan, V. J.; Fries, M.; Rowe, A. R.; Nealson, K. H.; Abbey, W. J.; DeFlores, L. P.; Beegle, L. W.

    2014-12-01

    Detection of in situ biosignatures on terrestrial and planetary missions is becoming increasingly important. Missions that target the Earth's deep biosphere, Mars, the moons of Jupiter (including Europa), the moons of Saturn (Titan and Enceladus), and small bodies such as asteroids or comets require methods that enable detection of materials both for in-situ analysis that preserves context and as a means to select high-priority samples for return to Earth. In situ instrumentation for biosignature detection spans a wide range of analytical and spectroscopic methods that capitalize on amino acid distribution, chirality, lipid composition, isotopic fractionation, or textures that persist in the environment. Many of the existing analytical instruments are bulk analysis methods and, while highly sensitive, require sample acquisition and sample processing. However, by combining them with triaging spectroscopic methods, biosignatures can be targeted on a surface while preserving spatial context (including mineralogy, textures, and organic distribution). To provide spatially correlated chemical analysis at multiple spatial scales (meters to microns), we have employed a dual spectroscopic approach that capitalizes on high-sensitivity deep UV native fluorescence detection and high-specificity deep UV Raman analysis. Recently selected as a payload on the Mars 2020 mission, SHERLOC incorporates these optical methods for potential biosignature detection on Mars. We present data both from Earth analogs, which serve as our only known examples of biosignatures, and from meteorite samples, which provide an example of abiotic organic formation, and demonstrate how provenance affects the spatial distribution and composition of organics.

  3. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  4. Climate, not Aboriginal landscape burning, controlled the historical demography and distribution of fire-sensitive conifer populations across Australia

    PubMed Central

    Sakaguchi, Shota; Bowman, David M. J. S.; Prior, Lynda D.; Crisp, Michael D.; Linde, Celeste C.; Tsumura, Yoshihiko; Isagi, Yuji

    2013-01-01

    Climate and fire are the key environmental factors that shape the distribution and demography of plant populations in Australia. Because of limited palaeoecological records in this arid continent, however, it is unclear which factor impacted vegetation more strongly, and what the roles were of fire regime changes owing to human activity and megafaunal extinction (since ca 50 kya). To address these questions, we analysed historical genetic, demographic and distributional changes in a widespread conifer species complex that paradoxically grows in fire-prone regions, yet is very sensitive to fire. Genetic demographic analysis showed that the arid populations experienced strong bottlenecks, consistent with range contractions during the Last Glacial Maximum (ca 20 kya) predicted by species distribution models. In southern temperate regions, population sizes were estimated to have been mostly stable, followed by some expansion coinciding with climate amelioration at the end of the last glacial period. By contrast, in the flammable tropical savannahs, where fire risk is the highest, demographic analysis failed to detect significant population bottlenecks. Collectively, these results suggest that the impact of climate change overwhelmed any modifications to fire regimes by Aboriginal landscape burning and megafaunal extinction, a finding that probably also applies to other fire-prone vegetation across Australia. PMID:24174110

  5. Multiple-foil microabrasion package (A0023)

    NASA Technical Reports Server (NTRS)

    Mcdonnell, J. A. M.; Ashworth, D. G.; Carey, W. C.; Flavill, R. P.; Jennison, R. C.

    1984-01-01

    The specific scientific objectives of this experiment are to measure the spatial distribution, size, velocity, radiance, and composition of microparticles in near-Earth space. The technological objectives are to measure erosion rates resulting from microparticle impacts and to evaluate thin-foil meteor 'bumpers'. The combinations of sensitivity and reliability in this experiment will provide up to 1000 impacts per month for laboratory analysis and will extend current sensitivity limits by 5 orders of magnitude in mass.

  6. Thermal analysis of microlens formation on a sensitized gelatin layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko

    2009-07-01

    We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.

  7. Variation in sensitivity, absorption and density of the central rod distribution with eccentricity.

    PubMed

    Tornow, R P; Stilling, R

    1998-01-01

    To assess the human rod photopigment distribution and sensitivity with high spatial resolution within the central +/-15 degrees, and to compare the results of pigment absorption, sensitivity and rod density distribution (number of rods per square degree). Rod photopigment density distribution was measured with imaging densitometry using a modified Rodenstock scanning laser ophthalmoscope. Dark-adapted sensitivity profiles were measured with green stimuli (17' arc diameter, 1 degree spacing) using a Tübingen manual perimeter. Sensitivity profiles were plotted on a linear scale and rod photopigment optical density distribution profiles were converted to absorption profiles of the rod photopigment layer. Both the absorption profile of the rod photopigment and the linear sensitivity profile for green stimuli show a minimum at the foveal center and increase steeply with eccentricity. The variation with eccentricity corresponds to the rod density distribution. Rod photopigment absorption profiles, retinal sensitivity profiles, and the rod density distribution are linearly related within the central +/-15 degrees. This is in agreement with theoretical considerations. Both methods, imaging retinal densitometry using a scanning laser ophthalmoscope and dark-adapted perimetry with small green stimuli, are useful for assessing the central rod distribution and sensitivity. However, at present, both methods have limitations. Suggestions for improving the reliability of both methods are given.

  8. A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.

    PubMed

    Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer

    2016-09-10

    When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  9. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity of blind searches of Fermi Large Area Telescope (LAT) data to the detection of young to middle-aged, isolated gamma-ray pulsars using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  10. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.

  11. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
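    The uncertainty propagation underlying such sensitivity tools is the standard "sandwich rule", var(k_eff) = S^T C S for a sensitivity vector S and a relative covariance matrix C; the three-group numbers below are made up for illustration.

    ```python
    # Sandwich-rule propagation of nuclear-data covariance through sensitivities.
    import numpy as np

    S = np.array([0.12, 0.30, 0.05])          # dk/k per relative data change, by group
    C = np.array([[4.0e-4, 1.0e-4, 0.0],
                  [1.0e-4, 9.0e-4, 2.0e-4],
                  [0.0,    2.0e-4, 2.5e-4]])  # relative covariance of the data

    var_k = S @ C @ S
    print(f"k_eff uncertainty: {np.sqrt(var_k) * 1e5:.0f} pcm")
    ```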

  12. Environmental determinants of the spatial distribution of Angiostrongylus vasorum, Crenosoma vulpis and Eucoleus aerophilus in Hungary.

    PubMed

    Tolnai, Z; Széll, Z; Sréter, T

    2015-01-30

    Angiostrongylus vasorum, Crenosoma vulpis and Eucoleus aerophilus (syn. Capillaria aerophila) are the most important lungworm species infecting wild and domesticated canids in Europe. To investigate the spatial distribution of these parasites and the factors influencing their circulation in fox populations, 937 red foxes (Vulpes vulpes) were tested for lungworm infection in Hungary. The prevalence of A. vasorum, C. vulpis and E. aerophilus infection was high (17.9, 24.6 and 61.7%, respectively). The distribution pattern of infection in foxes and the relationship of this pattern with landscape and climate were analyzed in a geographic information system. Based on the analysis, annual precipitation was the major determinant of the spatial distribution of A. vasorum, C. vulpis and E. aerophilus. Nevertheless, mean annual temperature also influenced the distribution of A. vasorum and E. aerophilus. The positive relationship with annual precipitation and the negative relationship with mean annual temperature can be attributed to the sensitivity of the larvae, eggs and intermediate hosts (snails and slugs) of lungworms to desiccation. Based on the highly clumped distribution of A. vasorum and C. vulpis, the indirect life cycle (larvae, slugs and snails) of these parasites seems to be particularly sensitive to environmental effects. The distribution of E. aerophilus was considerably less clumped, indicating a lower sensitivity of the direct life cycle (eggs) of this parasite to environmental factors. Based on these results, lungworm infections in canids, including dogs, can be expected mainly in relatively wet and cool areas. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Plexin D1 determines body fat distribution by regulating the type V collagen microenvironment in visceral adipose tissue.

    PubMed

    Minchin, James E N; Dahlman, Ingrid; Harvey, Christopher J; Mejhert, Niklas; Singh, Manvendra K; Epstein, Jonathan A; Arner, Peter; Torres-Vázquez, Jesús; Rawls, John F

    2015-04-07

    Genome-wide association studies have implicated PLEXIN D1 (PLXND1) in body fat distribution and type 2 diabetes. However, a role for PLXND1 in regional adiposity and insulin resistance is unknown. Here we use in vivo imaging and genetic analysis in zebrafish to show that Plxnd1 regulates body fat distribution and insulin sensitivity. Plxnd1 deficiency in zebrafish induced hyperplastic morphology in visceral adipose tissue (VAT) and reduced lipid storage. In contrast, subcutaneous adipose tissue (SAT) growth and morphology were unaffected, resulting in altered body fat distribution and a reduced VAT:SAT ratio in zebrafish. A VAT-specific role for Plxnd1 appeared conserved in humans, as PLXND1 mRNA was positively associated with hypertrophic morphology in VAT, but not SAT. In zebrafish plxnd1 mutants, the effect on VAT morphology and body fat distribution was dependent on induction of the extracellular matrix protein collagen type V alpha 1 (col5a1). Furthermore, after high-fat feeding, zebrafish plxnd1 mutant VAT was resistant to expansion, and excess lipid was disproportionately deposited in SAT, further exacerbating the altered body fat distribution. Plxnd1-deficient zebrafish were protected from high-fat-diet-induced insulin resistance, and human VAT PLXND1 mRNA was positively associated with type 2 diabetes, suggesting a conserved role for PLXND1 in insulin sensitivity. Together, our findings identify Plxnd1 as a novel regulator of VAT growth, body fat distribution, and insulin sensitivity in both zebrafish and humans.

  14. Frequency mismatch in stimulated scattering processes: An important factor for the transverse distribution of scattered light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Tao; Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang, Sichuan 621900; Zheng, Jian, E-mail: jzheng@ustc.edu.cn

    2016-06-15

    A 2D cylindrically symmetric model including both diffraction and self-focusing effects is developed to treat the stimulated scattering processes of a single hot spot. The calculated results show that the transverse distribution of the scattered light is sensitive to the longitudinal profiles of the plasma parameters. Analysis of the evolution of the scattered light indicates that it is the frequency mismatch of the coupling, arising from the inhomogeneity of the plasma, that determines the transverse distribution of the scattered light.

  15. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
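
    To make the idea concrete, here is a minimal sketch of the kind of pessimistic perturbation analysis such a framework formalizes: shift probability mass toward each strategy's worst outcome and watch whether the expected-value-maximizing strategy stays stable. The strategies, payoffs and probabilities below are hypothetical, and the paper's actual machinery is more general.

```python
import numpy as np

# Two candidate strategies, each a lottery over two outcomes with uncertain
# probabilities (all numbers invented for illustration).
payoffs = {"A": np.array([100.0, -20.0]),
           "B": np.array([60.0, 10.0])}
p_nominal = {"A": np.array([0.6, 0.4]),
             "B": np.array([0.7, 0.3])}

def pessimistic(p, payoff, eps):
    # Shift probability mass eps toward the worst outcome, then renormalize.
    q = p.copy()
    q[np.argmin(payoff)] += eps
    return q / q.sum()

for eps in (0.0, 0.1, 0.2, 0.3):
    evs = {s: float(pessimistic(p_nominal[s], payoffs[s], eps) @ payoffs[s])
           for s in payoffs}
    best = max(evs, key=evs.get)
    print(f"eps={eps:.1f}  EVs={evs}  best={best}")
# The optimum switches from A to B once the perturbation is large enough,
# i.e. A is the expected-value maximizer but B is the more robust choice.
```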

  16. The Relationship of Mean Platelet Volume/Platelet Distribution Width and Duodenal Ulcer Perforation.

    PubMed

    Fan, Zhe; Zhuang, Chengjun

    2017-03-01

    Duodenal ulcer perforation (DUP) is a severe acute abdominal disease. Mean platelet volume (MPV) and platelet distribution width (PDW) are two platelet parameters that participate in many inflammatory processes. This study aims to investigate the relationship between MPV/PDW and DUP. A total of 165 patients were studied retrospectively, including 21 females and 144 males, in two groups: 87 controls (control group) and 78 patients with duodenal ulcer perforation (DUP group). Routine blood parameters were collected for analysis, including white blood cell count (WBC), neutrophil ratio (NR), platelet count (PLT), MPV and PDW. Receiver operating characteristic (ROC) curve analysis was applied to evaluate the parameters' sensitivity. No significant differences were observed between the control and DUP groups in age or gender. WBC, NR and PDW were significantly increased, and PLT and MPV significantly decreased, in the DUP group compared to controls (P < 0.001 for each). MPV showed the highest sensitivity. Our results suggest a potential association between MPV/PDW and disease activity in DUP patients, with MPV being highly sensitive. © 2017 by the Association of Clinical Scientists, Inc.

  17. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. Uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of blades on the rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.

  19. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  20. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random-effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also supports substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
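
    For reference, the classical bivariate random-effects model that this Bayesian approach generalizes can be written as follows; the notation is standard (Reitsma-style), not taken from the paper. Study i reports y successes out of n subjects for each of sensitivity (se) and specificity (sp), and the study-level logits share a bivariate normal structural distribution:

```latex
\begin{align*}
  y^{se}_i &\sim \mathrm{Binomial}\!\left(n^{se}_i,\ \operatorname{logit}^{-1}\theta_i\right), &
  y^{sp}_i &\sim \mathrm{Binomial}\!\left(n^{sp}_i,\ \operatorname{logit}^{-1}\phi_i\right), \\
  (\theta_i, \phi_i)^{\top} &\sim \mathcal{N}\!\left((\mu_\theta, \mu_\phi)^{\top},\ \Sigma\right), &
  \Sigma &= \begin{pmatrix}
    \sigma_\theta^2 & \rho\,\sigma_\theta \sigma_\phi \\
    \rho\,\sigma_\theta \sigma_\phi & \sigma_\phi^2
  \end{pmatrix}.
\end{align*}
```

    The Bayesian extension places priors on the means and covariance, lets the structural distribution depend on further sources of variability, and summarizes the meta-analysis through the predictive posterior of (sensitivity, specificity) for a new study.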

  1. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  2. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
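
    As a toy illustration of a loading margin and its sensitivity, consider the classical two-bus "nose curve" model; this stylized example is mine, not the thesis's 1500-bus networks, and it only mimics the margin-sensitivity idea on the smallest possible system.

```python
import numpy as np

# Toy 2-bus system: load P is fed through reactance X from a source of
# voltage E (load power factor taken as unity, Q = 0). The load voltage
# satisfies V^4 - E^2 V^2 + X^2 P^2 = 0, which has a real solution only
# while E^4 - 4 X^2 P^2 >= 0, giving the loadability limit below.
def loading_margin(E, X):
    return E**2 / (2.0 * X)   # nose of the PV curve, P_max

E, X = 1.0, 0.5
M = loading_margin(E, X)

# Sensitivity of the margin to source voltage via a central difference;
# the analytic value for this toy model is dP_max/dE = E / X.
h = 1e-6
dM_dE = (loading_margin(E + h, X) - loading_margin(E - h, X)) / (2 * h)
print(f"P_max = {M:.3f} pu, dP_max/dE = {dM_dE:.3f} (analytic {E / X:.3f})")
```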

  3. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…

  4. Direct magnetic field estimation based on echo planar raw data.

    PubMed

    Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim

    2010-07-01

    Gradient-recalled echo echo-planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Commonly used correction methods share the deficit that the data for the correction are acquired only once, at the beginning of the experiment, assuming that the field inhomogeneity distribution B0 does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and image-phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B0 maps in phantoms and in vivo. The k-space filtering analysis presented in this work proved to be the most sensitive method for detecting field inhomogeneities.

  5. Analysis of rainfall distribution in Kelantan river basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Ros, Faizah; Tosaka, Hiroyuki

    2018-03-01

    Using rain gauge data alone as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at unmeasured points, which serves as input to the rainfall-runoff process in distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river; thus, a good knowledge of the rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using the inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as an average rainfall distribution. A sensitivity analysis of the distance and elevation parameters was conducted to assess the variation produced. The accuracy of the interpolated datasets was examined using cross-validation.
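
    A minimal sketch of the IDW step is given below; the IDEW variant additionally weights by elevation difference, and the sensitivity analysis above varies the weighting exponents. Station coordinates and rainfall values here are made up for illustration.

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-10):
    """Inverse-distance-weighted interpolation of station values to grid points."""
    # Pairwise distances between grid points and observation stations.
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # weights fall off with distance
    w /= w.sum(axis=1, keepdims=True)     # normalize so weights sum to 1
    return w @ values

# Hypothetical daily rainfall (mm) at four stations on a 10 km square.
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([12.0, 5.0, 20.0, 8.0])
grid = np.array([[5.0, 5.0], [1.0, 1.0]])
print(idw(xy_obs, rain, grid))   # interpolated rainfall at the two grid points
```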

  6. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.

  8. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.

  9. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  10. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology and location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
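
    A schematic of the resampling scheme described: empirical residual distributions for the model steps are sampled and pushed through the model chain. The sketch below uses synthetic residuals, two steps instead of four, and a multiplicative chain, so all numbers are placeholders for the measured residual data the study characterizes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for empirical residuals (relative model-minus-measurement errors)
# of two model steps; the study builds these from measured data.
resid_poa = rng.normal(0.0, 0.03, size=2000)   # POA irradiance model
resid_eff = rng.normal(0.0, 0.02, size=2000)   # effective irradiance model

nominal_daily_energy = 5.0   # kWh, hypothetical single-module day

# Propagate by resampling residuals through the (multiplicative) model chain.
n = 50_000
draws = (nominal_daily_energy
         * (1.0 + rng.choice(resid_poa, n))
         * (1.0 + rng.choice(resid_eff, n)))

lo, hi = np.percentile(draws, [2.5, 97.5])
half_width = 100 * (hi - lo) / 2 / nominal_daily_energy
print(f"95% interval: [{lo:.3f}, {hi:.3f}] kWh (~{half_width:.1f}% half-width)")
```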

  11. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  12. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
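
    For context, here is a minimal Monte Carlo estimator of first-order Sobol' indices (a Saltelli-style pick-and-freeze scheme) on the standard Ishigami test function. This brute-force estimator illustrates the indices that metamodel approaches such as GMDH-HDMR are designed to compute far more cheaply; it is not the paper's algorithm.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (N, d))
yA, yB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" input i from the B sample
    Si = np.mean(yB * (ishigami(ABi) - yA)) / var   # first-order index S_i
    print(f"S_{i+1} ≈ {Si:.3f}")
# Analytical values for a=7, b=0.1 are roughly S1=0.31, S2=0.44, S3=0.
```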

  13. Fully automated screening of immunocytochemically stained specimens for early cancer detection

    NASA Astrophysics Data System (ADS)

    Bell, André A.; Schneider, Timna E.; Müller-Frank, Dirk A. C.; Meyer-Ebrecht, Dietrich; Böcking, Alfred; Aach, Til

    2007-03-01

    Cytopathological cancer diagnoses can be obtained less invasively than histopathological investigations. Specimens containing cells can be obtained without pain or discomfort, bloody biopsies are avoided, and the diagnosis can, in some cases, even be made earlier. Since no tissue biopsies are necessary, these methods can also be used in screening applications, e.g., for cervical cancer. Among the cytopathological methods, a diagnosis based on the analysis of the amount of DNA in individual cells achieves high sensitivity and specificity. Yet this analysis is time consuming, which is prohibitive for a screening application. Hence, it is advantageous to retain, by a preceding selection step, only a subset of suspicious specimens. This can be achieved using highly sensitive immunocytochemical markers such as p16INK4a for preselection of suspicious cells and specimens. We present a method to fully automatically acquire images at distinct positions on cytological specimens using a conventional computer-controlled microscope and an autofocus algorithm. In the images thus obtained, we automatically detect p16INK4a-positive objects. This detection is based on an analysis of the color distribution of the p16INK4a marker in the Lab colorspace. A Gaussian mixture model is used to describe this distribution, and the method described in this paper so far achieves a sensitivity of up to 90%.

  14. Single atom catalysts on amorphous supports: A quenched disorder perspective

    NASA Astrophysics Data System (ADS)

    Peters, Baron; Scott, Susannah L.

    2015-03-01

    Phenomenological models that invoke catalyst sites with different adsorption constants and rate constants are well-established, but computational and experimental methods are just beginning to provide atomically resolved details about amorphous surfaces and their active sites. This letter develops a statistical transformation from the quenched disorder distribution of site structures to the distribution of activation energies for sites on amorphous supports. We show that the overall kinetics are highly sensitive to the precise nature of the low energy tail in the activation energy distribution. Our analysis motivates further development of systematic methods to identify and understand the most reactive members of the active site distribution.
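
    A small numerical illustration of why the low-energy tail dominates: Arrhenius-average the rate over an activation-energy distribution and measure how much of the total activity the lowest-barrier sites carry. The distribution, temperature and unit prefactor below are assumptions chosen for illustration, not the letter's derived disorder statistics.

```python
import numpy as np

kB_T = 0.05   # eV, roughly 580 K (illustrative choice)
rng = np.random.default_rng(1)

# Hypothetical quenched-disorder distribution of activation energies (eV):
# a normal distribution truncated at zero stands in for the site ensemble.
Ea = np.clip(rng.normal(loc=1.0, scale=0.15, size=1_000_000), 0.0, None)

rates = np.exp(-Ea / kB_T)   # Arrhenius rate per site (prefactor set to 1)

# Share of the total rate contributed by the 1% of sites with the lowest
# barriers: with this spread it is essentially all of the activity.
order = np.argsort(Ea)
frac_low_tail = rates[order][: len(Ea) // 100].sum() / rates.sum()
print(f"ensemble-average rate: {rates.mean():.3e}")
print(f"activity from the 1% lowest-barrier sites: {frac_low_tail:.1%}")
```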

  15. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the same input as the FEA and to directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model predicts the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. To our knowledge, this is the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).

  16. ANSYS-based birefringence property analysis of side-hole fiber induced by pressure and temperature

    NASA Astrophysics Data System (ADS)

    Zhou, Xinbang; Gong, Zhenfeng

    2018-03-01

    In this paper, we theoretically investigate the influence of pressure and temperature on the birefringence properties of side-hole fibers with different hole shapes, using the finite element analysis method. The physical mechanism of birefringence in the side-hole fiber is discussed in the presence of different external pressures and temperatures. The strain field distributions and birefringence values of circular-core, rectangular-core, and triangular-core side-hole fibers are presented. Our analysis shows that the triangular-core side-hole fiber has low temperature sensitivity, which weakens the cross-sensitivity between temperature and strain. Additionally, an optimized structure design of the side-hole fiber is presented that can be used in sensing applications.

  17. Exhaled Aerosol Pattern Discloses Lung Structural Abnormality: A Sensitivity Study Using Computational Modeling and Fractal Analysis

    PubMed Central

    Xi, Jinxiang; Si, Xiuhua A.; Kim, JongWon; Mckee, Edward; Lin, En-Bing

    2014-01-01

    Background: Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. Objective and Methods: In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three malign conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of 1 µm tracer aerosols at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distribution, and fractal analysis were applied to distinguish the various exhaled aerosol patterns. Findings: Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions between normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma. Conclusion: Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for the diagnosis and prognosis of respiratory diseases with structural abnormalities. PMID:25105680
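
    One of the morphometric measures named above, the box-counting estimate of fractal dimension, can be sketched in a few lines. The random test image below merely stands in for a real exhaled-aerosol fingerprint, and this is a generic estimator rather than the paper's full multifractal pipeline.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    # Slope of log(count) vs log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy "fingerprint": random deposition pattern on a 256x256 exit plane.
rng = np.random.default_rng(0)
img = rng.random((256, 256)) < 0.05
print(f"estimated box-counting dimension: {box_counting_dimension(img):.2f}")
```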

  18. Micro-heterogeneity versus clustering in binary mixtures of ethanol with water or alkanes.

    PubMed

    Požar, Martina; Lovrinčević, Bernarda; Zoranić, Larisa; Primorać, Tomislav; Sokolić, Franjo; Perera, Aurélien

    2016-08-24

    Ethanol is a hydrogen-bonding liquid. When mixed in small concentrations with water or alkanes, it forms aggregate structures reminiscent of, respectively, the direct and inverse micellar aggregates found in emulsions, albeit at much smaller sizes. At higher concentrations, micro-heterogeneous mixing with segregated domains is found. We examine how efficiently different statistical methods, namely correlation function analysis, structure factor analysis and cluster distribution analysis, can describe these morphological changes. In particular, we explain how the pre-peak of the structure factor of the neat alcohol evolves into the domain pre-peak under mixing, and how this evolution differs depending on whether the co-solvent is water or an alkane. This study clearly establishes the heuristic superiority of the correlation function/structure factor analysis for studying micro-heterogeneity, since cluster distribution analysis is insensitive to domain segregation. Correlation functions detect the domains, with a clear structure factor pre-peak signature, while the cluster techniques detect the cluster hierarchy within domains. The main conclusion is that, in micro-segregated mixtures, the domain structure is a more fundamental statistical entity than the underlying cluster structures. These findings could help to better understand, comparatively, radiation scattering experiments, which are sensitive to domains, versus spectroscopy and NMR experiments, which are sensitive to clusters.

  19. Computational Analysis of the Combustion Processes in an Axisymmetric, RBCC Flowpath

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.; Yungster, Shaye

    2001-01-01

    Computational fluid dynamics simulations have been used to study the combustion processes within an axisymmetric RBCC flowpath. Two distinct operating modes have been analyzed to date: the independent ramjet stream (IRS) cycle and the supersonic combustion ramjet (scramjet) cycle. The IRS cycle investigation examined the influence of fuel-air ratio, fuel distribution, and rocket chamber pressure upon the combustion physics and thermal choke characteristics. Results indicate that adjusting the amount and radial distribution of fuel can control the thermal choke point. The secondary mass flow rate was very sensitive to the fuel-air ratio and the rocket chamber pressure. The scramjet investigation examined the influence of fuel-air ratio and fuel injection schedule upon combustion performance estimates. An analysis of the mesh dependence of these calculations is presented. Jet penetration data were extracted from the three-dimensional simulations and compared favorably with experimental correlations of similar flows. Results indicate that combustion efficiency was very sensitive to the fuel schedule.

  20. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
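
    For concreteness, below is a bare-bones Gillespie SSA for the Schlögl model, the first of the three benchmarks. The bistable parameter set is one commonly used in the literature on this model and is assumed here rather than taken from the paper; parallel replica would wrap many such trajectories in parallel to accelerate the rare inter-well transitions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Schlögl model: A + 2X <-> 3X and B <-> X, with species A and B held constant.
A, B = 1e5, 2e5
k1, k2, k3, k4 = 3e-7, 1e-4, 1e-3, 3.5
stoich = np.array([+1, -1, +1, -1])   # net change in X for each reaction

def propensities(x):
    return np.array([
        k1 * A * x * (x - 1) / 2.0,          # A + 2X -> 3X
        k2 * x * (x - 1) * (x - 2) / 6.0,    # 3X -> A + 2X
        k3 * B,                              # B -> X
        k4 * x,                              # X -> B
    ])

def ssa(x0=250, t_end=10.0):
    """Gillespie stochastic simulation; returns the final copy number of X."""
    x, t = x0, 0.0
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)        # waiting time to next reaction
        x += stoich[rng.choice(4, p=a / a0)]  # which reaction fires
    return x

# Independent runs mostly end near one of the two metastable states; the rare
# transitions between them are what parallel replica is designed to accelerate.
print([ssa() for _ in range(5)])
```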

  1. Measuring precarious employment in times of crisis: the revised Employment Precariousness Scale (EPRES) in Spain.

    PubMed

    Vives, Alejandra; González, Francisca; Moncada, Salvador; Llorens, Clara; Benach, Joan

    2015-01-01

    This study examines the psychometric properties of the revised Employment Precariousness Scale (EPRES-2010) in a context of economic crisis and growing unemployment. Data correspond to salaried workers with a contract (n=4,750) from the second Psychosocial Work Environment Survey (Spain, 2010). Analyses included acceptability, scale score distributions, Cronbach's alpha coefficient and exploratory factor analysis. Response rates were 80% or above; scores were widely distributed, with reductions in floor effects for temporariness among permanent workers and for vulnerability. Cronbach's alpha coefficients were 0.70 or above, and exploratory factor analysis confirmed the theoretical allocation of 21 out of 22 items. The revised version of the EPRES demonstrated good metric properties and improved sensitivity to worker vulnerability and employment instability among permanent workers. Furthermore, it was sensitive to increased levels of precariousness in some dimensions despite decreases in others, demonstrating responsiveness to the context of the economic crisis affecting the Spanish labour market. Copyright © 2015 SESPAS. Published by Elsevier España. All rights reserved.

  2. Probabilistic analysis of a materially nonlinear structure

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.

    1990-01-01

    A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
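
    As a crude Monte Carlo analogue of the CDF computation, the sketch below samples a random internal pressure and evaluates the closed-form elastic Lamé solution for the radial stress; the paper's problem is elastoplastic (with a Weibull yield stress that matters only once plasticity is modeled) and uses NESSUS's AMV procedure rather than brute-force sampling, so every number here is purely illustrative.

```python
import numpy as np

# Elastic Lamé solution for radial stress in a thick-walled cylinder with
# inner radius a, outer radius b, under internal pressure p.
def radial_stress(p, r, a=0.1, b=0.2):
    return p * a**2 / (b**2 - a**2) * (1.0 - b**2 / r**2)

rng = np.random.default_rng(3)
n = 100_000
p = rng.normal(loc=100e6, scale=10e6, size=n)   # internal pressure [Pa]

sigma_r = radial_stress(p, r=0.15)              # stress at mid-wall radius
sigma_sorted = np.sort(sigma_r)                 # empirical CDF support
cdf = np.arange(1, n + 1) / n

# Example readout: the 5th percentile (most compressive tail) of sigma_r.
print(f"5th percentile of sigma_r: {sigma_sorted[int(0.05 * n)] / 1e6:.1f} MPa")
```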

  3. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
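
    To make the PSA concept concrete, a minimal sketch: sample parameter distributions, form the distribution of the incremental cost-effectiveness ratio (ICER), and trace acceptability across willingness-to-pay thresholds. The distribution families mirror common PSA practice (gamma for costs, beta for probabilities/utilities), but all values are invented and no correlations are modeled, which is exactly the gap the audit flags.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical incremental cost and incremental QALYs for a new treatment.
d_cost = rng.gamma(shape=16.0, scale=250.0, size=n)   # GBP, mean ~4000
d_qaly = rng.beta(2.0, 8.0, size=n) * 0.5             # QALYs, mean ~0.1

icer = d_cost / d_qaly
print(f"median ICER: {np.median(icer):,.0f} GBP/QALY")

# Cost-effectiveness acceptability: P(net monetary benefit > 0) by threshold.
for wtp in (10_000, 20_000, 30_000, 50_000):
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)
    print(f"P(cost-effective at {wtp:,} GBP/QALY): {p_ce:.2f}")
```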

  4. Coupled semivariogram uncertainty of hydrogeological and geophysical data on capture zone uncertainty analysis

    USGS Publications Warehouse

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.

    2008-01-01

    This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
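
    A toy version of the structural-parameter update described above: a beta prior on a rescaled semivariogram range combined with a Gaussian likelihood on a grid. The prior shape, likelihood width and "observed" range are illustrative stand-ins; the study performs this update for range, sill and nugget with likelihoods built from the data.

```python
import numpy as np
from scipy import stats

# Grid over the range parameter, rescaled to (0, 1) so a beta prior applies.
grid = np.linspace(0.0, 1.0, 501)
dx = grid[1] - grid[0]

prior = stats.beta.pdf(grid, a=2.0, b=5.0)        # beta prior on the range
like = stats.norm.pdf(0.45, loc=grid, scale=0.1)  # Gaussian likelihood of an
                                                  # "observed" range of 0.45
post = prior * like
post /= post.sum() * dx                           # normalize the posterior

print(f"prior mean:     {(grid * prior).sum() * dx:.3f}")
print(f"posterior mean: {(grid * post).sum() * dx:.3f}")
```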

  5. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To this end, it is important to identify the model parameters that can change spatial patterns before undertaking satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data in calibrating the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through calibration of pedo-transfer function (PTF) parameters, and it includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function that employs remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow improves only the streamflow simulations and does not reduce the spatial errors in AET. We will further examine model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and a further case combining spatial and streamflow metrics.

  6. Neutron Physics Division progress report for period ending February 28, 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, F.C.

    1977-05-01

    Summaries are given of research progress in the following areas: (1) measurements of cross sections and related quantities, (2) cross section evaluations and theory, (3) cross section processing, testing, and sensitivity analysis, (4) integral experiments and their analyses, (5) development of methods for shield and reactor analyses, (6) analyses for specific systems or applications, and (7) information analysis and distribution. (SDF)

  7. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) Thermal Soak Test

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip

    2017-01-01

    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.

  8. Quadrant photodetector sensitivity.

    PubMed

    Manojlović, Lazo M

    2011-07-10

    A quantitative theoretical analysis of the quadrant photodetector (QPD) sensitivity in position measurement is presented. A Gaussian light-spot irradiance distribution on the QPD surface was assumed, since it matches most real-life applications of this sensor. As a result of the mathematical treatment of the problem, we obtained the sensitivity function in closed form versus the ratio of the light-spot 1/e radius to the QPD radius. The result is valid for the full range of this ratio. To check the influence of the finite light-spot radius on interaxis cross talk and linearity, we also performed a mathematical analysis to quantify these types of errors. An optimal range of the ratio of light-spot radius to QPD radius was found that simultaneously achieves low interaxis cross talk and high linearity of the sensor. © 2011 Optical Society of America
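
    A minimal numeric sketch of the setup the abstract describes (not the paper's closed-form result): integrate a Gaussian irradiance over the two detector halves and differentiate the normalized difference signal at zero offset, giving the x-axis sensitivity as a function of the spot-to-detector radius ratio.

```python
import numpy as np
from scipy import integrate

def half_power(x0, w, R, sign):
    # Power on the half-disk sign*x >= 0 for a Gaussian spot centered at
    # (x0, 0) with 1/e radius w, on a circular detector of radius R.
    f = lambda y, x: np.exp(-((x - x0) ** 2 + y ** 2) / w ** 2)
    lo, hi = (0.0, R) if sign > 0 else (-R, 0.0)
    return integrate.dblquad(f, lo, hi,
                             lambda x: -np.sqrt(R ** 2 - x ** 2),
                             lambda x: np.sqrt(R ** 2 - x ** 2))[0]

def sensitivity(w, R=1.0, dx=1e-4):
    # Central difference of the normalized left/right difference signal at x0 = 0.
    def signal(x0):
        p_r, p_l = half_power(x0, w, R, +1), half_power(x0, w, R, -1)
        return (p_r - p_l) / (p_r + p_l)
    return (signal(dx) - signal(-dx)) / (2 * dx)

for ratio in (0.2, 0.5, 1.0, 2.0):
    print(f"w/R = {ratio:.1f}: sensitivity ~ {sensitivity(ratio):.3f} per unit offset")
```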

  9. The Review of Nuclear Microscopy Techniques: An Approach for Nondestructive Trace Elemental Analysis and Mapping of Biological Materials.

    PubMed

    Mulware, Stephen Juma

    2015-01-01

    The properties of many biological materials often depend on the spatial distribution and concentration of the trace elements present in a matrix. Scientists have over the years tried various techniques, including classical physical and chemical analysis methods, each with a different level of accuracy. However, with the development of spatially sensitive submicron beams, nuclear microprobe techniques using focused proton beams for the elemental analysis of biological materials have yielded significant success. In this paper, the basic principles of the commonly used microprobe techniques of STIM, RBS, and PIXE for trace elemental analysis are discussed, along with the details of sample preparation, detection, and data collection and analysis. Finally, an application of the techniques to the analysis of corn roots for elemental distribution and concentration is presented.

  10. Studies on the synthesis, spectroscopic analysis, molecular docking and DFT calculations on 1-hydroxy-2-(4-hydroxyphenyl)-4,5-dimethyl-imidazol 3-oxide

    NASA Astrophysics Data System (ADS)

    Benzon, K. B.; Sheena, Mary Y.; Panicker, C. Yohannan; Armaković, Stevan; Armaković, Sanja J.; Pradhan, Kiran; Nanda, Ashis Kumar; Van Alsenoy, C.

    2017-02-01

    In this work we have investigated in detail the spectroscopic and reactive properties of a newly synthesized imidazole derivative, 1-hydroxy-2-(4-hydroxyphenyl)-4,5-dimethyl-imidazole 3-oxide (HHPDI). FT-IR and NMR spectra were measured and compared with theoretically obtained data provided by calculations of the potential energy distribution and chemical shifts, respectively. Insight into the global reactivity properties has been obtained by analysis of frontier molecular orbitals, while local reactivity properties have been investigated by analysis of charge distribution, ionization energies and Fukui functions. NBO analysis was also employed to understand the stability of the molecule, while the hyperpolarizability has been calculated in order to assess the nonlinear optical properties of the title molecule. Sensitivity towards autoxidation and hydrolysis mechanisms has been investigated by calculations of bond dissociation energies and radial distribution functions, respectively. A molecular docking study was also performed in order to determine the pharmaceutical potential of the investigated molecule.

  11. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.
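
    A hypothetical Monte Carlo sketch of the ferry-flight timing idea: each leg can only depart in acceptable weather, and weather persistence is modeled as a two-state Markov chain (clear/inclement) sampled daily. The transition probabilities and leg count below are illustrative, not the report's values.

```python
import numpy as np

rng = np.random.default_rng(0)
P = {"clear": {"clear": 0.8, "inclement": 0.2},     # assumed persistence
     "inclement": {"clear": 0.4, "inclement": 0.6}}

def trip_days(n_legs=4, n_trials=20_000):
    days = np.empty(n_trials)
    for t in range(n_trials):
        state, total = "clear", 0
        for _ in range(n_legs):
            # Wait (one day at a time) until weather permits the next leg.
            while state == "inclement":
                total += 1
                state = rng.choice(["clear", "inclement"],
                                   p=list(P[state].values()))
            total += 1  # fly the leg
            state = rng.choice(["clear", "inclement"],
                               p=list(P[state].values()))
        days[t] = total
    return days

d = trip_days()
print(f"mean {d.mean():.2f} days, 95th percentile {np.percentile(d, 95):.0f} days")
```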

  12. Proposed linear energy transfer areal detector for protons using radiochromic film.

    PubMed

    Mayer, Rulon; Lin, Liyong; Fager, Marcus; Douglas, Dan; McDonough, James; Carabe, Alejandro

    2015-04-01

    Radiation therapy depends on predictably and reliably delivering dose to tumors and sparing normal tissues. Protons with kinetic energy of a few hundred MeV can selectively deposit dose to deep-seated tumors without an exit dose, unlike x-rays. The better dose distribution is attributed to a phenomenon known as the Bragg peak, which is due to relatively high energy deposition within a given distance, or high Linear Energy Transfer (LET). In addition, biological response to radiation depends on the dose, dose rate, and localized energy deposition patterns, or LET. At present, the LET can only be measured at a given fixed point, and the LET spatial distribution can only be inferred from calculations. The goal of this study is to develop and test a method to measure LET over extended areas. Traditionally, radiochromic films are used to measure dose distributions but not LET distributions. We report the first use of these films for measuring the spatial distribution of the LET deposited by protons. Radiochromic film sensitivity diminishes at large LET; a mathematical model correlating film sensitivity and LET is presented to justify relating LET and radiochromic film relative sensitivity. Protons were directed parallel to radiochromic film sandwiched between solid water slabs. This study proposes the scaled-normalized difference (SND) between the treatment planning system (TPS) and measured dose as the metric describing the LET. The SND is correlated with a Monte Carlo (MC) calculation of the LET spatial distribution for a large range of SNDs. A polynomial fit between the SND and MC LET is generated for protons having a single range of 20 cm with a narrow Bragg peak. Coefficients from these polynomial fits were applied to measured proton dose distributions with a variety of ranges. An identical procedure was applied to protons delivered as a Spread Out Bragg Peak modulated by 5 cm. Gamma analysis is a method for comparing the calculated LET with the LET measured using radiochromic film at the pixel level over extended areas. Failure rates using gamma analysis are calculated for areas in the dose distribution using parameters of 25% of MC LET and 3 mm. The processed dose distributions show 5%-10% failure rates for the narrow 12.5 and 15 cm proton ranges and 10%-15% for proton ranges of 15, 17.5, and 20 cm modulated by 5 cm. Gamma analysis shows that the measured proton energy deposition in radiochromic film and the TPS dose can be used to determine LET. This modified film dosimetry provides an experimental areal LET measurement that can verify MC calculations, support LET point measurements, possibly enhance biologically based proton treatment planning, and determine the polymerization process within the radiochromic film.
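
    A schematic of the calibration step described above, using made-up numbers: fit a low-order polynomial mapping the scaled-normalized difference (SND) between TPS and measured dose to Monte Carlo LET, then apply it to new measurements.

```python
import numpy as np

snd_cal = np.array([0.02, 0.05, 0.10, 0.18, 0.30])   # stand-in calibration SNDs
let_cal = np.array([1.0, 2.1, 4.0, 7.5, 12.0])       # stand-in MC LET (keV/um)

coeffs = np.polyfit(snd_cal, let_cal, deg=2)         # low-order polynomial fit
snd_new = np.array([0.04, 0.22])                     # SNDs from a new measurement
print("estimated LET:", np.polyval(coeffs, snd_new))
```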

  13. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring the probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations in the electric conductivity of the human body. Because there are variations in the conductivity distributions inside the body, EIT presents multi-channel data, and in order to obtain all the information contained in different locations of tissue it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is decomposed into statistically independent components, and the individual conductivity distribution is then reconstructed via the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
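
    An illustrative sketch, not the paper's reconstruction pipeline: unmix multi-channel EIT-like measurements into statistically independent components with scikit-learn's FastICA, assuming a linear mixing model. The two synthetic sources and four channels are invented for the demonstration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
# Two hypothetical independent conductivity variations (e.g. a fast
# cardiac-like component and a slow respiratory-like component).
sources = np.c_[np.sign(np.sin(2 * np.pi * 1.2 * t)),   # fast, non-Gaussian
                np.sin(2 * np.pi * 0.25 * t)]            # slow oscillation
mixing = rng.normal(size=(2, 4))                          # linear mixing model
measurements = sources @ mixing + 0.05 * rng.normal(size=(len(t), 4))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(measurements)  # columns ~ sources, up to scale/order
print("recovered shape:", recovered.shape)
```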

  14. Vibrational analysis and quantum chemical calculations of 2,2′-bipyridine Zinc(II) halide complexes

    NASA Astrophysics Data System (ADS)

    Ozel, Aysen E.; Kecel, Serda; Akyuz, Sevim

    2007-05-01

    In this study the molecular structure and vibrational spectra of Zn(2,2′-bipyridine)X2 (X = Cl, Br) complexes were studied in their ground states by computational vibrational analysis and scaled quantum mechanical (SQM) treatment. The geometry optimization, vibrational wavenumber and intensity calculations of free and coordinated 2,2′-bipyridine were carried out with the Gaussian03 program package, using Hartree-Fock (HF) and Density Functional Theory (DFT) with the B3LYP functional and the 6-31G(d,p) basis set. The total energy distributions (TED) of the vibrational modes were calculated by SQM analysis, and the fundamentals were characterised by their total energy distributions. Coordination-sensitive modes of 2,2′-bipyridine were determined.

  15. SHERLOC on Mars 2020

    NASA Astrophysics Data System (ADS)

    Beegle, L. W.; Bhartia, R.; DeFlores, L. P.; Abbey, W.; Asher, S. A.; Burton, A. S.; Fries, M.; Conrad, P. G.; Clegg, S. M.; Wiens, R. C.; Edgett, K. S.; Ehlmann, B. L.; Nealson, K. H.; Minitti, M. E.; Popp, J.; Langenhorst, F.; Sobron, P.; Steele, A.; Williford, K. H.; Yingst, R. A.

    2017-12-01

    The Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals (SHERLOC) investigation is part of the Mars 2020 integrated payload. SHERLOC enables non-contact, spatially resolved, and highly sensitive detection and characterization of organics and minerals in the Martian surface and near subsurface. SHERLOC is an arm-mounted, Deep UV (DUV) resonance Raman and fluorescence spectrometer utilizing a 248.6-nm DUV laser. Deep UV induced native fluorescence is very sensitive to condensed carbon and aromatic organics, enabling detection at or below 10⁻⁶ w/w (1 ppm) at <100 µm spatial scales. SHERLOC's deep UV resonance Raman enables detection and classification of aromatic and aliphatic organics with sensitivities of 10⁻² to below 10⁻⁴ w/w. In addition to organics, the deep UV Raman enables detection and classification of minerals relevant to aqueous chemistry with grain sizes below 20 µm. SHERLOC will be able to map the distribution of organic material with respect to visible features and minerals that are identifiable with the Raman spectrometer. These maps will enable analysis of the distribution of organics with minerals.

  16. Nonuniform distribution of phase noise in distributed acoustic sensing based on phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Lu, Yang; Meng, Zhou

    2017-10-01

    Phase-sensitive optical time-domain reflectometry (Φ-OTDR) implements distributed acoustic sensing (DAS) owing to its ability to measure vibration with high sensitivity. Phase information of acoustic vibration events can be acquired by interrogating the vibration-induced phase change between coherent Rayleigh scattering light from two points of the sensing fiber, and DAS can be realized by applying the phase-generated-carrier (PGC) algorithm to the whole sensing fiber, which transforms the fiber into a series of virtual sensing channels. The minimum detectable vibration of a Φ-OTDR is limited by the phase noise level. In this paper, the nonuniform distribution of phase noise across the virtual sensing channels of a Φ-OTDR is investigated theoretically and experimentally. The correspondence between the intensity of the Rayleigh scattering light and interference fading as well as polarization fading is analyzed, considering the mutual interference of coherent Rayleigh light scattered from a multitude of scatterers within the pulse duration; intensity noise related to the intensity of the Rayleigh scattering light can be converted to phase noise when measuring the vibration-induced phase change. Experiments were performed and the results confirm the predictions of the theoretical analysis. This study is essential for gaining insight into the nonuniformity of phase noise in DAS based on a Φ-OTDR, and suggests feasible methods to eliminate the effects of interference fading and polarization fading and to optimize the minimum detectable vibration of a Φ-OTDR.

  17. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
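
    As a reminder of the isotope physics such GCM tracers encode, here is a hedged Rayleigh-distillation sketch: as a fraction f of vapor remains, the vapor's isotope ratio follows R = R0 · f^(α−1). The fractionation factor and initial vapor composition below are illustrative round numbers, not the GCM's values.

```python
import numpy as np

alpha = 1.08                          # approx. HDO/H2O liquid-vapor fractionation
R_smow = 155.76e-6                    # D/H ratio of Standard Mean Ocean Water
f = np.array([1.0, 0.8, 0.5, 0.2])    # fraction of vapor remaining

R0 = R_smow * (1 - 0.08)              # assume initial vapor at delta-D = -80 permil
delta_D = (R0 * f ** (alpha - 1.0) / R_smow - 1.0) * 1e3
for fi, dd in zip(f, delta_D):
    print(f"f = {fi:.1f}: delta-D = {dd:7.1f} permil")
```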

  18. COBRA ATD minefield detection model initial performance analysis

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.

  19. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
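
    A minimal sketch of the sensitivity-uncertainty "sandwich rule" that tools of this kind build on: the nuclear-data-induced uncertainty in k-eff is σ² = SᵀCS, with S the sensitivity profile and C the relative covariance matrix of the data. The numbers below are made up for illustration.

```python
import numpy as np

S = np.array([0.12, -0.05, 0.30])      # sensitivities (dk/k per dσ/σ), by group
C = np.array([[0.010, 0.002, 0.000],   # relative covariance of the nuclear data
              [0.002, 0.020, 0.001],
              [0.000, 0.001, 0.005]])

var_k = S @ C @ S                       # sandwich rule: sigma^2 = S^T C S
print(f"relative k-eff uncertainty: {np.sqrt(var_k):.4f}")
```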

  1. Assessing Impact of Distributions and Dependencies in Analysis of Alternatives of System of Systems: Phase 3

    DTIC Science & Technology

    2013-12-19

    [No abstract available: the retrieved excerpt contains only table-of-contents fragments, e.g. an approach for evaluating system-of-systems operational benefits and the sensitivity of per-segment flight delay under IMC.]

  2. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    PubMed

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
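
    FLOCK itself is a purpose-built, grid-based population-identification tool; as a hedged stand-in, this sketch clusters synthetic 2-D "mast cell gate" events with scikit-learn's DBSCAN and reports the fraction of events in the largest dense cluster, echoing the event-clustering criterion described above. All event counts and gate coordinates are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
background = rng.uniform(0, 10, size=(5000, 2))       # diffuse gated events
cluster = rng.normal([7, 2], 0.15, size=(120, 2))     # discrete mast cell population
events = np.vstack([background, cluster])

labels = DBSCAN(eps=0.12, min_samples=15).fit_predict(events)
dense = set(labels) - {-1}                            # -1 marks noise points
best = max(dense, key=lambda k: (labels == k).sum()) if dense else None
frac = (labels == best).mean() if best is not None else 0.0
print(f"largest dense cluster: {frac:.2%} of events")
```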

  3. Sources, distribution and export coefficient of phosphorus in lowland polders of Lake Taihu Basin, China.

    PubMed

    Huang, Jiacong; Gao, Junfeng; Jiang, Yong; Yin, Hongbin; Amiri, Bahman Jabbarian

    2017-12-01

    Identifying phosphorus (P) sources, distribution and export from lowland polders is important for P pollution management; however, it is challenging due to the high complexity of hydrological and P transport processes in lowland areas. In this study, the spatial pattern and temporal dynamics of the P export coefficient (PEC) from all 2539 polders in Lake Taihu Basin, China were estimated using a coupled P model describing P dynamics in a polder system. The estimated amount of P export from polders in Lake Taihu Basin during 2013 was 1916.2 t/yr, with a spatially averaged PEC of 1.8 kg/ha/yr. PEC had peak values (more than 4.0 kg/ha/yr) in the polders near/within the large cities, and was high during the rice-cropping season. Sensitivity analysis based on the coupled P model revealed that the sensitive factors controlling the PEC varied spatially and changed through time. Precipitation and air temperature were the most sensitive factors controlling PEC, while culvert control and fertilization were sensitive factors during some periods. This study demonstrated an estimation of PEC from 2539 polders in Lake Taihu Basin, and an identification of sensitive environmental factors affecting PEC. The investigation of polder P export at a watershed scale helps water managers understand the distribution of P sources, identify key P sources, and thus achieve best management practice in controlling P export from lowland areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.

    PubMed

    Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J

    2016-01-01

    We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
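
    A worked sketch of the Gaussian model-based idea: if pixel attenuations in a nodule are approximately N(μ, σ²), the fraction of negative-HU pixels is the normal CDF at 0, Φ(−μ/σ). The example values are illustrative, not the study's decision thresholds.

```python
from scipy.stats import norm

def negative_pixel_fraction(mean_hu, sd_hu):
    # P(X < 0) for X ~ N(mean_hu, sd_hu^2).
    return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

# Example nodule: mean 18 HU, SD 25 HU -- above the classic 10 HU cutoff,
# yet a substantial fraction of negative pixels under the Gaussian model.
frac = negative_pixel_fraction(18.0, 25.0)
print(f"estimated negative-attenuation pixels: {frac:.1%}")
```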

  5. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Abstract Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197

  6. Two-layer convective heating prediction procedures and sensitivities for blunt body reentry vehicles

    NASA Technical Reports Server (NTRS)

    Bouslog, Stanley A.; An, Michael Y.; Wang, K. C.; Tam, Luen T.; Caram, Jose M.

    1993-01-01

    This paper provides a description of procedures typically used to predict convective heating rates to hypersonic reentry vehicles using the two-layer method. These procedures were used to compute the pitch-plane heating distributions to the Apollo geometry for a wind tunnel test case and for three flight cases. Both simple engineering methods and coupled inviscid/boundary layer solutions were used to predict the heating rates. The sensitivity of the heating results to the choice of metrics, pressure distributions, boundary layer edge conditions, and wall catalycity used in the heating analysis was evaluated. Streamline metrics, pressure distributions, and boundary layer edge properties were defined from perfect gas (wind tunnel case) and chemical equilibrium and nonequilibrium (flight cases) inviscid flow-field solutions. The results of this study indicated that the use of CFD-derived metrics and pressures provided better predictions of heating when compared to wind tunnel test data. The study also showed that modeling entropy layer swallowing and ionization had little effect on the heating predictions.

  7. Total Scattering and Pair Distribution Function Analysis in Modelling Disorder in PZN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitfield, Ross E.; Goossens, Darren J; Welberry, T. R.

    2016-01-01

    The ability of the pair distribution function (PDF) analysis of total scattering (TS) from a powder to determine the local ordering in ferroelectric PZN (PbZn1/3Nb2/3O3) has been explored by comparison with a model established using single-crystal diffuse scattering (SCDS). While X-ray PDF analysis is discussed, the focus is on neutron diffraction results because of the greater extent of the data and the sensitivity of the neutron to oxygen atoms, the behaviour of which is important in PZN. The PDF was shown to be sensitive to many effects not apparent in the average crystal structure, including variations in the B-site-O separation distances and the fact that (110) Pb2+ displacements are most likely. A qualitative comparison between SCDS and the PDF shows that some features apparent in SCDS were not apparent in the PDF. These tended to pertain to short-range correlations in the structure, rather than to interatomic separations. For example, in SCDS the short-range alternation of the B-site cations was quite apparent in diffuse scattering at (½ ½ ½), whereas it was not apparent in the PDF.

  8. Analysis of painted arts by energy sensitive radiographic techniques with the Pixel Detector Timepix

    NASA Astrophysics Data System (ADS)

    Zemlicka, J.; Jakubek, J.; Kroupa, M.; Hradil, D.; Hradilova, J.; Mislerova, H.

    2011-01-01

    Non-invasive techniques utilizing X-ray radiation offer a significant advantage in scientific investigations of cultural artefacts such as paintings and statues. In addition, there is great demand for a mobile analytical and real-time imaging device, given that many fine artworks cannot be transported. The highly sensitive hybrid semiconductor pixel detector Timepix is capable of detecting and resolving subtle, low-contrast differences in the inner composition of a wide variety of objects, and it is able to map the surface distribution of the contained elements. Several transmission and emission techniques that have been proposed and tested for the analysis of painted artworks are presented. This study focuses on the novel techniques of X-ray transmission radiography (conventional and energy sensitive) and X-ray induced fluorescence imaging (XRF), which can be realised at the table-top scale with the state-of-the-art pixel detector Timepix. Transmission radiography analyses the changes in X-ray beam intensity caused by the specific attenuation of different components in the sample. The conventional approach uses all energies from the source spectrum to create the image, while the energy sensitive alternative creates images in given energy intervals, which enables identification and separation of materials. The XRF setup is based on the detection of characteristic radiation induced by X-ray photons through a pinhole collimator. The XRF method is extremely sensitive to the material composition, but it creates only surface maps of the elemental distribution. For the purpose of the analysis, several sets of painted layers were prepared in a restoration laboratory; the composition of these layers corresponds to that of real historical paintings from the 19th century. An overview of the current status of our methods is given with respect to the instrumentation and the application in the field of cultural heritage.

  9. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
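
    A small demonstration (not the study's code) of why the solution technique matters: first-order upwind finite differences smear a sharp tracer front with numerical dispersion even when no physical dispersion is specified.

```python
import numpy as np

nx, c = 200, 0.5                 # grid points, Courant number
u = np.zeros(nx)
u[:20] = 1.0                     # sharp tracer front
for _ in range(150):             # pure advection with upwind differencing
    u[1:] = u[1:] - c * (u[1:] - u[:-1])

# The front should still be sharp; instead it has been smeared over many
# cells by numerical dispersion inherent to the scheme.
spread = int(np.sum((u > 0.05) & (u < 0.95)))
print(f"front smeared over ~{spread} cells")
```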

  10. The structure of paranoia in the general population.

    PubMed

    Bebbington, Paul E; McBride, Orla; Steel, Craig; Kuipers, Elizabeth; Radovanovic, Mirjana; Brugha, Traolach; Jenkins, Rachel; Meltzer, Howard I; Freeman, Daniel

    2013-06-01

    Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.

  11. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
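
    A back-of-envelope sketch using the broken power law quoted in the abstract: integrate S · dN/dS above the analysis sensitivity to get the cumulative blazar flux. The normalization K is a placeholder, not the paper's value, so only the shape of the calculation is meaningful.

```python
import numpy as np
from scipy.integrate import quad

Sb, a1, a2 = 3.5e-11, 2.09, 1.07   # break flux and indexes from the abstract
K = 1.0                             # placeholder normalization at the break

def dNdS(S):
    # Broken power law: index a2 below the break, a1 above it.
    return K * (S / Sb) ** (-a1 if S >= Sb else -a2)

flux, _ = quad(lambda S: S * dNdS(S), 7.5e-12, 1e-8)
print(f"integrated flux (arbitrary normalization): {flux:.3e}")
```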

  12. Multiple active myofascial trigger points and pressure pain sensitivity maps in the temporalis muscle are related in women with chronic tension type headache.

    PubMed

    Fernández-de-las-Peñas, César; Caminero, Ana B; Madeleine, Pascal; Guillem-Mesado, Amparo; Ge, Hong-You; Arendt-Nielsen, Lars; Pareja, Juan A

    2009-01-01

    To describe the common locations of active trigger points (TrPs) in the temporalis muscle and their referred pain patterns in chronic tension type headache (CTTH), and to determine if pressure sensitivity maps of this muscle can be used to describe the spatial distribution of active TrPs. Forty women with CTTH were included. An electronic pressure algometer was used to assess pressure pain thresholds (PPT) at 9 points over each temporalis muscle: 3 points each in the anterior, medial and posterior parts. Both muscles were examined for the presence of active TrPs at each of the 9 points, and the referred pain pattern of each active TrP was assessed. Two-way analysis of variance detected significant differences in mean PPT levels between the measurement points (F=30.3; P<0.001), but not between sides (F=2.1; P=0.2). PPT scores decreased from the posterior to the anterior column (P<0.001). No differences were found in the number of active TrPs (F=0.3; P=0.9) between the dominant and nondominant sides. Significant differences were found in the distribution of the active TrPs (χ²=12.2; P<0.001): active TrPs were mostly found in the anterior column and in the middle of the muscle belly. The analysis of variance did not detect significant differences in the referred pain pattern between active TrPs (F=1.1, P=0.4). The topographical pressure pain sensitivity maps showed the distinct distribution of the TrPs, indicated by locations with low PPTs. Multiple active TrPs in the temporalis muscle were found, particularly in the anterior column and in the middle of the muscle belly. A bilateral posterior-to-anterior decrease of PPTs in the temporalis muscle was found in women with CTTH. The locations of active TrPs in the temporalis muscle corresponded well to the muscle areas with lower PPTs, supporting the relationship between multiple active muscle TrPs and topographical pressure pain sensitivity maps in the temporalis muscle in women with CTTH.

  13. Design and simulation analysis of a novel pressure sensor based on graphene film

    NASA Astrophysics Data System (ADS)

    Nie, M.; Xia, Y. H.; Guo, A. Q.

    2018-02-01

    A novel pressure sensor structure based on a graphene film as the sensitive membrane is proposed in this paper, which addresses the problem of measuring low and minor pressures with high sensitivity. Moreover, the fabrication process was designed to be compatible with CMOS IC fabrication technology. Finite element analysis has been used to simulate the displacement distribution of the thin movable graphene film of the designed pressure sensor under different pressures and with different dimensions. From the simulation results, an optimized structure has been obtained which can be applied in the low measurement range from 10 hPa to 60 hPa. The length and thickness of the graphene film could be designed as 100 μm and 0.2 μm, respectively. The maximum mechanical stress on the edge of the sensitive membrane was 1.84 kPa, which is far below the breaking strength of the silicon nitride and graphene film.
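
    A back-of-envelope plate-bending check, not the paper's FEA: for a clamped square plate under uniform pressure, small-deflection theory gives w_max ≈ 0.00126 · q · a⁴ / D with D = E t³ / (12(1 − ν²)) (Timoshenko's coefficient). The graphene modulus and Poisson ratio are assumed values; the dimensions come from the abstract.

```python
E, nu = 1.0e12, 0.16          # assumed graphene film modulus (Pa), Poisson ratio
a, t = 100e-6, 0.2e-6         # side length and thickness from the abstract (m)
q = 1_000.0                   # 10 hPa expressed in Pa

D = E * t**3 / (12 * (1 - nu**2))   # flexural rigidity
w_max = 0.00126 * q * a**4 / D      # clamped square plate, uniform load
print(f"center deflection ~ {w_max * 1e6:.2f} um")
```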

  14. [Effects of topography on the diversity and distribution pattern of ground plants in karst montane forests in Southwest Guangxi, China].

    PubMed

    Yuan, Tie-Xiang; Zhang, He-Ping; Ou, Zhi-Yang; Tan, Yi-Bo

    2014-10-01

    Covariance analysis, curve-fitting, and canonical correspondence analysis (CCA) were used to explore the effects of topographic factors on the plant diversity and distribution patterns of ground flora with different growth forms in the karst mountains of Southwest Guangxi, China. A total of 152 ground plants were recorded. Among them, 37 species were ferns, 44 species herbs, 9 species lianas, and 62 species shrubs. Covariance analysis revealed that altitude significantly correlated with the individual number and richness of ground plants, and slope aspect had a significant effect on richness. Statistical analyses showed a highly significant nonlinear correlation between the individual number or richness of ground plants and altitude. Results of CCA revealed that slope aspect had a significant effect on the distribution pattern of ferns, and slope had a significant effect on the distribution patterns of herbs, lianas and shrubs. Ferns were more sensitive than herbs, lianas and shrubs to changes in heat and soil water caused by aspect. The effect of slope was stronger than that of elevation on soil water and nutrients, and it was the most important topographic factor that affected the distribution patterns of herbs, lianas and shrubs in this region.

  15. Valuing vaccines using value of statistical life measures.

    PubMed

    Laxminarayan, Ramanan; Jamison, Dean T; Krupnick, Alan J; Norheim, Ole F

    2014-09-03

    Vaccines are effective tools to improve human health, but resources to pursue all vaccine-related investments are lacking. Benefit-cost and cost-effectiveness analysis are the two major methodological approaches used to assess the impact, efficiency, and distributional consequences of disease interventions, including those related to vaccinations. Childhood vaccinations can have important non-health consequences for productivity and economic well-being through multiple channels, including school attendance, physical growth, and cognitive ability. Benefit-cost analysis would capture such non-health benefits; cost-effectiveness analysis does not. Standard cost-effectiveness analysis may grossly underestimate the benefits of vaccines. A specific willingness-to-pay measure is based on the notion of the value of a statistical life (VSL), derived from trade-offs people are willing to make between fatality risk and wealth. Such methods have been used widely in the environmental and health literature to capture the broader economic benefits of improving health, but reservations remain about their acceptability. These reservations remain mainly because the methods may reflect ability to pay, and hence be discriminatory against the poor. However, willingness-to-pay methods can be made sensitive to income distribution by using appropriate income-sensitive distributional weights. Here, we describe the pros and cons of these methods and how they compare against standard cost-effectiveness analysis using pure health metrics, such as quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs), in the context of vaccine priorities. We conclude that if appropriately used, willingness-to-pay methods will not discriminate against the poor, and they can capture important non-health benefits such as financial risk protection, productivity gains, and economic wellbeing. Copyright © 2014 Elsevier Ltd. All rights reserved.
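
    A minimal numeric illustration of the income-sensitive distributional weights mentioned above; Atkinson-style weights w_i = (mean income / income_i)^e are one common choice. The incomes and the inequality-aversion parameter e are made up.

```python
incomes = [500, 2_000, 10_000]           # illustrative per-capita incomes
e = 1.0                                   # assumed inequality-aversion parameter
mean_y = sum(incomes) / len(incomes)
weights = [(mean_y / y) ** e for y in incomes]
print([round(w, 2) for w in weights])     # poorer groups receive larger weights
```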

  16. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447
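
    A hedged sketch of two of the random-effects summaries the abstract lists: given a random-effects mean and between-study SD (made-up numbers on a log-odds-ratio scale), compare the confidence interval of the mean with the wider predictive interval for a new setting.

```python
import numpy as np

mu, se_mu, tau = -0.25, 0.08, 0.15   # RE mean, its SE, between-study SD (illustrative)

ci = mu + np.array([-1, 1]) * 1.96 * se_mu
pred = mu + np.array([-1, 1]) * 1.96 * np.sqrt(se_mu**2 + tau**2)
print("95% CI for the RE mean:       ", np.round(ci, 3))
print("95% predictive interval (new):", np.round(pred, 3))
```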

  17. In vivo observation of age-related structural changes of dermal collagen in human facial skin using collagen-sensitive second harmonic generation microscope equipped with 1250-nm mode-locked Cr:Forsterite laser

    NASA Astrophysics Data System (ADS)

    Yasui, Takeshi; Yonetsu, Makoto; Tanaka, Ryosuke; Tanaka, Yuji; Fukushima, Shu-ichiro; Yamashita, Toyonobu; Ogura, Yuki; Hirao, Tetsuji; Murota, Hiroyuki; Araki, Tsutomu

    2013-03-01

    In vivo visualization of human skin aging is demonstrated using a Cr:Forsterite (Cr:F) laser-based, collagen-sensitive second harmonic generation (SHG) microscope. The deep penetration into human skin, as well as the specific sensitivity to collagen molecules, achieved by this microscope enables us to clearly visualize age-related structural changes of collagen fiber in the reticular dermis. Here we investigated intrinsic aging and/or photoaging in the male facial skin. Young subjects show dense distributions of thin collagen fibers, whereas elderly subjects show coarse distributions of thick collagen fibers. Furthermore, a comparison of SHG images between young and elderly subjects with and without a recent life history of excessive sun exposure show that a combination of photoaging with intrinsic aging significantly accelerates skin aging. We also perform image analysis based on two-dimensional Fourier transformation of the SHG images and extracted an aging parameter for human skin. The in vivo collagen-sensitive SHG microscope will be a powerful tool in fields such as cosmeceutical sciences and anti-aging dermatology.

  18. Diagnostic accuracy of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for the detection of antibodies against Neospora caninum in milk from dairy cows.

    PubMed

    Chatziprodromidou, I P; Apostolou, T

    2018-04-01

    The aim of the study was to estimate the sensitivity and specificity of an enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies to Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with STRADAS-paratuberculosis guidelines for reporting the accuracy of the tests. We first tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that assumes conditional independence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions for the IB diagnostic accuracy were applied, showed a limited effect on the posterior estimates. We concluded that ELISA can be used to screen the bulk milk and, secondly, that IB can be used whenever needed.
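
    A sketch of one step the abstract describes: turning expert inputs into an informative Beta prior. Here we solve for Beta(a, b) whose mode is 0.95 and whose 5th percentile is 0.85; both elicitation values are illustrative, not the study's.

```python
from scipy.optimize import brentq
from scipy.stats import beta

mode, p05 = 0.95, 0.85

def percentile_gap(a):
    # Enforce the mode constraint (a - 1) / (a + b - 2) = mode, then measure
    # how far the 5th percentile is from the elicited value.
    b = (a - 1) / mode - (a - 2)
    return beta.ppf(0.05, a, b) - p05

a = brentq(percentile_gap, 1.5, 500)    # bracket assumed to contain the root
b = (a - 1) / mode - (a - 2)
print(f"informative prior: Beta(a={a:.2f}, b={b:.2f})")
```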

  19. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program, which performs statistical fast-fracture reliability analysis with quadratic elements for volume-distributed imperfections, is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface-flaw-induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for polyaxial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem demonstrating various features of the program is included.
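
    A worked sketch of the two-parameter Weibull fast-fracture model underlying codes of this kind: P_f = 1 − exp(−V · (σ/σ₀)^m). The modulus, scale parameter, and stressed volume below are illustrative, with consistent units assumed.

```python
import numpy as np

m, sigma_0 = 10.0, 300.0      # Weibull modulus and scale parameter (MPa basis)
V = 2.0                        # effective stressed volume (consistent units)

for sigma in (150.0, 200.0, 250.0):
    pf = 1.0 - np.exp(-V * (sigma / sigma_0) ** m)
    print(f"sigma = {sigma:5.1f} MPa -> P_f = {pf:.3%}")
```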

  1. Risk finance for catastrophe losses with Pareto-calibrated Lévy-stable severities.

    PubMed

    Powers, Michael R; Powers, Thomas Y; Gao, Siwei

    2012-11-01

    For catastrophe losses, the conventional risk finance paradigm of enterprise risk management identifies transfer, as opposed to pooling or avoidance, as the preferred solution. However, this analysis does not necessarily account for differences between light- and heavy-tailed characteristics of loss portfolios. Of particular concern are the decreasing benefits of diversification (through pooling) as the tails of severity distributions become heavier. In the present article, we study a loss portfolio characterized by nonstochastic frequency and a class of Lévy-stable severity distributions calibrated to match the parameters of the Pareto II distribution. We then propose a conservative risk finance paradigm that can be used to prepare the firm for worst-case scenarios with regard to both (1) the firm's intrinsic sensitivity to risk and (2) the heaviness of the severity's tail. © 2012 Society for Risk Analysis.
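
    An exploratory sketch of the tail comparison at the heart of the article's calibration idea (parameters illustrative, not the authors' calibration): draw from a Pareto II (Lomax) severity and from a maximally right-skewed alpha-stable law with matching tail index, and compare upper quantiles.

```python
import numpy as np
from scipy.stats import lomax, levy_stable

alpha = 1.5                      # common tail index (alpha < 2 for a stable law)
rng = np.random.default_rng(42)
pareto_draws = lomax.rvs(alpha, size=50_000, random_state=rng)
stable_draws = levy_stable.rvs(alpha, 1.0, size=50_000, random_state=rng)

for q in (0.99, 0.999):
    print(f"q = {q}: Pareto II {np.quantile(pareto_draws, q):10.1f}   "
          f"alpha-stable {np.quantile(stable_draws, q):10.1f}")
```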

  2. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Martin, Curtis E.

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals, and we propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy, and we found that uncertainty in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
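
    A schematic Monte Carlo propagation in the spirit described above: sample each model's residual distribution empirically and push the samples through a simplified model chain. The residual arrays, efficiencies, and irradiance value are stand-ins, not the report's measured quantities.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in empirical residuals (fractional errors) for three of the models;
# in the analysis these come from comparing each model against measurements.
poa_resid = rng.normal(0.0, 0.03, 1000)
eff_resid = rng.normal(0.0, 0.02, 1000)
inv_resid = rng.normal(0.0, 0.005, 1000)

def daily_ac_energy(poa_wh_m2=6000.0, n=10_000):
    poa = poa_wh_m2 * (1 + rng.choice(poa_resid, n))   # POA irradiance model
    eff = poa * 0.95 * (1 + rng.choice(eff_resid, n))  # effective irradiance model
    dc = eff * 0.18                                    # nominal array efficiency
    ac = dc * 0.97 * (1 + rng.choice(inv_resid, n))    # inverter model
    return ac

e = daily_ac_energy()
print(f"daily AC energy: {e.mean():.0f} +/- {e.std():.0f} Wh/m^2 (schematic)")
```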

  3. Strain distribution in thin concrete pavement panels under three-point loading to failure with pre-pulse-pump Brillouin optical time domain analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Cain, John; Chen, Yizheng; Huang, Ying; Chen, Genda; Palek, Leonard

    2015-04-01

    Thin concrete panels reinforced with alloy polymer macro-synthetic fibers have recently been introduced to rapidly and cost-effectively improve the driving condition of existing roadways by laying down a fabric sheet on the roadway, casting a thin layer of concrete, and then cutting the layer into panels. This study is aimed at understanding the strain distribution and potential crack development of concrete panels under three-point loading. To this end, six full-size 6 ft × 6 ft × 3 in concrete panels were tested to failure in the laboratory. They were instrumented with three types of single-mode optical fiber sensors whose performance and ability to measure the strain distribution and detect cracks were compared. Each optical fiber sensor was spliced and calibrated, and then attached to a fabric sheet using adhesive. A thin layer of mortar (0.25 ~ 0.5 in thick) was cast on the fabric sheet. The three types of distributed sensors were bare SMF-28e+ fiber, SMF-28e+ fiber with a tight buffer, and concrete crack cable, respectively. The concrete crack cable consisted of one SMF-28e+ optical fiber with a tight buffer, one SMF-28e+ optical fiber with a loose buffer for temperature compensation, and an outside protective tight sheath. Distributed strains were collected from the three optical fiber sensors with pre-pulse-pump Brillouin optical time domain analysis at room temperature. Among the three sensors, the bare fiber was observed to be the most fragile during construction and operation, but the most sensitive to strain changes or micro-cracks. The concrete crack cable was the most rugged, but not as sensitive to micro-cracks or as robust in micro-crack measurement as the bare fiber. The ruggedness and sensitivity of the fiber with a tight buffer were in between those of the bare fiber and the concrete crack cable. The strain distributions obtained from the three optical fiber sensors are in good agreement, and can be applied to successfully locate cracks in the concrete panels. It was observed that the three types of fibers remained functional until the concrete panels had experienced inelastic deformation, making the distributed strain sensing technology promising for real applications in pavement engineering.

  4. Azimuth-invariant mueller-matrix differentiation of the optical anisotropy of biological tissues

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu. F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2014-07-01

    A Mueller-matrix model is proposed for analysis of the optical anisotropy of protein networks of optically thin nondepolarizing layers of biological tissues with allowance for birefringence and dichroism. The model is used to construct algorithms for reconstruction of coordinate distributions of phase shifts and coefficient of linear dichroism. Objective criteria for differentiation of benign and malignant tissues of female genitals are formulated in the framework of the statistical analysis of such distributions. Approaches of evidence-based medicine are used to determine the working characteristics (sensitivity, specificity, and accuracy) of the Mueller-matrix method for the reconstruction of the parameters of optical anisotropy and show its efficiency in the differentiation of benign and malignant tumors.

  5. Formulation design space for stable, pH sensitive crystalline nifedipine nanoparticles.

    PubMed

    Jog, Rajan; Unachukwu, Kenechi; Burgess, Diane J

    2016-11-30

    Enteric coated formulations protect drugs from degradation in the harsh environment of the stomach (acidic pH and enzymes) and promote drug delivery to, and absorption in, the duodenum and/or later parts of the intestine. Four DoE models were applied to optimize formulation parameters for the preparation of pH sensitive nifedipine nanoparticles. Stability studies were performed on the optimized formulations to monitor any possible variation in particle size distribution, homogeneity index, surface charge and drug release (pH 1.2 and pH 6.8). Stability studies were performed for 3 months at 4 °C, 25 °C and 40 °C. A combination of Eudragit® L 100-55 and polyvinyl alcohol was determined to be the most effective in stabilizing the nanoparticle suspension. The average particle size, polydispersity index and surface charge of the optimized pH sensitive nifedipine nanoparticles were determined to be 131.86 ± 8.21 nm, 0.135 ± 0.008 and -7.631 ± 0.146 mV, respectively. Following three months of storage, it was observed that the formulations stored at 4 °C were stable in terms of particle size distribution, polydispersity index, surface charge, drug loading and drug release, whereas those stored at 25 °C and 40 °C were relatively unstable. A predictive model to prepare stable pH sensitive nifedipine nanoparticles was successfully developed using multiple linear regression analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales, the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also includes two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those of alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
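
    For readers unfamiliar with one of the three metric families named above, the sketch below implements a bare-bones Morris-style elementary-effects screen; the test function and parameter bounds are arbitrary, and VARS-TOOL's own trajectory sampling and estimators are considerably more sophisticated.

```python
import numpy as np

def elementary_effects(model, lo, hi, r=20, delta=0.1, seed=0):
    """Crude Morris screening: r random base points, one-at-a-time steps
    of size delta in normalized parameter space; returns mu* per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    k = lo.size
    ee = np.empty((r, k))
    for i in range(r):
        x = lo + rng.uniform(0, 1 - delta, k) * (hi - lo)
        f0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta * (hi[j] - lo[j])   # perturb one parameter
            ee[i, j] = (model(xp) - f0) / delta
    return np.abs(ee).mean(axis=0)             # mu*: mean |elementary effect|

# Hypothetical test model (Ishigami-like):
f = lambda x: np.sin(x[0]) + 7 * np.sin(x[1])**2 + 0.1 * x[2]**4 * np.sin(x[0])
print(elementary_effects(f, lo=[-np.pi] * 3, hi=[np.pi] * 3))
```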

  7. Particle size distributions and the vertical distribution of suspended matter in the upwelling region off Oregon

    NASA Technical Reports Server (NTRS)

    Kitchen, J. C.

    1977-01-01

    Various methods of presenting and mathematically describing particle size distributions are explained and evaluated. The hyperbolic distribution is found to be the most practical, but the more complex characteristic vector analysis is the most sensitive to changes in the shape of the particle size distributions. A method for determining onshore-offshore flow patterns from the distribution of particulates was presented. A numerical model of the vertical structure of two size classes of particles was developed. The results show a close similarity to the observed distributions but overestimate the particle concentration by forty percent. This was attributed to ignoring grazing by zooplankton. Sensitivity analyses showed the size preference was most responsive to the maximum specific growth rates and nutrient half-saturation constants. The vertical structure was highly dependent on the eddy diffusivity, followed closely by the growth terms.
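
    The hyperbolic (power-law) form referred to above can be fitted by ordinary least squares in log-log space; the following sketch uses synthetic counts, so the diameters, noise level, and slope are purely illustrative.

```python
import numpy as np

def fit_hyperbolic(diameters, counts):
    """Fit the hyperbolic size distribution N(d) = k * d**(-m)
    by least squares on log N versus log d."""
    x, y = np.log(diameters), np.log(counts)
    slope, logk = np.polyfit(x, y, 1)
    return np.exp(logk), -slope   # (k, hyperbolic slope m)

d = np.array([1, 2, 4, 8, 16, 32.0])   # particle diameter, um (illustrative)
n = 1e6 * d**-3.2 * (1 + 0.05 * np.random.default_rng(1).normal(size=d.size))
k, m = fit_hyperbolic(d, n)
print(f"k = {k:.3g}, hyperbolic slope m = {m:.2f}")
```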

  8. USE OF WHOLE BODY CHEMICAL RESIDUE ANALYSIS AND LASER SCREENING CONFOCAL MICROSCOPY TO DESCRIBE DISTRIBUTION OF PBTS IN FISH EARLY LIFE STAGES

    EPA Science Inventory

    Fish early life stages (ELS) are more sensitive than juveniles or adults to many persistent bioaccumulative toxicants (PBTs). To better understand the mechanisms by which these chemicals produce toxicity during fish ELS, dose-response relationships need to be determined in relat...

  9. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
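
    The volume-flaw reliability calculation underlying such post-processors can be sketched, for the principle-of-independent-action case, roughly as follows; the element stresses, volumes, and Weibull parameters are hypothetical and the units merely illustrative.

```python
import numpy as np

def pia_survival(stresses, volumes, sigma0, m):
    """Two-parameter Weibull survival probability for volume-distributed
    flaws under the principle of independent action (PIA): each element
    contributes exp(-V_i * (sigma_i / sigma0)**m) for tensile sigma_i."""
    s = np.maximum(np.asarray(stresses, float), 0.0)  # only tension matters
    risk = np.sum(np.asarray(volumes, float) * (s / sigma0) ** m)
    return np.exp(-risk)

# Hypothetical per-element output (e.g., from a finite-element solution):
sigma = [120.0, 95.0, 60.0, -40.0]   # principal stress per element, MPa
vol   = [2e-9, 3e-9, 5e-9, 4e-9]     # element volume, m^3
Ps = pia_survival(sigma, vol, sigma0=300.0, m=10.0)
print(f"survival = {Ps:.6f}, failure probability = {1 - Ps:.2e}")
```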

  10. A sub-sampled approach to extremely low-dose STEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A.; Luzi, L.; Yang, H.

    The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
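
    The acquisition side of the approach amounts to keeping a random fraction of pixels and reconstructing the rest. The sketch below uses plain cubic interpolation as a stand-in for the sparse dictionary-learning inpainting used in work of this kind; the test image and sampling fraction are arbitrary.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
image = np.fromfunction(lambda y, x: np.sin(x / 8.0) * np.cos(y / 11.0), (128, 128))

# Acquire only 10% of the pixels at random (the low-dose sub-sampled scan).
mask = rng.random(image.shape) < 0.10
points = np.argwhere(mask)          # (row, col) of acquired pixels
values = image[mask]

# "Inpaint" the missing pixels; real reconstructions use sparse
# dictionary-learning methods, cubic interpolation merely stands in here.
grid_y, grid_x = np.mgrid[0:128, 0:128]
recon = griddata(points, values, (grid_y, grid_x), method="cubic")
print(f"10% dose, reconstruction MSE = {np.nanmean((recon - image) ** 2):.2e}")
```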

  11. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
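
    The Kreisselmeier-Steinhauser approach mentioned above aggregates several objectives (or constraints) into one smooth envelope that closely tracks their maximum, which keeps the composite function differentiable for gradient-based optimizers. A minimal sketch, with hypothetical normalized objective values:

```python
import numpy as np

def ks_aggregate(objectives, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of several (normalized) objectives:
    a smooth, differentiable stand-in for max(f_1, ..., f_n). Larger rho
    tracks the true maximum more closely; shifting by fmax avoids overflow."""
    f = np.asarray(objectives, float)
    fmax = f.max()
    return fmax + np.log(np.sum(np.exp(rho * (f - fmax)))) / rho

# Hypothetical normalized objectives (e.g., drag, boom loudness, weight):
print(ks_aggregate([0.82, 0.75, 0.91]))   # close to, slightly above, 0.91
```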

  12. Single-photon semiconductor photodiodes for distributed optical fiber sensors: state of the art and perspectives

    NASA Astrophysics Data System (ADS)

    Ripamonti, Giancarlo; Lacaita, Andrea L.

    1993-03-01

    The extreme sensitivity and time resolution of Geiger-mode avalanche photodiodes (GM-APDs) have already been exploited for optical time domain reflectometry (OTDR). Better than 1 cm spatial resolution in Rayleigh scattering detection was demonstrated. Distributed and quasi-distributed optical fiber sensors can take advantage of the capabilities of GM-APDs. Extensive studies have recently disclosed the main characteristics and limitations of silicon devices, both commercially available and developmental. In this paper we report an analysis of the performance of these detectors. The main characteristics of GM-APDs of interest for distributed optical fiber sensors are briefly reviewed. Command electronics (active quenching) is then introduced. The detector timing performance sets the maximum spatial resolution in experiments employing OTDR techniques. We highlight that the achievable time resolution depends on the physics of the avalanche spreading over the device area. On the basis of these results, the trade-off between the important parameters (quantum efficiency, time resolution, background noise, and afterpulsing effects) is considered. Finally, we show first results on germanium devices, capable of single-photon sensitivity at 1.3 and 1.5 micrometers with sub-nanosecond time resolution.

  13. Proposed linear energy transfer areal detector for protons using radiochromic film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Rulon; Lin, Liyong; Fager, Marcus

    2015-04-15

    Radiation therapy depends on predictably and reliably delivering dose to tumors and sparing normal tissues. Protons with kinetic energy of a few hundred MeV can selectively deposit dose to deep-seated tumors without an exit dose, unlike x-rays. The better dose distribution is attributed to a phenomenon known as the Bragg peak. The Bragg peak is due to relatively high energy deposition within a given distance, or high Linear Energy Transfer (LET). In addition, biological response to radiation depends on the dose, dose rate, and localized energy deposition patterns or LET. At present, the LET can only be measured at a given fixed point, and the LET spatial distribution can only be inferred from calculations. The goal of this study is to develop and test a method to measure LET over extended areas. Traditionally, radiochromic films are used to measure dose distributions but not LET distributions. We report the first use of these films for measuring the spatial distribution of the LET deposited by protons. The radiochromic film sensitivity diminishes for large LET. A mathematical model correlating the film sensitivity and LET is presented to justify relating LET and radiochromic film relative sensitivity. Protons were directed parallel to radiochromic film sandwiched between solid water slabs. This study proposes the scaled-normalized difference (SND) between the Treatment Planning System (TPS) and measured dose as the metric describing the LET. The SND is correlated with a Monte Carlo (MC) calculation of the LET spatial distribution for a large range of SNDs. A polynomial fit between the SND and MC LET is generated for protons having a single range of 20 cm with a narrow Bragg peak. Coefficients from these fitted polynomials were applied to measured proton dose distributions with a variety of ranges. An identical procedure was applied to protons deposited from a Spread Out Bragg Peak modulated by 5 cm. Gamma analysis is a method for comparing the calculated LET with the LET measured using radiochromic film at the pixel level over extended areas. Failure rates using gamma analysis are calculated for areas in the dose distribution using parameters of 25% of MC LET and 3 mm. The processed dose distributions show 5%–10% failure rates for the narrow 12.5 and 15 cm proton ranges and 10%–15% for proton ranges of 15, 17.5, and 20 cm modulated by 5 cm. It is found through gamma analysis that the measured proton energy deposition in radiochromic film and the TPS can be used to determine LET. This modified film dosimetry provides an experimental areal LET measurement that can verify MC calculations, support LET point measurements, possibly enhance biologically based proton treatment planning, and determine the polymerization process within the radiochromic film.
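
    The calibration step, fitting a polynomial that maps SND to MC LET and applying the coefficients to other measured dose distributions, can be sketched as follows; the calibration pairs and the test map are invented for illustration.

```python
import numpy as np

# Hypothetical calibration pairs: scaled-normalized dose difference (SND)
# versus Monte Carlo LET (keV/um) at matched positions.
snd_cal = np.array([0.02, 0.05, 0.10, 0.18, 0.30, 0.45])
let_cal = np.array([1.0, 1.8, 3.0, 5.0, 8.0, 12.0])

coeffs = np.polyfit(snd_cal, let_cal, deg=3)   # fitted polynomial coefficients
let_of_snd = np.poly1d(coeffs)

# Apply the calibration to a measured SND map from another proton range:
snd_map = np.array([[0.04, 0.12], [0.25, 0.40]])
print(let_of_snd(snd_map))                     # areal LET estimate, keV/um
```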

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  15. Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel

    CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second-order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.

  16. Effects of naloxone distribution alone or in combination with addiction treatment with or without pre-exposure prophylaxis for HIV prevention in people who inject drugs: a cost-effectiveness modelling study.

    PubMed

    Uyei, Jennifer; Fiellin, David A; Buchelli, Marianne; Rodriguez-Santana, Ramon; Braithwaite, R Scott

    2017-03-01

    In the USA, an epidemic of opioid overdose deaths is occurring, many of which are from heroin. Combining naloxone distribution with linkage to addiction treatment or pre-exposure prophylaxis (PrEP) for HIV prevention through syringe service programmes has the potential to save lives and be cost-effective. We estimated the outcomes and cost-effectiveness of five alternative strategies: no additional intervention, naloxone distribution, naloxone distribution plus linkage to addiction treatment, naloxone distribution plus PrEP, and naloxone distribution plus linkage to addiction treatment and PrEP. We developed a decision analytical Markov model to simulate opioid overdose, HIV incidence, overdose-related deaths, and HIV-related deaths in people who inject drugs in Connecticut, USA. Model input parameters were derived from published sources. We compared each strategy with no intervention, as well as simultaneously considering all strategies. Sensitivity analysis was done for all variables. Linkage to addiction treatment was referral to an opioid treatment programme for methadone. Endpoints were survival, life expectancy, quality-adjusted life-years (QALYs), number and percentage of overdose deaths averted, number of HIV-related deaths averted, total costs (in 2015 US$) associated with each strategy, and incremental cost per QALY gained. In the base-case analysis, compared with no additional intervention, the naloxone distribution strategy yielded an incremental cost-effectiveness ratio (ICER) of $323 per QALY, and naloxone distribution plus linkage to addiction treatment was cost saving compared with no additional intervention (greater effectiveness and less expensive). The most efficient strategies (ie, those conferring the greatest health benefit for a particular budget) were naloxone distribution combined with linkage to addiction treatment (cost saving), and naloxone distribution combined with PrEP and linkage to addiction treatment (ICER $95 337 per QALY) at a willingness-to-pay threshold of $100 000. In probabilistic sensitivity analysis, the combination of naloxone distribution, PrEP, and linkage to addiction treatment was the optimal strategy in 37% of iterations, and the combination of naloxone distribution and linkage to addiction treatment was the optimal strategy in 34% of iterations. Naloxone distribution through syringe service programmes is cost-effective compared with syringe distribution alone, but when combined with linkage to addiction treatment is cost saving compared with no additional services. A strategy that combines naloxone distribution, PrEP, and linkage to addiction treatment results in greater health benefits in people who inject drugs and is also cost-effective. Funding: State of Connecticut Department of Public Health and the National Institute of Mental Health. Copyright © 2017 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND license. All rights reserved.
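
    The ranking logic behind terms such as "cost saving" and "ICER" can be sketched compactly. In the toy example below the costs and QALYs are invented, and extended dominance is not handled; a strategy that is cheaper and more effective than another simply dominates it.

```python
def icers(strategies):
    """strategies: list of (name, cost_usd, qalys). Sort by cost and report
    each strategy's ICER versus the previous non-dominated strategy."""
    s = sorted(strategies, key=lambda t: t[1])
    frontier, out = [s[0]], [(s[0][0], None)]   # cheapest strategy is the anchor
    for name, cost, qaly in s[1:]:
        _, pc, pq = frontier[-1]
        if qaly <= pq:                           # costs more, no extra benefit
            out.append((name, "dominated"))
            continue
        out.append((name, (cost - pc) / (qaly - pq)))  # incremental $/QALY
        frontier.append((name, cost, qaly))
    return out

# Hypothetical per-person lifetime costs and QALYs:
demo = [("no additional intervention", 100_000, 10.00),
        ("naloxone distribution", 100_500, 11.55),
        ("naloxone + linkage to treatment", 99_000, 12.10)]
for name, icer in icers(demo):
    print(name, icer)   # the cheaper, more effective strategy dominates
```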

  17. Augmenting aquatic species sensitivity distributions with interspecies toxicity estimation models

    EPA Science Inventory

    Species sensitivity distributions (SSD) are cumulative distribution functions of species toxicity values. The SSD approach is increasingly being used in ecological risk assessment, but is often limited by available toxicity data necessary for diverse species representation. In ...

  18. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results (simulated breakthrough curves, sensitivity analysis, and calibrated parameter values) change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
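
    The numerical dispersion that differentiates these techniques is easy to reproduce. The sketch below advects a sharp front with a first-order upwind scheme and zero physical dispersion; the velocity, grid, and time step are arbitrary, and the smeared front is entirely an artifact of the scheme's truncation error.

```python
import numpy as np

def advect_disperse(c, v, D, dx, dt, steps):
    """Explicit 1D transport: first-order upwind advection (v > 0) plus
    central-difference dispersion. Upwinding introduces an artificial
    dispersion of roughly v*dx/2 * (1 - v*dt/dx)."""
    c = c.copy()
    for _ in range(steps):
        adv = -v * dt / dx * (c - np.roll(c, 1))
        dif = D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
        c += adv + dif
        c[0], c[-1] = 1.0, c[-2]   # fixed inlet, free outflow boundaries
    return c

x = np.linspace(0, 1, 201)
c0 = np.where(x < 0.05, 1.0, 0.0)          # initially sharp tracer front
c = advect_disperse(c0, v=1.0, D=0.0, dx=x[1] - x[0], dt=2e-3, steps=200)
# Even with zero physical dispersion the front has smeared over many cells:
print("cells with 0.01 < c < 0.99:", int(((c > 0.01) & (c < 0.99)).sum()))
```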

  19. [Secretion analysis of pathogenic bacteria culture in 115 rural chronic nasal-sinusitis patients].

    PubMed

    Zhang, Xiaoyuan; Sun, Jingwu; Chu, Shu

    2014-05-01

    To investigate the bacteria distribution and drug sensitivity characteristics of chronic rhinosinusitis (CRS) in rural patients, and to explore the effect of antibiotic use on pathogenic bacteria culture. Nasal sinus secretions were sampled from 115 CRS patients living in rural areas. Aerobic culture, anaerobic culture and drug sensitivity tests were performed on each sample. Antibiotic use within the previous 2 months and the previous 2 weeks was also recorded. Among the 115 specimens, 17 kinds of bacteria were detected in 37 cases; the positive rate of aerobic culture was 32.17%. Staphylococcus aureus and Staphylococcus epidermidis were the most common aerobes in rural CRS patients. Anaerobic cultures of 17 maxillary sinus specimens were all negative. Ninety cases (78.26%) had used antibiotics within the previous 2 months, and 73 cases (63.48%) within the previous 2 weeks. Chi-square analysis showed a higher bacterial culture rate in chronic rhinosinusitis with nasal polyps (the CRSwNP group), suggesting a correlation between bacterial infection and nasal polyp formation. CRS patients with positive bacterial cultures were sensitive to ofloxacin, cefotaxime, ciprofloxacin and cephalosporins, and were resistant to penicillin G, ampicillin and erythromycin. No specific difference was found in the bacteria distribution of rural CRS. Antibiotic misuse among rural CRS patients and the limitations of the anaerobic culture techniques are the main factors resulting in the low culture rate. Rational use of antimicrobial agents should be established on the basis of bacterial culture and drug sensitivity testing.

  20. Effect of extreme data loss on heart rate signals quantified by entropy analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao

    2015-02-01

    The phenomenon of data loss always occurs in the analysis of large databases. Maintaining the stability of analysis results in the event of data loss is very important. In this paper, we used a segmentation approach to generate synthetic signals in which segments are randomly removed from the data according to the Gaussian distribution and the exponential distribution of the original signal. The logistic map is then used for verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) Two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to logistic map testing. (2) The calculation results have preferable stability for base-scale entropy analysis, which is not sensitive to data loss. (3) The loss percentage of HRV signals should be controlled below the range (p = 30%), which can provide useful information in clinical applications.
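
    A minimal sketch of this kind of data-loss experiment follows: whole segments are deleted at random and an entropy measure is compared before and after. The surrogate series, the exponential segment-length choice, and the compact approximate-entropy implementation are illustrative, not the authors' code.

```python
import numpy as np

def remove_segments(signal, loss_frac=0.3, mean_len=20, seed=0):
    """Randomly delete whole segments (exponentially distributed lengths)
    until roughly loss_frac of the samples are gone, mimicking data loss."""
    rng = np.random.default_rng(seed)
    keep = np.ones(signal.size, bool)
    while (~keep).sum() < loss_frac * signal.size:
        start = rng.integers(0, signal.size)
        length = max(1, int(rng.exponential(mean_len)))
        keep[start:start + length] = False
    return signal[keep]

def apen(x, m=2, r_frac=0.2):
    """Compact approximate entropy (ApEn) with tolerance r = r_frac * std."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def phi(m):
        emb = np.lib.stride_tricks.sliding_window_view(x, m)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)  # Chebyshev
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000))        # surrogate HRV-like series
print(apen(x), apen(remove_segments(x)))    # entropy before and after loss
```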

  1. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    PubMed

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    The distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate performance of the distributed delay model with particular focus on model deterministic identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 with a CV% of 16.1%, compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance was modestly better. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, with a significantly longer running time.
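
    The convolution at the core of the distributed delay model can be evaluated numerically in a few lines. In the sketch below the input function, mean transit time, and time grid are hypothetical; with integer ν the same kernel reproduces the classic chain of ν transit compartments.

```python
import numpy as np
from scipy.stats import gamma

def delayed_signal(t, inp, nu, mtt):
    """Distributed-delay output: convolution of inp(t) with a gamma density
    of shape nu and mean transit time mtt (scale = mtt / nu)."""
    dt = t[1] - t[0]
    kernel = gamma.pdf(t, a=nu, scale=mtt / nu)   # gamma delay kernel
    return np.convolve(inp, kernel)[: t.size] * dt

t = np.linspace(0, 100, 2001)        # time, h (illustrative grid)
inp = np.exp(-0.5 * t)               # hypothetical input function
for nu in (1.46, 3.0):               # estimated shape vs classic integer shape
    out = delayed_signal(t, inp, nu, mtt=30.0)
    print(f"nu = {nu}: delayed peak at t = {t[np.argmax(out)]:.1f} h")
```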

  2. Associations of water balance and thermal sensitivity of toads with macroclimatic characteristics of geographical distribution.

    PubMed

    Titon, Braz; Gomes, Fernando Ribeiro

    2017-06-01

    Interspecific variation in patterns of geographical distribution of phylogenetically related species of amphibians might be related to physiological adaptation to different climatic conditions. Accordingly, a comparative study of resistance to evaporative water loss, rehydration rates and sensitivity of locomotor performance to variations in hydration level and temperature was performed for five species of Bufonidae toads (Rhinella granulosa, R. jimi, R. ornata, R. schneideri and R. icterica) inhabiting different Brazilian biomes. The hypotheses tested were that, when compared to species inhabiting mesic environments, species living in hot and dry areas would show: (1) greater resistance to evaporative water loss, (2) higher rates of water uptake, (3) lower sensitivity of locomotor performance to dehydration and (4) lower sensitivity of locomotor performance at higher temperatures and higher sensitivity of locomotor performance at lower temperatures. This comparative analysis showed relations between body mass and interspecific variation in rehydration rates and resistance to evaporative water loss in opposite directions. These results might represent a functional compensation associated with relatively lower absorption areas in larger toads and higher evaporative areas in smaller ones. Moreover, species from the semi-arid Caatinga showed locomotor performance less sensitive to dehydration but highly affected by lower temperatures, as well as greater resistance to evaporative water loss, when compared to the other species from the mesic Atlantic Forest and the savannah-like area called Cerrado. These results suggest patterns of adaptation to environmental conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Jones matrix polarization-correlation mapping of biological crystals networks

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Vanchuliak, O.; Motrich, A. V.; Gorsky, M. P.; Meglinskiy, I.; Marchuk, Yu. F.

    2017-08-01

    An optical model providing a Jones-matrix description of the mechanisms of optical anisotropy of polycrystalline films of human bile, namely optical activity and birefringence, has been proposed. An algorithm for reconstruction of the distributions of the parameters (optical rotation angles and phase shifts) of the indicated anisotropy types has been elaborated. Objective criteria for differentiation of bile films taken from healthy donors and patients with cholelithiasis have been determined by means of statistical analysis of such distributions. The operational characteristics (sensitivity, specificity and accuracy) of the Jones-matrix method for reconstruction of optical anisotropy parameters were defined.

  4. Numerical simulations for quantitative analysis of electrostatic interaction between atomic force microscopy probe and an embedded electrode within a thin dielectric: meshing optimization, sensitivity to potential distribution and impact of cantilever contribution

    NASA Astrophysics Data System (ADS)

    Azib, M.; Baudoin, F.; Binaud, N.; Villeneuve-Faure, C.; Bugarin, F.; Segonds, S.; Teyssedre, G.

    2018-04-01

    Recent experimental results demonstrated that an electrostatic force distance curve (EFDC) can be used for space charge probing in thin dielectric layers. A main advantage of the method is claimed to be its sensitivity to charge localization, which, however, needs to be substantiated by numerical simulations. In this paper, we have developed a model which permits us to compute an EFDC accurately by using the most sophisticated and accurate geometry for the atomic force microscopy probe. To avoid simplifications and in order to reproduce experimental conditions, the EFDC has been simulated for a system constituted of a polarized electrode embedded in a thin dielectric layer (SiNx). The individual contributions of forces on the tip and on the cantilever have been analyzed separately to account for possible artefacts. The EFDC sensitivity to potential distribution is studied through the change in electrode shape, namely the width and the depth. Finally, the numerical results have been compared with experimental data.

  5. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  6. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  7. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
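
    The final step, integrating the fitted source-count distribution, can be sketched as follows; the normalization, faint-flux cutoff, and resulting fraction are illustrative rather than a reproduction of the paper's calculation.

```python
from scipy.integrate import quad

S_b, a_hi, a_lo = 3.5e-11, 2.09, 1.07     # break flux and power-law indexes

def dnds(S):
    """Broken power-law differential source counts dN/dS (arbitrary norm)."""
    index = a_hi if S >= S_b else a_lo
    return (S / S_b) ** (-index)

def flux_between(lo, hi):
    """Flux contributed by sources in [lo, hi]: integral of S * dN/dS."""
    return quad(lambda S: S * dnds(S), lo, hi, points=[S_b])[0]

S_min, S_sens = 1e-13, 7.5e-12            # faint-end cut and sensitivity reach
fraction = flux_between(S_sens, 1e-8) / flux_between(S_min, 1e-8)
print(f"fraction of blazar flux above the sensitivity limit: {fraction:.2f}")
```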

  8. Development of Species Sensitivity Distributions for Wildlife Using Interspecies Toxicity Correlation Models

    EPA Science Inventory

    Species sensitivity distributions (SSD) are cumulative distributions of chemical toxicity of multiple species and have had limited application in wildlife risk assessment because of relatively small datasets of wildlife toxicity values. Interspecies correlation estimation (ICE) m...

  9. Temperature Measurement and Damage Detection in Concrete Beams Exposed to Fire Using PPP-BOTDA Based Fiber Optic Sensors.

    PubMed

    Bao, Yi; Hoehler, Matthew S; Smith, Christopher M; Bundy, Matthew; Chen, Genda

    2017-10-01

    In this study, distributed fiber optic sensors based on pulse pre-pump Brillouin optical time domain analysis (PPP-BOTDA) are characterized and deployed to measure spatially distributed temperatures in reinforced concrete specimens exposed to fire. Four beams were tested to failure in a natural-gas-fueled compartment fire, each instrumented with one fused silica, single-mode optical fiber as a distributed sensor and four thermocouples. Prior to concrete cracking, the distributed temperature was validated at the locations of the thermocouples to within a relative difference of less than 9%. Cracks in the concrete can be identified as sharp peaks in the temperature distribution, since the cracks are locally filled with hot air. Concrete cracking did not affect the sensitivity of the distributed sensor, but concrete spalling broke the optical fiber loop required for PPP-BOTDA measurements.

  10. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    NASA Astrophysics Data System (ADS)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels, such as gold nanopopcorns and nanostars, were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL⁻¹ with a limit of detection of 0.1 ng mL⁻¹, which was fivefold higher than the sensitivity of the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison with the amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with a gold nanosphere label. The proposed approach of LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and prevented the increased consumption of specific reagents (antibodies).

  11. Temporal Expression-based Analysis of Metabolism

    PubMed Central

    Segrè, Daniel

    2012-01-01

    Metabolic flux is frequently rerouted through cellular metabolism in response to dynamic changes in the intra- and extra-cellular environment. Capturing the mechanisms underlying these metabolic transitions in quantitative and predictive models is a prominent challenge in systems biology. Progress in this regard has been made by integrating high-throughput gene expression data into genome-scale stoichiometric models of metabolism. Here, we extend previous approaches to perform a Temporal Expression-based Analysis of Metabolism (TEAM). We apply TEAM to understanding the complex metabolic dynamics of the respiratorily versatile bacterium Shewanella oneidensis grown under aerobic, lactate-limited conditions. TEAM predicts temporal metabolic flux distributions using time-series gene expression data. Increased predictive power is achieved by supplementing these data with a large reference compendium of gene expression, which allows us to take into account the unique character of the distribution of expression of each individual gene. We further propose a straightforward method for studying the sensitivity of TEAM to changes in its fundamental free threshold parameter θ, and reveal that discrete zones of distinct metabolic behavior arise as this parameter is changed. By comparing the qualitative characteristics of these zones to additional experimental data, we are able to constrain the range of θ to a small, well-defined interval. In parallel, the sensitivity analysis reveals the inherently difficult nature of dynamic metabolic flux modeling: small errors early in the simulation propagate to relatively large changes later in the simulation. We expect that handling such “history-dependent” sensitivities will be a major challenge in the future development of dynamic metabolic-modeling techniques. PMID:23209390

  12. Sensitivity Analysis and Simulation of Theoretical Response of Ceramics to Strong Magnetic Fields

    DTIC Science & Technology

    2016-09-01

  13. Exploratory Modeling and the use of Simulation for Policy Analysis

    DTIC Science & Technology

    1992-01-01

  14. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  15. Sub-grid scale precipitation in ALCMs: re-assessing the land surface sensitivity using a single column model

    NASA Astrophysics Data System (ADS)

    Pitman, Andrew J.; Yang, Zong-Liang; Henderson-Sellers, Ann

    1993-10-01

    The sensitivity of a land surface scheme to the distribution of precipitation within a general circulation model's grid element is investigated. Earlier experiments which showed considerable sensitivity of the runoff and evaporation simulation to the distribution of precipitation are repeated in the light of other results which show no sensitivity of evaporation to the distribution of precipitation. Results show that while the earlier results over-estimated the sensitivity of the surface hydrology to the precipitation distribution, the general conclusion that the system is sensitive is supported. It is found that changing the distribution of precipitation from falling over 100% of the grid square to falling over 10% leads to a reduction in evaporation from 1578 mm y⁻¹ to 1195 mm y⁻¹, while runoff increases from 278 mm y⁻¹ to 602 mm y⁻¹. The sensitivity is explained in terms of evaporation being dominated by available energy when precipitation falls over nearly the entire grid square, but by moisture availability (mainly intercepted water) when it falls over little of the grid square. These results also indicate that earlier work using stand-alone forcing to drive land surface schemes ‘off-line’, and to investigate the sensitivity of land surface codes to various parameters, leads to results which are non-repeatable in single column simulations.

  16. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density) and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.

  17. Total scattering and pair distribution function analysis in modelling disorder in PZN (PbZn1/3Nb2/3O3)

    PubMed Central

    Whitfield, Ross E.; Goossens, Darren J.; Welberry, T. Richard

    2016-01-01

    The ability of the pair distribution function (PDF) analysis of total scattering (TS) from a powder to determine the local ordering in ferroelectric PZN (PbZn1/3Nb2/3O3) has been explored by comparison with a model established using single-crystal diffuse scattering (SCDS). While X-ray PDF analysis is discussed, the focus is on neutron diffraction results because of the greater extent of the data and the sensitivity of the neutron to oxygen atoms, the behaviour of which is important in PZN. The PDF was shown to be sensitive to many effects not apparent in the average crystal structure, including variations in the B-site–O separation distances and the fact that 〈110〉 Pb2+ displacements are most likely. A qualitative comparison between SCDS and the PDF shows that some features apparent in SCDS were not apparent in the PDF. These tended to pertain to short-range correlations in the structure, rather than to interatomic separations. For example, in SCDS the short-range alternation of the B-site cations was quite apparent in diffuse scattering at (½ ½ ½), whereas it was not apparent in the PDF. PMID:26870378

  18. Analysis of the Perceived Adequacy of Air Force Civil Engineering Prime Beef Training

    DTIC Science & Technology

    1985-09-01

  1. Influences of geological parameters to probabilistic assessment of slope stability of embankment

    NASA Astrophysics Data System (ADS)

    Nguyen, Qui T.; Le, Tuan D.; Konečný, Petr

    2018-04-01

    This article considers the influences of geological parameters on the slope stability of an embankment in a probabilistic analysis using the SLOPE/W computational system. The stability of a simple slope is evaluated with and without pore-water pressure on the basis of the variation of soil properties. Normal distributions of unit weight, cohesion and internal friction angle are assumed. The Monte Carlo simulation technique is employed to perform the analysis of the critical slip surface. A sensitivity analysis is performed to observe the variation of the geological parameters and their effects on the safety factor of the slope.
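
    A minimal sketch of the Monte Carlo treatment described above, assuming a simple infinite-slope factor-of-safety model in place of the SLOPE/W limit-equilibrium search used by the authors; all parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000

    # Normally distributed soil properties (illustrative means and standard deviations)
    gamma = rng.normal(18.0, 1.0, n)                    # unit weight, kN/m^3
    c = rng.normal(10.0, 2.0, n)                        # cohesion, kPa
    phi = rng.normal(np.radians(30), np.radians(2), n)  # friction angle, rad

    beta, h, u = np.radians(25), 5.0, 0.0  # slope angle, slip depth (m), pore pressure (kPa)

    # Infinite-slope factor of safety; set u > 0 to include pore-water pressure
    fs = (c + (gamma * h * np.cos(beta)**2 - u) * np.tan(phi)) / (
        gamma * h * np.sin(beta) * np.cos(beta))

    print(f"mean FS = {fs.mean():.2f}, P(FS < 1) = {(fs < 1).mean():.4f}")
    ```

    A one-at-a-time sensitivity check then amounts to repeating the simulation while perturbing one parameter's distribution at a time and comparing the resulting failure probabilities.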

  2. Development of aquatic toxicity benchmarks for oil products using species sensitivity distributions

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to spilled oil and chemically dispersed oil continues to be a significant challenge in spill response and impact assessment. We used standardized tests from the literature to develop species sensitivity distributions (SSDs) of...
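
    For illustration, an SSD is typically built by fitting a log-normal distribution to species-level toxicity endpoints and reading off a hazardous concentration such as the HC5; a minimal sketch with made-up LC50 values:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-species LC50 values (mg/L) for one oil product
    lc50 = np.array([0.8, 1.5, 2.2, 3.9, 5.0, 7.4, 12.0, 19.5])

    # Fit a log-normal SSD: log-transformed toxicity values are modelled as normal
    mu, sigma = stats.norm.fit(np.log(lc50))

    # HC5: the concentration expected to exceed the tolerance of 5% of species
    hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
    print(f"HC5 = {hc5:.2f} mg/L")
    ```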

  3. U.S. Port Development and the Expanding World Coal Trade: A Study of Alternatives.

    DTIC Science & Technology

    1982-06-01

    Contents excerpt: Dredging Program (p. 70); Growth Potential Index (p. 71); B. Sensitivity Analysis (p. 74): Dredging Effect...; Program to Compute Cost and Coal Capacities (p. 95); List of References (p. 99); Initial Distribution List (p. 102). List of tables: Deepwater Terminal Evaluation Summary (p. 64); Coal Export Capacities by Port (p. 68); Optimal and Next Best Programs for Various...

  4. ONR Europe Reports. Computer Science/Computer Engineering in Central Europe: A Report on Czechoslovakia, Hungary, and Poland

    DTIC Science & Technology

    1992-08-01

    Rychlik J.: Simulation of distributed control systems. Research report of the Institute of Technology in Pilsen, no. 209-07-85, Jun. 1985. Kocur P.: Sensitivity analysis of reliability parameters. Proceedings of conf. FTSD, Brno, Jun. 1986, pp. 97-101. Smrha P., Kocur P., Racek S.: A...

  5. Evaluation of Swine-Specific PCR Assays Used for Fecal Source Tracking and Analysis of Molecular Diversity of Swine-Specific "Bacteroidales" Populations

    EPA Science Inventory

    In this study we evaluated the specificity, distribution, and sensitivity of Bacteroidales-based (PF163 and PigBac1) and methanogen-based (P23-2) assays proposed to detect swine fecal pollution in environmental waters. The assays were tested against 220 fecal DNA extracts derived from t...

  6. A quantitative sensitivity analysis on the behaviour of common thermal indices under hot and windy conditions in Doha, Qatar

    NASA Astrophysics Data System (ADS)

    Fröhlich, Dominik; Matzarakis, Andreas

    2016-04-01

    Human thermal perception is best described through thermal indices. The most popular thermal indices applied in human bioclimatology are the perceived temperature (PT), the Universal Thermal Climate Index (UTCI), and the physiologically equivalent temperature (PET). They are analysed here with a focus on their sensitivity to single meteorological input parameters under the hot and windy meteorological conditions observed in Doha, Qatar. It can be noted that the results for the three indices are distributed quite differently. Furthermore, they respond quite differently to modifications in the input conditions. All of them show particular limitations and shortcomings that have to be considered and discussed. While the results for PT are unevenly distributed, UTCI shows limitations concerning the input data accepted, and PET seems to respond insufficiently to changes in vapour pressure. The indices should therefore be improved to be valid for several kinds of climates.

  7. Sensitivity of soil moisture initialization for decadal predictions under different regional climatic conditions in Europe

    NASA Astrophysics Data System (ADS)

    Khodayar, S.; Sehlinger, A.; Feldmann, H.; Kottmeier, C.

    2015-12-01

    The impact of soil initialization is investigated through perturbation simulations with the regional climate model COSMO-CLM. The focus of the investigation is to assess the sensitivity of simulated extreme periods, dry and wet, to soil moisture initialization in different climatic regions over Europe and to establish the necessary spin-up time within the framework of decadal predictions for these regions. The sensitivity experiments consisted of a reference simulation from 1968 to 1999 and 5 simulations from 1972 to 1983. The Effective Drought Index (EDI) is used to select and quantify drought status in the reference run and to establish the simulation time period for the sensitivity experiments. Different soil initialization procedures are investigated. The sensitivity of the decadal predictions to soil moisture initial conditions is investigated through the analysis of the variability of the water cycle components (WCC). On an episodic time scale, the local effects of soil moisture on the boundary layer and the propagated effects on the large-scale dynamics are analysed. The results show: (a) COSMO-CLM reproduces the observed features of the drought index. (b) Soil moisture initialization exerts a relevant impact on the WCC, e.g., precipitation distribution and intensity. (c) Regional characteristics strongly affect the response of the WCC; precipitation and evapotranspiration deviations are larger for humid regions. (d) The initial soil conditions (wet/dry), the regional characteristics (humid/dry) and the annual period (wet/dry) play a key role in the time the soil needs to restore quasi-equilibrium and in the impact on atmospheric conditions. Humid areas and, for all regions, a humid initialization exhibit shorter spin-up times, and the soil reacts more sensitively when initialised during dry periods. (e) The initial soil perturbation may markedly modify the atmospheric pressure field, wind circulation systems and the atmospheric water vapour distribution, affecting atmospheric stability and thus modifying precipitation intensity and distribution even several years after the initialization.

  8. Precision measurement of the mass and width of the W boson at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, Sarah Alam

    2009-09-01

    A precision measurement of the mass and width of the W boson is presented. The W bosons are produced in proton-antiproton collisions occurring at a centre-of-mass energy of 1.96 TeV at the Tevatron accelerator. The data used for the analyses were collected by the Collider Detector at Fermilab (CDF) and correspond to an average integrated luminosity of 350 pb⁻¹ for the W width analysis in the electron and muon channels and an average integrated luminosity of 2350 pb⁻¹ for the W mass analysis. The mass and width of the W boson are extracted by fitting to the transverse mass distribution, with the peak of the distribution being most sensitive to the mass and the tail of the distribution sensitive to the width. The W width measurements in the electron and muon channels are combined to give a final result of 2032 ± 73 MeV. The systematic uncertainty on the W mass from the recoil of the W boson against the initial-state gluon radiation is discussed. A systematic study of the recoil in Z → e⁺e⁻ events, where one electron is reconstructed in the central calorimeter and the other in the plug calorimeter, and its effect on the W mass is presented for the first time in this thesis.
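
    For context, the transverse mass referred to here is built from the charged-lepton transverse momentum and the missing (neutrino) transverse momentum (standard definition):

        m_T = \sqrt{2\, p_T^{\ell}\, p_T^{\nu} \left(1 - \cos \Delta\phi_{\ell\nu}\right)}

    The distribution has a Jacobian edge near m_W, so the peak region drives the mass fit, while the tail above the edge, populated through the Breit-Wigner line shape, carries most of the information on the width.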

  9. A note on `a replenishment policy for items with price-dependent demand, time-proportional deterioration and no shortages'

    NASA Astrophysics Data System (ADS)

    Shah, Nita H.; Soni, Hardik N.; Gupta, Jyoti

    2014-08-01

    In a recent paper, Begum et al. (2012, International Journal of Systems Science, 43, 903-910) established a pricing and replenishment policy for an inventory system with a price-sensitive demand rate, a time-proportional deterioration rate that follows a three-parameter Weibull distribution, and no shortages. In their model formulation, it is observed that the retailer's stock level reaches zero before deterioration occurs. Consequently, the model reduces to a traditional inventory model with a price-sensitive demand rate and no shortages. Hence, the main purpose of this note is to modify and present a complete model formulation for Begum et al. (2012). The proposed model is validated by a numerical example, and a sensitivity analysis of the parameters is carried out.
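
    For reference, a time-proportional deterioration rate following a three-parameter Weibull distribution is conventionally written in this literature as (shown here as an assumption about the notation, since the note itself is not quoted):

        \theta(t) = \alpha \beta (t - \gamma)^{\beta - 1}, \qquad t > \gamma,\ \alpha > 0,\ \beta > 0

    so the fraction of on-hand stock deteriorating per unit time grows with time when \beta > 1, and deterioration only begins after the location parameter \gamma.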

  10. The direct analysis of drug distribution of rotigotine-loaded microspheres from tissue sections by LESA coupled with tandem mass spectrometry.

    PubMed

    Xu, Li-Xiao; Wang, Tian-Tian; Geng, Yin-Yin; Wang, Wen-Yan; Li, Yin; Duan, Xiao-Kun; Xu, Bin; Liu, Charles C; Liu, Wan-Hui

    2017-09-01

    The direct analysis of the drug distribution of rotigotine-loaded microspheres (RoMS) from tissue sections by liquid extraction surface analysis (LESA) coupled with tandem mass spectrometry (MS/MS) was demonstrated. The RoMS distribution in rat tissues assessed by the ambient LESA-MS/MS approach, without extensive or tedious sample pretreatment, was compared with that obtained by a conventional liquid chromatography tandem mass spectrometry (LC-MS/MS) method in which organ excision and subsequent solvent extraction are commonly employed before analysis. Results obtained from the two methods were well correlated for a majority of the organs, such as muscle, liver, stomach, and hippocampus. The distribution of RoMS in the brain, however, was found to be mainly focused in the hippocampus and striatum regions, as shown by the LESA-imaged profiles. The LESA approach we developed is sufficiently sensitive, with an estimated LLOQ of 0.05 ng/mL of rotigotine in brain tissue, and information-rich with minimal sample preparation, making it suitable and promising for assisting the development of new drug delivery systems for controlled drug release and protection. Graphical abstract: Workflow for the LESA-MS/MS imaging of a brain tissue section after intramuscular RoMS administration.

  11. Distribution of siderophile and other trace elements in melt rock at the Chicxulub impact structure

    NASA Technical Reports Server (NTRS)

    Schuraytz, B. C.; Lindstrom, D. J.; Martinez, R. R.; Sharpton, V. L.; Marin, L. E.

    1994-01-01

    Recent isotopic and mineralogical studies have demonstrated a temporal and chemical link between the Chicxulub multiring impact basin and ejecta at the Cretaceous-Tertiary boundary. A fundamental problem yet to be resolved, however, is the identification of the projectile responsible for this cataclysmic event. Drill core samples of impact melt rock from the Chicxulub structure contain Ir and Os abundances and Re-Os isotopic ratios indicating the presence of up to approximately 3 percent meteoritic material. We have used a technique involving microdrilling and high-sensitivity instrumental neutron activation analysis (INAA), in conjunction with electron microprobe analysis, to further characterize the distribution of siderophile and other trace elements among phases within the C1-N10 melt rock.

  12. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
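
    The quoted 0.993 can be reproduced from the stated lifetime statistics, assuming the lifetime distribution is normal (the abstract reports only a mean and standard deviation):

    ```python
    from scipy import stats

    mean, sd, goal = 7.7, 1.1, 5.0  # years, from the abstract

    # P(lifetime > 5 years) under a normal(mean=7.7, sd=1.1) lifetime model
    p = stats.norm.sf(goal, loc=mean, scale=sd)
    print(f"P(lifetime > {goal} yr) = {p:.3f}")  # ~0.993, matching the abstract
    ```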

  13. Conformations and charge distributions of diazocyclopropanes

    NASA Astrophysics Data System (ADS)

    Borges, Itamar, Jr.

    Three diazo-substituted cyclopropane compounds, which have been suggested as new potential high-energy compounds, were studied employing the B3LYP-DFT/6-31G(d,p) method. Geometries were optimized. Distributed multipole analysis, computed from the B3LYP-DFT/6-31G(d,p) density matrix, was used to describe the details of the molecular charge distribution of the three molecules. It was verified that electron withdrawal from the ring C atoms and charge build-up on the N atoms bonded to the ring increased with the number of diazo groups. These effects were related to increased sensitivity to impact and the ease of C-N bond breaking in the three compounds.

  14. Understanding the effect of hammering process on the vibration characteristics of cymbals

    NASA Astrophysics Data System (ADS)

    Kuratani, F.; Yoshida, T.; Koide, T.; Mizuta, T.; Osamura, K.

    2016-09-01

    Cymbals are thin domed plates used as percussion instruments. When cymbals are struck, they vibrate and radiate sound. Cymbals are made through spin forming, hammering, and lathing. The spin forming creates the basic shape of the cymbal, which determines its basic vibration characteristics. The hammering and lathing produce specific sound adjustments by changing the cymbal's vibration characteristics. In this study, we examine how hammering cymbals affects their vibration characteristics. The hammering produces plastic deformation (small, shallow dents) on the cymbal's surface, generating residual stresses throughout it. These residual stresses change the vibration characteristics. We perform finite element analysis of a cymbal to obtain its stress distribution and the resulting change in vibration characteristics. To reproduce the stress distribution, we use thermal stress analysis, and then with this stress distribution we perform vibration analysis. The results show that each of the cymbal's modes has a different sensitivity to the thermal load (i.e., hammering). This difference causes changes in the frequency response and the deflection shape that significantly improve the sound radiation efficiency. In addition, we explain the changes in natural frequencies by the stress and modal strain energy distributions.

  15. Utility of bromide and heat tracers for aquifer characterization affected by highly transient flow conditions

    NASA Astrophysics Data System (ADS)

    Ma, Rui; Zheng, Chunmiao; Zachara, John M.; Tonkin, Matthew

    2012-08-01

    A tracer test using both bromide and heat tracers conducted at the Integrated Field Research Challenge site in Hanford 300 Area (300A), Washington, provided an instrument for evaluating the utility of bromide and heat tracers for aquifer characterization. The bromide tracer data were critical to improving the calibration of the flow model complicated by the highly dynamic nature of the flow field. However, most bromide concentrations were obtained from fully screened observation wells, lacking depth-specific resolution for vertical characterization. On the other hand, depth-specific temperature data were relatively simple and inexpensive to acquire. However, temperature-driven fluid density effects influenced heat plume movement. Moreover, the temperature data contained "noise" caused by heating during fluid injection and sampling events. Using the hydraulic conductivity distribution obtained from the calibration of the bromide transport model, the temperature depth profiles and arrival times of temperature peaks simulated by the heat transport model were in reasonable agreement with observations. This suggested that heat can be used as a cost-effective proxy for solute tracers for calibration of the hydraulic conductivity distribution, especially in the vertical direction. However, a heat tracer test must be carefully designed and executed to minimize fluid density effects and sources of noise in temperature data. A sensitivity analysis also revealed that heat transport was most sensitive to hydraulic conductivity and porosity, less sensitive to thermal distribution factor, and least sensitive to thermal dispersion and heat conduction. This indicated that the hydraulic conductivity remains the primary calibration parameter for heat transport.

  16. Utility of Bromide and Heat Tracers for Aquifer Characterization Affected by Highly Transient Flow Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Rui; Zheng, Chunmiao; Zachara, John M.

    A tracer test using both bromide and heat tracers conducted at the Integrated Field Research Challenge site in Hanford 300 Area (300A), Washington, provided an instrument for evaluating the utility of bromide and heat tracers for aquifer characterization. The bromide tracer data were critical to improving the calibration of the flow model complicated by the highly dynamic nature of the flow field. However, most bromide concentrations were obtained from fully screened observation wells, lacking depth-specific resolution for vertical characterization. On the other hand, depth-specific temperature data were relatively simple and inexpensive to acquire. However, temperature-driven fluid density effects influenced heat plume movement. Moreover, the temperature data contained "noise" caused by heating during fluid injection and sampling events. Using the hydraulic conductivity distribution obtained from the calibration of the bromide transport model, the temperature depth profiles and arrival times of temperature peaks simulated by the heat transport model were in reasonable agreement with observations. This suggested that heat can be used as a cost-effective proxy for solute tracers for calibration of the hydraulic conductivity distribution, especially in the vertical direction. However, a heat tracer test must be carefully designed and executed to minimize fluid density effects and sources of noise in temperature data. A sensitivity analysis also revealed that heat transport was most sensitive to hydraulic conductivity and porosity, less sensitive to thermal distribution factor, and least sensitive to thermal dispersion and heat conduction. This indicated that the hydraulic conductivity remains the primary calibration parameter for heat transport.

  17. Predicting potential global distributions of two Miscanthus grasses: implications for horticulture, biofuel production, and biological invasions.

    PubMed

    Hager, Heather A; Sinasac, Sarah E; Gedalof, Ze'ev; Newman, Jonathan A

    2014-01-01

    In many regions, large proportions of the naturalized and invasive non-native floras were originally introduced deliberately by humans. Pest risk assessments are now used in many jurisdictions to regulate the importation of species and usually include an estimation of the potential distribution in the import area. Two species of Asian grass (Miscanthus sacchariflorus and M. sinensis) that were originally introduced to North America as ornamental plants have since escaped cultivation. These species and their hybrid offspring are now receiving attention for large-scale production as biofuel crops in North America and elsewhere. We evaluated their potential global climate suitability for cultivation and potential invasion using the niche model CLIMEX and evaluated the models' sensitivity to the parameter values. We then compared the sensitivity of projections of future climatically suitable area under two climate models and two emissions scenarios. The models indicate that the species have been introduced to most of the potential global climatically suitable areas in the northern but not the southern hemisphere. The more narrowly distributed species (M. sacchariflorus) is more sensitive to changes in model parameters, which could have implications for modelling species of conservation concern. Climate projections indicate likely contractions in potential range in the south, but expansions in the north, particularly in introduced areas where biomass production trials are under way. Climate sensitivity analysis shows that projections differ more between the selected climate change models than between the selected emissions scenarios. Local-scale assessments are required to overlay suitable habitat with climate projections to estimate areas of cultivation potential and invasion risk.

  18. Spectroscopic analysis of autofluorescence distribution in digestive organ for unstained metabolism-based tumor detection

    NASA Astrophysics Data System (ADS)

    Arimoto, Hidenobu; Iwata, Atsushi; Kagawa, Keiichiro; Sanomura, Yoji; Yoshida, Shigeto; Kawahito, Shoji; Tanaka, Shinji

    2017-02-01

    Autofluorescence distribution of the coenzymes NADH and FAD is investigated for unstained tumor detection using an originally designed confocal spectroscope. The tumor region in a digestive organ can be determined by evaluating the redox index, which is defined as the ratio of NADH to FAD concentration. However, the redox index is largely influenced by the presence of collagen in the submucosal layer, because its autofluorescence spectrum overlaps considerably with that of NADH. Therefore, it is necessary to know in advance the distribution of NADH, FAD, and collagen in the mucosal layer. The purpose of our study is to investigate the vertical distribution of the redox index in tissue using depth-sensitive autofluorescence spectroscopy. The experimental procedure and the results are presented.

  19. The Multidimensional Efficiency of Pension System: Definition and Measurement in Cross-Country Studies.

    PubMed

    Chybalski, Filip

    The existing literature on the efficiency of pension systems usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is dedicated mainly to cross-country studies of empirical pension systems; however, it may also be employed in the analysis of a given pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I show that the method works and enables comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems seems to be poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is largely insensitive to the efficiency of GDP distribution. The opposite holds for the efficiency of consumption smoothing: this is generally sensitive to the efficiency of GDP distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results of the study indicate the Norwegian and the Icelandic pension systems to be the most efficient in the analyzed group.
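
    A minimal sketch of the two statistical tools named above, with random numbers standing in for the four efficiency dimensions of the 28 countries (the actual indicator definitions are the paper's and are not reproduced here):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = rng.random((28, 4))  # rows = countries, columns = efficiency dimensions

    # Spearman's rank correlation between two efficiency dimensions
    rho, pval = stats.spearmanr(X[:, 0], X[:, 1])
    print(f"Spearman rho = {rho:.2f} (p = {pval:.2f})")

    # Hierarchical cluster analysis of countries on all four dimensions
    labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
    print(labels)  # cluster membership per country
    ```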

  20. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  1. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
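
    The mean logarithmic score mentioned in both versions of this abstract is simple to compute once leave-one-out predictive distributions are available; a minimal sketch, with hypothetical predictive probabilities standing in for the INLA output:

    ```python
    import numpy as np

    # Hypothetical leave-one-out predictive probabilities p_i = P(y_i | y_{-i})
    # of the observed counts under two competing GLMMs
    p_model_a = np.array([0.12, 0.08, 0.20, 0.15, 0.05])
    p_model_b = np.array([0.10, 0.11, 0.18, 0.22, 0.09])

    def mean_log_score(p):
        """Mean logarithmic score (negative log predictive density): lower is better."""
        return -np.mean(np.log(p))

    for name, p in [("A", p_model_a), ("B", p_model_b)]:
        print(f"model {name}: mean log score = {mean_log_score(p):.3f}")
    ```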

  2. Surrogacy of progression-free survival (PFS) for overall survival (OS) in esophageal cancer trials with preoperative therapy: Literature-based meta-analysis.

    PubMed

    Kataoka, K; Nakamura, K; Mizusawa, J; Kato, K; Eba, J; Katayama, H; Shibata, T; Fukuda, H

    2017-10-01

    There have been no reports evaluating progression-free survival (PFS) as a surrogate endpoint in resectable esophageal cancer. This study was conducted to evaluate the trial-level correlation between PFS and overall survival (OS) in resectable esophageal cancer with preoperative therapy and to explore the potential of PFS as a surrogate endpoint for OS. A systematic literature search of randomized trials with preoperative chemotherapy or preoperative chemoradiotherapy for esophageal cancer reported from January 1990 to September 2014 was conducted using PubMed and the Cochrane Library. Weighted linear regression, using the sample size of each trial as the weight, was used to estimate the coefficient of determination (R²) between PFS and OS. The primary analysis included trials in which the HR for both PFS and OS was reported. The sensitivity analysis included trials in which either the HR or the median survival time of PFS and OS was reported. In the sensitivity analysis, the HR was estimated from the median survival times of PFS and OS, assuming exponential distributions. Of 614 articles, 10 trials were selected for the primary analysis and 15 for the sensitivity analysis. The primary analysis did not show a correlation between treatment effects on PFS and OS (R² = 0.283, 95% CI [0.00-0.90]). The sensitivity analysis did not show an association between PFS and OS (R² = 0.084, 95% CI [0.00-0.70]). Although the number of randomized controlled trials evaluating preoperative therapy for esophageal cancer is limited at the moment, PFS is not suitable as a surrogate endpoint for OS in the role of primary endpoint. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
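
    The hazard-ratio reconstruction used in the sensitivity analysis follows directly from the exponential assumption: if survival in each arm is exponential with rate \lambda, the median is m = \ln 2 / \lambda, so

        \mathrm{HR} = \frac{\lambda_{\text{exp}}}{\lambda_{\text{ctrl}}} = \frac{\ln 2 / m_{\text{exp}}}{\ln 2 / m_{\text{ctrl}}} = \frac{m_{\text{ctrl}}}{m_{\text{exp}}}

    i.e., the HR is the inverse ratio of the reported median survival times.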

  3. Methylation-sensitive amplified polymorphism-based genome-wide analysis of cytosine methylation profiles in Nicotiana tabacum cultivars.

    PubMed

    Jiao, J; Wu, J; Lv, Z; Sun, C; Gao, L; Yan, X; Cui, L; Tang, Z; Yan, B; Jia, Y

    2015-11-26

    This study aimed to investigate cytosine methylation profiles in different tobacco (Nicotiana tabacum) cultivars grown in China. Methylation-sensitive amplified polymorphism was used to analyze genome-wide global methylation profiles in four tobacco cultivars (Yunyan 85, NC89, K326, and Yunyan 87). Amplicons with methylated C motifs were cloned by reamplified polymerase chain reaction, sequenced, and analyzed. The results show that geographical location had a greater effect on methylation patterns in the tobacco genome than did sampling time. Analysis of the CG dinucleotide distribution in methylation-sensitive polymorphic restriction fragments suggested that a CpG dinucleotide cluster-enriched area is a possible site of cytosine methylation in the tobacco genome. The sequence alignments of the Nia1 gene (that encodes nitrate reductase) in Yunyan 87 in different regions indicate that a C-T transition might be responsible for the tobacco phenotype. T-C nucleotide replacement might also be responsible for the tobacco phenotype and may be influenced by geographical location.

  4. An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.

    PubMed

    Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K

    2007-08-01

    To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution (EBGM05)) were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms), results of signal detection using DA compared to standard methods were: sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
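
    For orientation, a minimal sketch of disproportionality analysis using the proportional reporting ratio (PRR), a simpler statistic than the empirical Bayes EBGM05 shrinkage bound evaluated in the study; the counts are hypothetical:

    ```python
    # 2x2 contingency table for one product-event pair (hypothetical counts)
    a = 25     # reports: product of interest with the event of interest
    b = 975    # reports: product of interest with other events
    c = 180    # reports: other products with the event of interest
    d = 48820  # reports: other products with other events

    # Proportional reporting ratio: event rate for the product vs. all other products
    prr = (a / (a + b)) / (c / (c + d))
    print(f"PRR = {prr:.1f}")  # values well above 1 flag a potential signal
    ```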

  5. A sensitivity analysis of volcanic aerosol dispersion in the stratosphere. [Mt. Fuego, Guatemala eruptions

    NASA Technical Reports Server (NTRS)

    Butler, C. F.

    1979-01-01

    A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. The aerosol dispersion processes included are transport, sedimentation, gas-phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.

  6. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator-evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  7. 13C labeling analysis of sugars by high resolution-mass spectrometry for metabolic flux analysis.

    PubMed

    Acket, Sébastien; Degournay, Anthony; Merlier, Franck; Thomasset, Brigitte

    2017-06-15

    Metabolic flux analysis is particularly complex in plant cells because of their highly compartmented metabolism. Analysis of free sugars is interesting because it provides data to define fluxes around the hexose, pentose, and triose phosphate pools in different compartments. In this work, we present a method to analyze the isotopomer distribution of free sugars labeled with carbon-13 using liquid chromatography-high resolution mass spectrometry, without a derivatization procedure, adapted for metabolic flux analysis. Our results showed good sensitivity, reproducibility and better accuracy in determining the isotopic enrichment of free sugars compared to our previous methods [5, 6]. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    PubMed

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
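
    The "traditional definition" invoked in the second approach is the standard normalized (relative) sensitivity function:

        S_{ij}(t) = \frac{\partial C_{i}(t)}{\partial p_{j}} \cdot \frac{p_{j}}{C_{i}(t)}

    the fractional change in tissue concentration C_i per fractional change in parameter p_j; autosensitivity takes p_j from compartment i's own parameters, cross-sensitivity from another compartment's.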

  9. Comparison of Species Sensitivity Distributions Derived from Interspecies Correlation Models to Distributions used to Derive Water Quality Criteria

    EPA Science Inventory

    Species sensitivity distributions (SSD) require a large number of measured toxicity values to define a chemical’s toxicity to multiple species. This investigation comprehensively evaluated the accuracy of SSDs generated from toxicity values predicted from interspecies correlation...

  10. Sensitivity of Podosphaera aphanis isolates to DMI fungicides: distribution and reduced cross-sensitivity.

    PubMed

    Sombardier, Audrey; Dufour, Marie-Cécile; Blancard, Dominique; Corio-Costet, Marie-France

    2010-01-01

    Management of strawberry powdery mildew, Podosphaera aphanis (Wallr.), requires numerous fungicide treatments. Limiting epidemics is heavily dependent on sterol demethylation inhibitors (DMIs) such as myclobutanil or penconazole. Recently, a noticeable reduction in the efficacy of these triazole fungicides was reported by strawberry growers in France. The goal of this study was to investigate the state of DMI sensitivity in French P. aphanis and provide tools for improved pest management. Using leaf disc sporulation assays, the sensitivity of 23 isolates of P. aphanis to myclobutanil and penconazole was monitored. Myclobutanil EC₅₀ values ranged from less than 0.1 to 14.67 mg L⁻¹, and penconazole EC₅₀ values from 0.04 to 4.2 mg L⁻¹. A cross-analysis and a Venn diagram showed reduced sensitivity and a positive correlation between the isolates less sensitive to myclobutanil and those less sensitive to penconazole; 73.9% of isolates were less sensitive to a DMI and 47.8% exhibited reduced sensitivity to both fungicides. The results show that sensitivity to myclobutanil and, to a lesser extent, to penconazole has declined in strawberry powdery mildew in France. Therefore, urgent action is required in order to document its development and optimise methods of control.

  11. Immuno-magnetic beads-based extraction-capillary zone electrophoresis-deep UV laser-induced fluorescence analysis of erythropoietin.

    PubMed

    Wang, Heye; Dou, Peng; Lü, Chenchen; Liu, Zhen

    2012-07-13

    Erythropoietin (EPO) is an important glycoprotein hormone. Recombinant human EPO (rhEPO) is an important therapeutic drug and can also be used as a doping agent in sports. The analysis of EPO glycoforms in the pharmaceutical and sports areas greatly challenges analytical scientists in several respects, among which sensitive detection and effective, facile sample preparation are two essential issues. Herein, we investigated new possibilities for these two aspects. Deep UV laser-induced fluorescence detection (deep UV-LIF) was established to detect the intrinsic fluorescence of EPO, while an immuno-magnetic beads-based extraction (IMBE) was developed to specifically extract EPO glycoforms. Combined with capillary zone electrophoresis (CZE), CZE-deep UV-LIF allows high-resolution glycoform profiling with improved sensitivity. The detection sensitivity was improved by one order of magnitude compared with UV absorbance detection. An additional advantage is that the original glycoform distribution can be completely preserved because no fluorescent labeling is needed. By combining IMBE with CZE-deep UV-LIF, the overall detection sensitivity was 1.5 × 10⁻⁸ mol/L, enhanced by two orders of magnitude relative to conventional CZE with UV absorbance detection. It is applicable to the analysis of pharmaceutical preparations of EPO, but the sensitivity is insufficient for the anti-doping analysis of EPO in blood and urine. IMBE can be a straightforward and effective approach for sample preparation. However, antibodies with high specificity are the key for application to urine samples, because some urinary proteins can severely interfere with the immuno-extraction. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
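
    A minimal sketch of the GMM classification step described above, with random numbers standing in for the LMA-Nmass-LAI trait data; scikit-learn's GaussianMixture is used here as a stand-in for the authors' implementation:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Hypothetical training data: (LMA, Nmass, LAI) trait vectors per vegetation type
    classes = {
        "forest":    rng.normal([80.0, 2.0, 5.0], [10.0, 0.3, 0.8], size=(200, 3)),
        "grassland": rng.normal([50.0, 1.5, 2.0], [8.0, 0.2, 0.5], size=(200, 3)),
    }

    # One Gaussian mixture per vegetation type, used as a generative classifier
    models = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
              for name, X in classes.items()}

    def classify(x):
        """Assign a site to the type whose mixture gives the highest log-likelihood."""
        return max(models, key=lambda name: models[name].score_samples(x[None, :])[0])

    print(classify(np.array([78.0, 1.9, 4.8])))  # -> "forest"
    ```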

  13. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during the analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  14. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background : The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods : First, we show how to specify prior distributions, and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results : Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion : We show that two issues often encountered during the analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
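
    A minimal sketch of the prior sensitivity check recommended in both versions of this abstract, using a conjugate normal-normal model so the posterior is available in closed form; the numbers are illustrative, not the burn-survivor data:

    ```python
    import numpy as np

    def posterior(y, prior_mean, prior_sd, sigma=1.0):
        """Posterior of a normal mean with known sigma under a normal prior."""
        n = len(y)
        precision = 1 / prior_sd**2 + n / sigma**2
        mean = (prior_mean / prior_sd**2 + y.sum() / sigma**2) / precision
        return mean, np.sqrt(1 / precision)

    y = np.array([0.9, 1.4, 0.7, 1.1])  # a very small sample (n = 4)

    # With small n, the posterior tracks the prior: report both fits side by side
    for label, (m0, s0) in {"informative": (1.0, 0.2), "diffuse": (0.0, 10.0)}.items():
        mean, sd = posterior(y, m0, s0)
        print(f"{label:11s} prior -> posterior mean {mean:.2f}, sd {sd:.2f}")
    ```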

  15. Validation of a particle tracking analysis method for the size determination of nano- and microparticles

    NASA Astrophysics Data System (ADS)

    Kestens, Vikram; Bozatzidis, Vassili; De Temmerman, Pieter-Jan; Ramaye, Yannic; Roebben, Gert

    2017-08-01

    Particle tracking analysis (PTA) is an emerging technique suitable for the size analysis of particles with external dimensions in the nano- and sub-micrometre range. Only limited attempts have so far been made to investigate and quantify the performance of the PTA method for particle size analysis. This article presents the results of a validation study during which selected colloidal silica and polystyrene latex reference materials with particle sizes in the range of 20 nm to 200 nm were analysed with NS500 and LM10-HSBF NanoSight instruments and video analysis software NTA 2.3 and NTA 3.0. Key performance characteristics such as working range, linearity, limit of detection, limit of quantification, sensitivity, robustness, precision and trueness were examined according to recommendations proposed by EURACHEM. A model for measurement uncertainty estimation following the principles described in ISO/IEC Guide 98-3 was used for quantifying random and systematic variations. For the nominal 50 nm and 100 nm polystyrene and nominal 80 nm silica reference materials, the relative expanded measurement uncertainties for the three measurands of interest, namely the mode, median and arithmetic mean of the number-weighted particle size distribution, varied from about 10% to 12%. For the nominal 50 nm polystyrene material, the relative expanded uncertainty of the arithmetic mean of the particle size distribution increased up to 18%, which was due to the presence of agglomerates. Of the two software versions, NTA 3.0 proved superior in terms of sensitivity and resolution.

  16. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
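
    The routine descriptive check the authors recommend is inexpensive; a minimal sketch on simulated scores with an artificial ceiling:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores = np.minimum(rng.normal(80, 15, 5000), 100).round()  # ceiling at 100

    print(f"skewness = {stats.skew(scores):.2f}")            # negative under a ceiling
    print(f"excess kurtosis = {stats.kurtosis(scores):.2f}")
    print(f"distinct values = {np.unique(scores).size}")     # a discreteness check
    ```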

  17. Temperature measurement and damage detection in concrete beams exposed to fire using PPP-BOTDA based fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Hoehler, Matthew S.; Smith, Christopher M.; Bundy, Matthew; Chen, Genda

    2017-10-01

    In this study, a Brillouin scattering-based distributed fiber optic sensor is implemented to measure temperature distributions and detect cracks in concrete structures subjected to fire for the first time. A telecommunication-grade optical fiber is characterized as a high-temperature sensor with pulse pre-pump Brillouin optical time domain analysis (PPP-BOTDA) and implemented to measure spatially distributed temperatures in reinforced concrete beams in fire. Four beams were tested to failure in a natural-gas-fueled compartment fire, each instrumented with one fused-silica, single-mode optical fiber as a distributed sensor and four thermocouples. Prior to concrete cracking, the distributed temperature was validated at the locations of the thermocouples to within a relative difference of less than 9%. Cracks in the concrete can be identified as sharp peaks in the temperature distribution, since the cracks are locally filled with hot air. Concrete cracking did not affect the sensitivity of the distributed sensor, but concrete spalling broke the optical fiber loop required for PPP-BOTDA measurements.

  18. Descriptive analysis of the masticatory and salivary functions and gustatory sensitivity in healthy children.

    PubMed

    Marquezin, Maria Carolina Salomé; Pedroni-Pereira, Aline; Araujo, Darlle Santos; Rosar, João Vicente; Barbosa, Taís S; Castelo, Paula Midori

    2016-08-01

    To better understand salivary and masticatory characteristics, this study evaluated the relationship among salivary parameters, bite force (BF), masticatory performance (MP) and gustatory sensitivity in healthy children. The secondary outcome was to evaluate possible gender differences. One hundred and sixteen eutrophic subjects aged 7-11 years were evaluated, caries-free and with no definite need of orthodontic treatment. Salivary flow rate and pH, and total protein (TP), alpha-amylase (AMY), calcium (CA) and phosphate (PHO) concentrations were determined in stimulated (SS) and unstimulated saliva (US). BF and MP were evaluated using a digital gnathodynamometer and the fractional sieving method, respectively. Gustatory sensitivity was determined by detecting the four primary tastes (sweet, salty, sour and bitter) at three different concentrations. Data were evaluated using descriptive statistics, Mann-Whitney/t-test, Spearman correlation and multiple regression analysis, considering α = 0.05. A significant positive correlation between taste and age was observed. CA and PHO concentrations correlated negatively with salivary flow and pH; sweet taste scores correlated with AMY concentrations, and bitter taste sensitivity correlated with US flow rate (p < 0.05). No significant difference between genders in salivary or masticatory characteristics or gustatory sensitivity was observed. The regression analysis showed a weak relationship between the distribution of chewed particles among the different sieves and BF. The concentration of some analytes was influenced by salivary flow and pH. Age, saliva flow and AMY concentrations influenced gustatory sensitivity. In addition, salivary, masticatory and taste characteristics did not differ between genders, and only a weak relation between MP and BF was observed.

  19. Allergenicity and cross-reactivity of booklice (Liposcelis bostrichophila): a common household insect pest in Japan.

    PubMed

    Fukutomi, Yuma; Kawakami, Yuji; Taniguchi, Masami; Saito, Akemi; Fukuda, Azumi; Yasueda, Hiroshi; Nakazawa, Takuya; Hasegawa, Maki; Nakamura, Hiroyuki; Akiyama, Kazuo

    2012-01-01

    Booklice (Liposcelis bostrichophila) are a common household insect pest distributed worldwide. In Japan in particular, they infest 'tatami' mats and are the most frequently detected insect in dust samples, present at a frequency of about 90%. Although it has been hypothesized that they are an important indoor allergen, studies on their allergenicity have been limited. To clarify the allergenicity of booklice and the cross-reactivity of this insect allergen with allergens of other insects, patients sensitized to booklice were identified from 185 Japanese adults with allergic asthma using skin tests and IgE-ELISA. IgE-inhibition analysis, immunoblotting and immunoblotting-inhibition analysis were performed using sera from these patients. Allergenic proteins contributing to specific sensitization to booklice were identified by two-dimensional electrophoresis and two-dimensional immunoblotting. Booklouse-specific IgE antibody was detected in sera from 41 patients (22% of studied patients). IgE-inhibition analysis revealed that IgE reactivity to the booklouse allergen in sera from one third of booklouse-sensitized patients was not inhibited by preincubation with extracts from any other environmental insect in this study. Immunoblotting identified a 26-kD protein from booklouse extract as the allergenic protein contributing to specific sensitization to booklice. The amino acid sequence of peptide fragments of this protein showed no homology to previously described allergenic proteins, indicating that it is a new allergen. Sensitization to booklice was relatively common, and specific sensitization to this insect, unrelated to insect panallergy, was indicated in this population. Copyright © 2011 S. Karger AG, Basel.

  20. Distributed representations in memory: Insights from functional brain imaging

    PubMed Central

    Rissman, Jesse; Wagner, Anthony D.

    2015-01-01

    Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly leveraged powerful analytical tools (e.g., multi-voxel pattern analysis) to decode the information represented within distributed fMRI activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content, and how leveraging the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses. PMID:21943171

  1. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most existing distributed algorithms focus mainly on algorithm design and convergence analysis, not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are established. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms, projection and asynchronous heterogeneous-stepsize optimization, guarantee that agents' privacy is effectively protected.
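
    To make the update structure concrete, here is a toy projected distributed subgradient iteration with heterogeneous stepsizes; the three-agent network, local objectives, and stepsize schedules are invented, and the sketch makes no attempt to reproduce the paper's asynchronous scheduling or privacy analysis.

        import numpy as np

        # three agents jointly minimize f(x) = sum_i |x - a_i| over the interval [-5, 5]
        a = np.array([1.0, 2.0, 4.0])
        W = np.array([[0.50, 0.50, 0.00],   # doubly stochastic mixing matrix
                      [0.50, 0.25, 0.25],
                      [0.00, 0.25, 0.75]])
        x = np.zeros(3)                     # one local estimate per agent

        for k in range(1, 2001):
            g = np.sign(x - a)                         # local subgradients of |x - a_i|
            steps = 1.0 / (k + np.array([0, 10, 50]))  # heterogeneous diminishing stepsizes
            x = W @ x - steps * g                      # consensus mixing + subgradient step
            x = np.clip(x, -5.0, 5.0)                  # projection onto the constraint set

        print(x)  # all agents approach the median of a, the minimizer (x = 2)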

  2. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
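
    A sketch of the core simulation loop, drawing Weibull lifetimes for a population of units and sweeping the wear-out (shape) parameter; the scale, population size, spares count, and horizon are invented placeholders rather than ISS values.

        import numpy as np

        rng = np.random.default_rng(1)
        eta = 8.0                                 # hypothetical Weibull scale (characteristic life, years)
        population, spares, horizon = 20, 5, 5.0  # units in service, spares on hand, years simulated

        def failures(beta):
            # draw a lifetime for every unit and count failures within the horizon
            lifetimes = eta * rng.weibull(beta, size=population)
            return np.sum(lifetimes < horizon)

        for beta in (1.0, 2.0, 4.0):              # sweep the wear-out characteristic (shape)
            runs = np.array([failures(beta) for _ in range(10_000)])
            print(f"shape {beta}: P(spares sufficient) = {np.mean(runs <= spares):.3f}")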

  3. Comparative analysis of sixteen flavonoids from different parts of Sophora flavescens Ait. by ultra high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Weng, Zebin; Zeng, Fei; Zhu, Zhenhua; Qian, Dawei; Guo, Sheng; Wang, Hanqing; Duan, Jin-Ao

    2018-07-15

    The root of Sophora flavescens Ait. has been used as a crude drug in China and other Asian countries for thousands of years. Quinolizidine alkaloids and flavonoids are considered the main bioactive components of this plant. To determine the distribution and content of flavonoids in different organs of this plant, a rapid, sensitive and reproducible method was established using ultra-high-performance liquid chromatography coupled with triple quadrupole electrospray tandem mass spectrometry. A total of sixteen flavonoids of five different types (isoflavones, pterocarpans, flavones, flavonols and prenylflavonoids) were simultaneously determined in 10 min. The established method was fully validated in terms of linearity, sensitivity, precision, repeatability as well as recovery, and successfully applied to methanolic extracts of S. flavescens parts (root, stem, leaf, pod and seed). The results indicated that the distribution and contents of the different types of flavonoids showed remarkable differences among the five organs of S. flavescens. This study might be useful for the rational utilization of S. flavescens resources. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Acute toxicity value extrapolation with fish and aquatic invertebrates

    USGS Publications Warehouse

    Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha

    2005-01-01

    Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are.
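
    The single-species extrapolation described above reduces to a multiplication by the reported factor; a small sketch, with a hypothetical trout LC50 as input.

        # estimate a community-protective concentration from a single-species acute value,
        # using the report's extrapolation factors (rainbow trout 0.412, bluegill 0.331, scud 0.041)
        FACTORS = {"rainbow_trout": 0.412, "bluegill": 0.331, "scud": 0.041}

        def protective_estimate(species: str, lc50_ug_per_l: float) -> float:
            return FACTORS[species] * lc50_ug_per_l

        # hypothetical rainbow trout LC50 of 12 ug/L -> 4.94 ug/L protective estimate
        print(protective_estimate("rainbow_trout", 12.0))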

  5. Thermo-sensitive polymer nanospheres as a smart plugging agent for shale gas drilling operations.

    PubMed

    Wang, Wei-Ji; Qiu, Zheng-Song; Zhong, Han-Yi; Huang, Wei-An; Dai, Wen-Hao

    2017-01-01

    Emulsifier-free poly(methyl methacrylate-styrene) [P(MMA-St)] nanospheres with an average particle size of 100 nm were synthesized in an isopropyl alcohol-water medium by a solvothermal method. Then, through radical graft copolymerization of the thermo-sensitive monomer N-isopropylacrylamide (NIPAm) and the hydrophilic monomer acrylic acid (AA) onto the surface of P(MMA-St) nanospheres at 80 °C, a series of thermo-sensitive polymer nanospheres, named SD-SEAL, with different lower critical solution temperatures (LCSTs) were prepared by adjusting the mole ratio of NIPAm to AA. The products were characterized by Fourier transform infrared spectroscopy, transmission electron microscopy, thermogravimetric analysis, particle size distribution, and specific surface area analysis. The temperature-sensitive behavior was studied by light transmittance tests, while the sealing performance was investigated by pressure transmission tests with Lungmachi Formation shales. The experimental results showed that the synthesized nanoparticles were sensitive to temperature and had well-defined LCST values, which increased with increasing content of the hydrophilic monomer AA. When the temperature was higher than the LCST, SD-SEAL played a dual role of physical plugging and chemical inhibition, slowing pressure transmission and remarkably reducing shale permeability. The plugged shale layer became hydrophobic, which greatly improved shale stability.

  6. Optimal Output of Distributed Generation Based On Complex Power Increment

    NASA Astrophysics Data System (ADS)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power, photovoltaic power, etc., has been widely adopted. New energy generation is connected to the distribution network in the form of distributed generation and is consumed by local loads. However, as the scale of distributed generation connected to the network grows, the optimization of its power output becomes more and more important and requires further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, the essence of which is the analysis of the power grid under steady-state power flow. From this analysis we obtain the complex scaling function equation relating the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the relation of the variables to the coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  7. Cost-effectiveness analysis of EGFR mutation testing in patients with non-small cell lung cancer (NSCLC) with gefitinib or carboplatin-paclitaxel.

    PubMed

    Arrieta, Oscar; Anaya, Pablo; Morales-Oyarvide, Vicente; Ramírez-Tirado, Laura Alejandra; Polanco, Ana C

    2016-09-01

    Assess the cost-effectiveness of an EGFR-mutation testing strategy for advanced NSCLC in first-line therapy with either gefitinib or carboplatin-paclitaxel in Mexican institutions. Cost-effectiveness analysis using a discrete event simulation (DES) model to simulate two therapeutic strategies in patients with advanced NSCLC. Strategy one included patients tested for EGFR-mutation and therapy given accordingly. Strategy two included chemotherapy for all patients without testing. All results are presented in 2014 US dollars. The analysis used data on the Mexican frequency of EGFR-mutation. A univariate sensitivity analysis was conducted on EGFR prevalence. Progression-free survival (PFS) transition probabilities were estimated from IPASS data and simulated with a Weibull distribution, run with parallel trials to calculate a probabilistic sensitivity analysis. PFS of patients in the testing strategy was 6.76 months (95% CI 6.10-7.44) vs 5.85 months (95% CI 5.43-6.29) in the non-testing group. The one-way sensitivity analysis showed that PFS has a direct relationship with EGFR-mutation prevalence, while the ICER and testing cost have an inverse relationship with EGFR-mutation prevalence. The probabilistic sensitivity analysis showed that all iterations had incremental costs and incremental PFS for strategy one in comparison with strategy two. There is a direct relationship between the ICER and the cost of EGFR testing, and an inverse relationship with the prevalence of EGFR-mutation; when prevalence is >10%, the ICER remains constant. This study could impact Mexican and Latin American health policies regarding mutation detection testing and treatment for advanced NSCLC.
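
    As a sketch of the probabilistic machinery described (Weibull PFS draws under a tested versus untested strategy), the following uses invented Weibull parameters and prevalence, not the IPASS fits or the Mexican data.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000                               # simulated patients per strategy

        def pfs_months(shape, scale, size):
            # hypothetical Weibull progression-free survival times, in months
            return scale * rng.weibull(shape, size)

        prevalence = 0.35                         # EGFR-mutation frequency, the swept parameter
        mutated = rng.random(n) < prevalence

        # strategy 1: test, gefitinib if mutation-positive, chemotherapy otherwise
        s1 = np.where(mutated, pfs_months(1.3, 11.0, n), pfs_months(1.2, 6.0, n))
        # strategy 2: chemotherapy for all, no testing
        s2 = pfs_months(1.2, 6.0, n)

        print(f"mean PFS, test-and-treat: {s1.mean():.2f} months")
        print(f"mean PFS, chemo for all:  {s2.mean():.2f} months")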

  8. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    PubMed Central

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of these frequencies form a reasonable modal distribution and are well separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor for acceleration is 12.35 Hz/g, the scale factor for angular velocity is 0.404 nm/deg/s, and the sensitivity for air flow is 1.075 Hz/(m/s)², which verifies the multifunctional sensing characteristics of the hair sensor. Structural optimization of the hair post is used to improve the sensitivities to air flow rate and acceleration: the analysis shows that a hollow circular hair post increases the air flow sensitivity and an II-shaped hair post increases the acceleration sensitivity. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer largely eliminates the influence of temperature on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is shown to be feasible by comprehensive simulation and analysis. PMID:27399716

  9. Peri-muscular adipose tissue may play a unique role in determining insulin sensitivity/resistance in women with polycystic ovary syndrome.

    PubMed

    Morrison, Shannon A; Goss, Amy M; Azziz, Ricardo; Raju, Dheeraj A; Gower, Barbara A

    2017-01-01

    Do the determinants of insulin sensitivity/resistance differ in women with and without polycystic ovary syndrome (PCOS)? Peri-muscular thigh adipose tissue is uniquely associated with insulin sensitivity/resistance in women with PCOS, whereas adiponectin and thigh subcutaneous adipose are the main correlates of insulin sensitivity/resistance in women without PCOS. In subject populations without PCOS, insulin sensitivity/resistance is determined by body fat distribution and circulating concentrations of hormones and pro-inflammatory mediators. Specifically, visceral (intra-abdominal) adipose tissue mass is adversely associated with insulin sensitivity, whereas thigh subcutaneous adipose appears protective against metabolic disease. Adiponectin is an insulin-sensitizing hormone produced by healthy subcutaneous adipose that may mediate the protective effect of thigh subcutaneous adipose. Testosterone, which is elevated in PCOS, may have an adverse effect on insulin sensitivity/resistance. Cross-sectional study of 30 women with PCOS and 38 women without PCOS; data were collected between 2007 and 2011. Participants were group-matched for obesity, as reflected in BMI (Mean ± SD; PCOS: 31.8 ± 6.0 kg/m²; without PCOS: 31.5 ± 5.0 kg/m²). The whole-body insulin sensitivity index (WBISI) was assessed using a mixed-meal tolerance test; Homeostasis Model Assessment-Insulin resistance (HOMA-IR) was determined from fasting insulin and glucose values. Adipose tissue distribution was determined by computed tomography (CT) scan. Partial correlation analysis, adjusting for total fat mass, was used to identify correlates of WBISI and HOMA-IR within each group of women from measures of body composition, body fat distribution, reproductive-endocrine hormones and adipokines/cytokines. Stepwise multiple linear regression analysis was used to identify the variables that best predicted WBISI and HOMA-IR. Among women with PCOS, both WBISI and HOMA-IR were best predicted by peri-muscular adipose tissue cross-sectional area. Among women without PCOS, both WBISI and HOMA-IR were best predicted by adiponectin and thigh subcutaneous adipose tissue. Small sample size, group matching for BMI and age, and the use of surrogate measures of insulin sensitivity/resistance. Because insulin resistance is the root cause of obesity and comorbidities in PCOS, determining its cause could lead to potential therapies. Present results suggest that peri-muscular adipose tissue may play a unique role in determining insulin sensitivity/resistance in women with PCOS. Interventions such as restriction of dietary carbohydrates that have been shown to selectively reduce fatty infiltration of skeletal muscle may decrease the risk for type 2 diabetes in women with PCOS. The study was supported by National Institutes of Health grants R01HD054960, R01DK67538, P30DK56336, P60DK079626, M014RR00032 and UL1RR025777. The authors have no conflicts of interest. NCT00726908. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. PCB congener analysis with Hall electrolytic conductivity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edstrom, R.D.

    1989-01-01

    This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples during the fall of 1985 collected from the lower James River and lower Chesapeake Bay.

  11. PISMA: A Visual Representation of Motif Distribution in DNA Sequences.

    PubMed

    Alcántara-Silva, Rogelio; Alvarado-Hermida, Moisés; Díaz-Contreras, Gibrán; Sánchez-Barrios, Martha; Carrera, Samantha; Galván, Silvia Carolina

    2017-01-01

    Because the graphical presentation and analysis of motif distribution can provide insights for experimental hypotheses, PISMA aims at identifying motifs on DNA sequences, counting them, and showing them graphically. The motif length ranges from 2 to 10 bases, and the DNA sequences range up to 10 kb. The motif distribution is shown as a bar-code-like scheme, a gene-map-like scheme, and a transcript scheme. We obtained graphical schemes of the CpG site distribution from 91 human papillomavirus genomes. Also, we present 2 analyses: one of DNA motifs associated with either methylation-resistant or methylation-sensitive CpG islands and another analysis of motifs associated with exosome RNA secretion. PISMA is developed in Java; it is executable in any type of hardware and in diverse operating systems. PISMA is freely available to noncommercial users. The English version and the User Manual are provided in Supplementary Files 1 and 2, and a Spanish version is available at www.biomedicas.unam.mx/wp-content/software/pisma.zip and www.biomedicas.unam.mx/wp-content/pdf/manual/pisma.pdf.

  12. Evolutionary dynamics of taxonomic structure

    PubMed Central

    Foote, Michael

    2012-01-01

    The distribution of species among genera and higher taxa has largely untapped potential to reveal among-clade variation in rates of origination and extinction. The probability distribution of the number of species within a genus is modelled with a stochastic, time-homogeneous birth–death model having two parameters: the rate of species extinction, μ, and the rate of genus origination, γ, each scaled as a multiple of the rate of within-genus speciation, λ. The distribution is more sensitive to γ than to μ, although μ affects the size of the largest genera. The species : genus ratio depends strongly on both γ and μ, and so is not a good diagnostic of evolutionary dynamics. The proportion of monotypic genera, however, depends mainly on γ, and so may provide an index of the genus origination rate. Application to living marine molluscs of New Zealand shows that bivalves have a higher relative rate of genus origination than gastropods. This is supported by the analysis of palaeontological data. This concordance suggests that analysis of living taxonomic distributions may allow inference of macroevolutionary dynamics even without a fossil record. PMID:21865239
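
    The flavour of the birth-death model can be conveyed with a loose Monte Carlo sketch: seed genera at rate γ, grow each by within-genus speciation at rate λ = 1, remove species at rate μ, and tabulate extant genus sizes. The exponential genus-age assumption and the subcritical parameter choices are simplifications for illustration, not the authors' analytical treatment.

        import numpy as np

        rng = np.random.default_rng(7)

        def genus_size(age, lam=1.0, mu=1.1):
            # species count after 'age' time units of within-genus speciation (rate lam per
            # species) and species extinction (rate mu per species), seeded with one species
            n, t = 1, 0.0
            while n > 0:
                t += rng.exponential(1.0 / (n * (lam + mu)))
                if t > age:
                    break
                n += 1 if rng.random() < lam / (lam + mu) else -1
            return n

        gamma = 0.3                                # genus origination rate, sets the age distribution
        ages = rng.exponential(1.0 / gamma, size=20_000)
        sizes = np.array([genus_size(a) for a in ages])
        sizes = sizes[sizes > 0]                   # keep extant genera only

        print("species:genus ratio  =", round(sizes.mean(), 2))
        print("proportion monotypic =", round(np.mean(sizes == 1), 2))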

  13. The methane distribution on Titan: high resolution spectroscopy in the near-IR with Keck NIRSPEC/AO

    NASA Astrophysics Data System (ADS)

    Adamkovics, Mate; Mitchell, Jonathan L.

    2014-11-01

    The distribution of methane on Titan is a diagnostic of regional scale meteorology and large scale atmospheric circulation. The observed formation of clouds and the transport of heat through the atmosphere both depend on spatial and temporal variations in methane humidity. We have performed observations to measure the distribution of methane on Titan using high spectral resolution near-IR (H-band) observations made with NIRSPEC, with adaptive optics, at Keck Observatory in July 2014. This work builds on previous attempts at this measurement, with improvements in the observing protocol and data reduction, together with increased integration times. Radiative transfer models using line-by-line calculation of methane opacities from the HITRAN2012 database are used to retrieve methane abundances. We will describe analysis of the reduced observations, which show latitudinal spatial variation in the region of the spectrum that is thought to be sensitive to methane abundance. Quantifying the methane abundance variation requires models that include the spatial variation in surface albedo and the meridional haze gradient; we will describe (currently preliminary) analysis of the methane distribution and uncertainties in the retrieval.

  14. PISMA: A Visual Representation of Motif Distribution in DNA Sequences

    PubMed Central

    Alcántara-Silva, Rogelio; Alvarado-Hermida, Moisés; Díaz-Contreras, Gibrán; Sánchez-Barrios, Martha; Carrera, Samantha; Galván, Silvia Carolina

    2017-01-01

    Background: Because the graphical presentation and analysis of motif distribution can provide insights for experimental hypotheses, PISMA aims at identifying motifs on DNA sequences, counting them, and showing them graphically. The motif length ranges from 2 to 10 bases, and the DNA sequences range up to 10 kb. The motif distribution is shown as a bar-code-like scheme, a gene-map-like scheme, and a transcript scheme. Results: We obtained graphical schemes of the CpG site distribution from 91 human papillomavirus genomes. Also, we present 2 analyses: one of DNA motifs associated with either methylation-resistant or methylation-sensitive CpG islands and another analysis of motifs associated with exosome RNA secretion. Availability and Implementation: PISMA is developed in Java; it is executable in any type of hardware and in diverse operating systems. PISMA is freely available to noncommercial users. The English version and the User Manual are provided in Supplementary Files 1 and 2, and a Spanish version is available at www.biomedicas.unam.mx/wp-content/software/pisma.zip and www.biomedicas.unam.mx/wp-content/pdf/manual/pisma.pdf. PMID:28469418

  15. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats.

    PubMed

    Pérez-Rodríguez, F; van Asselt, E D; Garcia-Gimeno, R M; Zurera, G; Zwietering, M H

    2007-05-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and stakeholders to make decisions on food safety management. The present study was conducted to investigate how detailed sensitivity analysis can be used by assessors to extract more information on risk factors and how results can be communicated to managers and stakeholders in an understandable way. The extended sensitivity analysis revealed that the extremes at the right side of the dose distribution (at consumption, 9 to 11.5 log CFU per serving) were responsible for most of the cases of listeriosis simulated. For concentration at retail, values below the detection limit of 0.04 CFU/g and the often used limit for L. monocytogenes of 100 CFU/g (also at retail) were associated with a high number of annual cases of listeriosis (about 29 and 82%, respectively). This association can be explained by growth of L. monocytogenes at both average and extreme values of temperature and time, indicating that a wide distribution can lead to high risk levels. Another finding is the importance of the maximal population density (i.e., the maximum concentration of L. monocytogenes assumed at a certain temperature) for accurately estimating the risk of infection by opportunistic pathogens such as L. monocytogenes. According to the obtained results, mainly concentrations corresponding to the highest maximal population densities caused risk in the simulation. However, sensitivity analysis applied to the uncertainty parameters revealed that prevalence at retail was the most important source of uncertainty in the model.
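
    A schematic Monte Carlo shows why the right tail dominates simulated cases: lognormal retail concentrations, storage growth capped at the maximal population density, and a low-dose-linear exponential dose-response. Every number below is invented for illustration, not taken from the FDA assessment.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1_000_000                                  # simulated servings

        log_retail = rng.normal(-2.0, 1.5, n)          # log10 CFU/g at retail (invented)
        growth = rng.uniform(0.0, 6.0, n)              # log10 growth during storage (invented)
        mpd = 8.0                                      # maximal population density, log10 CFU/g
        serving_g = 50.0

        # growth is capped at the maximal population density before computing the dose
        dose = 10.0 ** np.minimum(log_retail + growth, mpd) * serving_g

        r = 1e-12                                      # illustrative exponential dose-response slope
        p_ill = 1.0 - np.exp(-r * dose)

        tail = dose >= 10.0 ** 9                       # servings at 9+ log CFU
        print("share of servings in the 9+ log tail:", tail.mean())
        print("share of expected cases from that tail:", p_ill[tail].sum() / p_ill.sum())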

  16. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS).

    PubMed

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS), which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The sensor is fabricated on a single silicon substrate using anisotropic chemical etching and packaged through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also high sensitivity and accuracy.

  17. Interfacial stress state present in a 'thin-slice' fibre push-out test

    NASA Technical Reports Server (NTRS)

    Kallas, M. N.; Koss, D. A.; Hahn, H. T.; Hellmann, J. R.

    1992-01-01

    An analysis of the stress distributions along the fiber-matrix interface in a 'thin-slice' fiber push-out test is presented for selected test geometries. For the small specimen thicknesses often required to displace large-diameter fibers with high interfacial shear strengths, finite element analysis indicates that large bending stresses may be present. The magnitude of these stresses and their spatial distribution can be very sensitive to the test configuration. For certain test geometries, the specimen configuration itself may alter the interfacial failure process from one which initiates due to a maximum in shear stress near the top surface adjacent to the indentor, to one which involves mixed mode crack growth up from the bottom surface and/or yielding within the matrix near the interface.

  18. Simulation of probabilistic wind loads and building analysis

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also considered as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.

  19. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  20. Frequency analysis for modulation-enhanced powder diffraction.

    PubMed

    Chernyshov, Dmitry; Dyadkin, Vadim; van Beek, Wouter; Urakawa, Atsushi

    2016-07-01

    Periodic modulation of external conditions on a crystalline sample with a consequent analysis of periodic diffraction response has been recently proposed as a tool to enhance experimental sensitivity for minor structural changes. Here the intensity distributions for both a linear and nonlinear structural response induced by a symmetric and periodic stimulus are analysed. The analysis is further extended for powder diffraction when an external perturbation changes not only the intensity of Bragg lines but also their positions. The derived results should serve as a basis for a quantitative modelling of modulation-enhanced diffraction data measured in real conditions.

  1. The topology of card transaction money flows

    NASA Astrophysics Data System (ADS)

    Zanin, Massimiliano; Papo, David; Romance, Miguel; Criado, Regino; Moral, Santiago

    2016-11-01

    Money flow models are essential tools to understand different economical phenomena, like saving propensities and wealth distributions. In spite of their importance, most of them are based on synthetic transaction networks with simple topologies, e.g. random or scale-free ones, as the characterisation of real networks is made difficult by the confidentiality and sensitivity of money transaction data. Here, we present an analysis of the topology created by real credit card transactions from one of the biggest world banks, and show how different distributions, e.g. number of transactions per card or amount, have nontrivial characteristics. We further describe a stochastic model to create transactions data sets, feeding from the obtained distributions, which will allow researchers to create more realistic money flow models.
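
    The final modelling step, feeding a synthetic generator from fitted distributions, can be sketched as below; the log-normal form and parameters are placeholders for the empirical card-transaction distributions reported in the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        # placeholder fit: transactions per card drawn from a discretized log-normal
        mu, sigma = 1.2, 1.0
        n_cards = 100_000
        tx_per_card = np.maximum(1, rng.lognormal(mu, sigma, n_cards).astype(int))

        print("mean transactions per card:", round(tx_per_card.mean(), 2))
        print("99th percentile:", np.percentile(tx_per_card, 99))
        # a synthetic transaction network would then wire cards to merchants with
        # degrees drawn this way, instead of assuming a random or scale-free topology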

  2. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  3. Analysis of scattering statistics and governing distribution functions in optical coherence tomography.

    PubMed

    Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-07-01

    The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
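
    One concrete way to work with the K-distribution is through its compound representation: an exponential (speckle) intensity whose local mean is itself gamma distributed. The sketch below samples intensities this way and checks the resulting non-Gaussian statistics; the shape parameter is arbitrary.

        import numpy as np

        rng = np.random.default_rng(11)

        alpha = 2.5              # K-distribution shape; smaller values = fewer effective scatterers
        theta = 1.0 / alpha      # gamma scale chosen so the mean intensity equals 1

        s = rng.gamma(alpha, theta, size=200_000)  # fluctuating local mean intensity
        I = rng.exponential(s)                     # speckle intensity given that local mean

        print("mean intensity      :", round(I.mean(), 3))                 # ~1
        print("normalized variance :", round(I.var() / I.mean() ** 2, 3))  # >1, unlike Gaussian speckle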

  4. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Voss, Thomas

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The distributions of moments exhibit very different scales; thus, the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by the respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
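
    The preprocessing described, projection onto principal components followed by rescaling by the component variances, is essentially whitening; a sketch with scikit-learn, using random numbers as a stand-in for the Zernike-moment features.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # stand-in Zernike-moment features on wildly different scales
        moments = rng.normal(0.0, [100.0, 1.0, 0.01], size=(500, 3))

        # whiten=True rotates onto principal components and rescales each by its variance
        pca = PCA(whiten=True)
        net_input = pca.fit_transform(moments)

        print(net_input.std(axis=0))  # every component now of order one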

  5. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model forced with rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degree and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model constructed using the same base data and forced with the VCSN precipitation dataset. Comparison of the rainfall products shows significant differences in precipitation volume between the forcing products, in the order of 20% at most points. Even more significant differences can be seen, however, in the distribution of precipitation. For the VCSN data, wet days (defined as >0.1 mm precipitation) occur on some 20-30% of days (depending on location). This is reasonably reflected in the TRMM and CHIRPS data, while for the re-analysis based products some 60% to 80% of days are wet, albeit at lower intensities. These differences are amplified in the recharge estimates. At most points, volumetric differences are in the order of 40-60%, though differences may range over several orders of magnitude. The frequency distributions of recharge also differ significantly, with recharge over 0.1 mm occurring on 4-6% of days for the VCSN, CHIRPS, and TRMM datasets, but up to the order of 12% of days for the re-analysis data. Comparison against the lysimeter data shows the estimates to be reasonable, in particular for the reference datasets. Surprisingly, some estimates from the lower-resolution re-analysis datasets are also reasonable, though this seems to be due to lower recharge per event being compensated by recharge occurring more frequently. These results underline the importance of correctly representing rainfall volumes as well as their distribution, particularly when evaluating possible changes in precipitation intensity and volume. This holds for precipitation data derived from satellite-based and re-analysis products, but also for interpolated data from gauges, where the distribution of intensities is strongly influenced by the interpolation process.
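
    A toy daily bucket model illustrates the mechanism: with total volume held fixed, recharge changes substantially as the same rainfall is spread over more or fewer wet days. All parameters (PET, store capacity, volumes) are invented.

        import numpy as np

        rng = np.random.default_rng(8)
        days, annual_p = 365, 600.0                   # one year, 600 mm total precipitation

        def recharge(wet_frac, pet=2.0, capacity=100.0):
            # daily bucket model: rain fills the soil store, evaporation empties it,
            # and anything spilling over the store capacity becomes recharge
            wet = rng.random(days) < wet_frac
            rain = np.where(wet, annual_p / max(wet.sum(), 1), 0.0)  # same volume, different intensity
            store, total = 50.0, 0.0
            for p in rain:
                store = max(store + p - pet, 0.0)
                if store > capacity:
                    total += store - capacity
                    store = capacity
            return total

        print("recharge with 25% wet days:", round(recharge(0.25), 1), "mm")
        print("recharge with 70% wet days:", round(recharge(0.70), 1), "mm")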

  6. The orthographic sensitivity to written Chinese in the occipital-temporal cortex.

    PubMed

    Liu, Haicheng; Jiang, Yi; Zhang, Bo; Ma, Lifei; He, Sheng; Weng, Xuchu

    2013-06-01

    Previous studies have identified an area in the left lateral fusiform cortex that is highly responsive to written words and has been named the visual word form area (VWFA). However, there is disagreement on the specific functional role of this area in word recognition. Chinese characters, which are dramatically different from Roman alphabets in the visual form and in the form to phonological mapping, provide a unique opportunity to investigate the properties of the VWFA. Specifically, to clarify the orthographic sensitivity in the mid-fusiform cortex, we compared fMRI response amplitudes (Exp. 1) as well as the spatial patterns of response across multiple voxels (Exp. 2) between Chinese characters and stimuli derived from Chinese characters with different orthographic properties. The fMRI response amplitude results suggest the existence of orthographic sensitivity in the VWFA. The results from multi-voxel pattern analysis indicate that spatial distribution of the responses across voxels in the occipitotemporal cortex contained discriminative information between the different types of character-related stimuli. These results together suggest that the orthographic rules are likely represented in a distributed neural network with the VWFA containing the most specific information regarding a stimulus' orthographic regularity.

  7. Cost analysis of a school-based comprehensive malaria program in primary schools in Sikasso region, Mali.

    PubMed

    Maccario, Roberta; Rouhani, Saba; Drake, Tom; Nagy, Annie; Bamadio, Modibo; Diarra, Seybou; Djanken, Souleymane; Roschnik, Natalie; Clarke, Siân E; Sacko, Moussa; Brooker, Simon; Thuilliez, Josselin

    2017-06-12

    The expansion of malaria prevention and control to school-aged children is receiving increasing attention, but there are still limited data on the costs of such interventions. This paper analyses the costs of a comprehensive school-based intervention strategy, delivered by teachers, that included participatory malaria educational activities, distribution of long lasting insecticide-treated nets (LLIN), and Intermittent Parasite Clearance in schools (IPCs) in southern Mali. Costs were collected alongside a randomised controlled trial conducted in 80 primary schools in Sikasso Region in Mali in 2010-2012. Cost data were compiled between November 2011 and March 2012 for the 40 intervention schools (6413 children). A provider perspective was adopted. Using an ingredients approach, costs were classified by cost category and by activity. Total costs and cost per child were estimated for the actual intervention, as well as for a simpler version of the programme more suited for scale-up by the government. Univariate sensitivity analysis was performed. The economic cost of the comprehensive intervention was estimated at $10.38 per child (financial cost $8.41), with malaria education, LLIN distribution and IPCs costing $2.13 (20.5%), $5.53 (53.3%) and $2.72 (26.2%) per child, respectively. Human resources were found to be the key cost driver, and training costs were the greatest contributor to overall programme costs. Sensitivity analysis showed that an adapted intervention delivering one LLIN instead of two would lower the economic cost to $8.66 per child, and that excluding LLIN distribution in schools altogether, for example in settings where malaria control already includes universal distribution of LLINs at community level, would reduce costs to $4.89 per child. A comprehensive school-based control strategy may be a feasible and affordable way to address the burden of malaria among schoolchildren in the Sahel.

  8. A probabilistic model for deriving soil quality criteria based on secondary poisoning of top predators. I. Model description and uncertainty analysis.

    PubMed

    Traas, T P; Luttik, R; Jongbloed, R H

    1996-08-01

    In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample of a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty can be ascribed to uncertainty in species sensitivity as expressed by NOECs. A very small proportion of model uncertainty is contributed by BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is quite large; to avoid unethical toxicity testing with mammalian or avian predators, using this uncertainty in the proposed method of calculating MPC distributions cannot be avoided. The fifth percentile of the MPC is suggested as a safe value for top predators.
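
    The MPC calculation itself is a short Monte Carlo: draw NOECs and BAFs from their fitted log-logistic distributions, divide, and read off the fifth percentile. SciPy's fisk is the log-logistic; the parameters below are placeholders, not the paper's fits.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2024)
        n = 100_000

        # placeholder log-logistic fits (scipy's 'fisk'), not the paper's fitted parameters
        noec = stats.fisk.rvs(c=2.0, scale=10.0, size=n, random_state=rng)  # corrected NOECs
        baf = stats.fisk.rvs(c=3.0, scale=5.0, size=n, random_state=rng)    # bioaccumulation factors

        mpc = noec / baf                        # one MPC per Monte Carlo draw
        print("MPC5 (5th percentile):", round(np.percentile(mpc, 5), 3))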

  9. Developmental and hormonal regulation of thermosensitive neuron potential activity in rat brain.

    PubMed

    Belugin, S; Akino, K; Takamura, N; Mine, M; Romanovsky, D; Fedoseev, V; Kubarko, A; Kosaka, M; Yamashita, S

    1999-08-01

    To understand the involvement of thyroid hormone in the postnatal development of hypothalamic thermosensitive neurons, we focused on the analysis of thermosensitive neuronal activity in the preoptic and anterior hypothalamic (PO/AH) regions of developing rats with and without hypothyroidism. In euthyroid rats, the distribution of thermosensitive neurons in PO/AH showed that in 3-week-old rats (46 neurons tested), 19.5% were warm-sensitive and 80.5% were nonsensitive. In 5- to 12-week-old euthyroid rats (122 neurons), 33.6% were warm-sensitive and 66.4% were nonsensitive. In 5- to 12-week-old hypothyroid rats (108 neurons), however, 18.5% were warm-sensitive and 81.5% were nonsensitive. Temperature thresholds of warm-sensitive neurons were lower in 12-week-old euthyroid rats (36.4 ± 0.2 °C, n = 15, p < 0.01) than in 3-week-old and 5-week-old euthyroid rats (38.5 ± 0.5 °C, n = 9 and 38.0 ± 0.3 °C, n = 15, respectively). The temperature thresholds of warm-sensitive neurons in 12-week-old hypothyroid rats (39.5 ± 0.3 °C, n = 8) were similar to those of warm-sensitive neurons of 3-week-old rats (euthyroid and hypothyroid). In contrast, there was no difference in the thresholds of warm-sensitive neurons between hypothyroid and euthyroid rats at the age of 3-5 weeks. In conclusion, electrophysiological monitoring of thermosensitive neuronal activity provided evidence that thyroid hormone regulates the maturation of warm-sensitive hypothalamic neurons in the developing rat brain.

  10. The impact of global warming on the range distribution of different climatic groups of Aspidoscelis costata costata.

    PubMed

    Güizado-Rodríguez, Martha Anahí; Ballesteros-Barrera, Claudia; Casas-Andreu, Gustavo; Barradas-Miranda, Victor Luis; Téllez-Valdés, Oswaldo; Salgado-Ugarte, Isaías Hazarmabeth

    2012-12-01

    The ectothermic nature of reptiles makes them especially sensitive to global warming. Although climate change and its implications are a frequent topic of detailed studies, most of these studies are carried out without making a distinction between populations. Here we present the first study of an Aspidoscelis species that evaluates the effects of global warming on its distribution using ecological niche modeling. The aims of our study were (1) to understand whether predicted warmer climatic conditions affect the geographic potential distribution of different climatic groups of Aspidoscelis costata costata and (2) to identify potential altitudinal changes of these groups under global warming. We used the maximum entropy species distribution model (MaxEnt) to project the potential distributions expected for the years 2020, 2050, and 2080 under a single simulated climatic scenario. Our analysis suggests that some climatic groups of Aspidoscelis costata costata will exhibit reductions in their distribution while others will exhibit expansions, with potential upward shifts toward higher elevations in response to climate warming. The different climatic groups revealed in our analysis showed heterogeneous responses to climatic change, illustrating the complex nature of species' geographic responses to environmental change and the importance of modeling climatic or geographic groups and/or populations instead of the entire species' range treated as a homogeneous entity.

  11. Ictalurids in Iowa’s streams and rivers: Status, distribution, and relationships with biotic integrity

    USGS Publications Warehouse

    Sindt, Anthony R.; Fischer, Jesse R.; Quist, Michael C.; Pierce, Clay

    2011-01-01

    Anthropogenic alterations to Iowa’s landscape have greatly altered lotic systems with consequent effects on the biodiversity of freshwater fauna. Ictalurids are a diverse group of fishes and play an important ecological role in aquatic ecosystems. However, little is known about their distribution and status in lotic systems throughout Iowa. The purpose of this study was to describe the distribution of ictalurids in Iowa and examine their relationship with ecological integrity of streams and rivers. Historical data (i.e., 1884–2002) compiled for the Iowa Aquatic Gap Analysis Project (IAGAP) were used to detect declines in the distribution of ictalurids in Iowa streams and rivers at stream segment and watershed scales. Eight variables characterizing ictalurid assemblages were used to evaluate relationships with index of biotic integrity (IBI) ratings. Comparisons of recent and historic data from the IAGAP database indicated that 9 of Iowa’s 10 ictalurid species experienced distribution declines at one or more spatial scales. Analysis of variance indicated that ictalurid assemblages differed among samples with different IBI ratings. Specifically, total ictalurid, sensitive ictalurid, and Noturus spp. richness increased as IBI ratings increased. Results indicate declining ictalurid species distributions and biotic integrity are related, and management strategies aimed to improve habitat and increase biotic integrity will benefit ictalurid species.

  12. A statistical approach to nuclear fuel design and performance

    NASA Astrophysics Data System (ADS)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss of coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
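
    A compact sketch of the workflow's core, sampling fitted input distributions and ranking inputs by a simple variance-share measure; the distributions and the linear "performance model" are toy stand-ins for the Cameco data and the ELESTRES code.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000

        # placeholder normal fits to manufacturing data (not Cameco's actual distributions)
        density = rng.normal(10.6, 0.05, n)   # pellet density, g/cm^3
        grain = rng.normal(12.0, 1.5, n)      # pellet grain size, um
        dish = rng.normal(0.30, 0.02, n)      # dish depth, mm

        # toy linear stand-in for the fuel performance code's output quantity
        output = 30.0 * density + 0.8 * grain + 20.0 * dish + rng.normal(0.0, 0.1, n)

        for name, x in (("pellet density", density), ("grain size", grain), ("dish depth", dish)):
            rho = np.corrcoef(x, output)[0, 1]
            print(f"{name}: variance share ~ {rho ** 2:.2f}")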

  13. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, and transient response. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be expressed in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included, as well as three types of structural models: the probabilistic finite-element method (PFEM), probabilistic approximate analysis methods (PAAM), and probabilistic boundary element methods (PBEM). The purpose of probabilistic structural analysis is to give the designer a more realistic ability to assess the importance of uncertainty in the response of a high-performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be placed in the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  14. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, and transient response. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be expressed in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included, as well as three types of structural models: the probabilistic finite-element method (PFEM), probabilistic approximate analysis methods (PAAM), and probabilistic boundary element methods (PBEM). The purpose of probabilistic structural analysis is to give the designer a more realistic ability to assess the importance of uncertainty in the response of a high-performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be placed in the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  15. Electron microprobe analysis and histochemical examination of the calcium distribution in human bone trabeculae: a methodological study using biopsy specimens from post-traumatic osteopenia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obrant, K.J.; Odselius, R.

    1984-01-01

    Energy dispersive X-ray microanalysis (EDX) (or electron microprobe analysis) of the relative intensity for calcium in different bone trabeculae from the tibia epiphysis, and in different parts of one and the same trabecula, was performed on 3 patients who had earlier had a fracture of the ipsilateral tibia diaphysis. The variation in intensity was compared with the histochemical patterns obtained with both the Goldner and the von Kossa staining techniques for detecting calcium in tissues. Previously reported calcium distribution features found to be typical for posttraumatic osteopenia, such as striated mineralization patterns in individual trabeculae and large differences in mineralization level between different trabeculae, could be verified both by means of the two histochemical procedures and from the electron microprobe analysis. A pronounced difference was observed, however, between the two histochemical staining techniques as regards their sensitivity for detecting calcium. Judging from the values obtained from the EDX measurements, the sensitivity of the Goldner technique should be more than ten times higher than that of von Kossa. The EDX measurements gave more detailed information than either of the two histochemical techniques: great variations in the intensity of the calcium peak were found in trabeculae stained as unmineralized as well as mineralized.

  16. Spatial distribution variation and probabilistic risk assessment of exposure to chromium in ground water supplies; a case study in the east of Iran.

    PubMed

    Fallahzadeh, Reza Ali; Khosravi, Rasoul; Dehdashti, Bahare; Ghahramani, Esmail; Omidi, Fariborz; Adli, Abolfazl; Miri, Mohammad

    2018-05-01

    A high concentration of chromium (VI) in groundwater can threaten the health of consumers. In this study, the concentration of chromium (VI) in 18 drinking water wells in Birjand, Iran, was investigated over a period of two years. Non-carcinogenic risk assessment, sensitivity, and uncertainty analysis, as well as identification of the most important variables in determining the non-carcinogenic risk for three age groups (children, teens, and adults), were performed using the Monte Carlo simulation technique. The northern and southern regions of the study area had the highest and lowest chromium concentrations, respectively. The chromium concentrations in 16.66% of the samples, covering an area of 604.79 km2, exceeded the World Health Organization (WHO) guideline (0.05 mg/L). Moran's index analysis showed that the distribution of contamination is clustered. The Hazard Index (HI) values for the children and teens groups were 1.02 and 2.02, respectively, both greater than 1. A sensitivity analysis indicated that the most important factor in calculating the HQ was the concentration of chromium in the consumed water. HQ values higher than 1 represent a high risk for the children group, which should be controlled by reducing the chromium concentration in the drinking water. Copyright © 2018 Elsevier Ltd. All rights reserved.
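
    A minimal sketch of the Monte Carlo hazard-quotient calculation underlying an assessment like this one is given below, using the standard ingestion-exposure formula HQ = (C * IR * EF * ED) / (BW * AT * RfD); the hazard index (HI) would be the sum of HQs over exposure routes. All distribution parameters are illustrative assumptions, not values from the study.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n = 100_000

        # Illustrative input distributions for the children age group.
        C = rng.lognormal(np.log(0.03), 0.5, size=n)      # Cr(VI) in water, mg/L
        IR = rng.normal(1.5, 0.3, size=n).clip(min=0.1)   # water intake, L/day
        BW = rng.normal(15.0, 2.0, size=n).clip(min=5.0)  # body weight, kg
        EF, ED = 350.0, 6.0                               # days/year, years
        AT = ED * 365.0                                   # averaging time, days
        RfD = 0.003                                       # oral reference dose, mg/kg-day

        HQ = (C * IR * EF * ED) / (BW * AT * RfD)
        print(f"median HQ = {np.median(HQ):.2f}, P(HQ > 1) = {(HQ > 1).mean():.2%}")

        # Crude sensitivity check: rank correlation of each input with HQ.
        for name, x in [("C", C), ("IR", IR), ("BW", BW)]:
            print(f"{name}: rho = {spearmanr(x, HQ).correlation:.2f}")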

  17. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but at present it has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results The Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve of 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most the sensitivity was around or even lower than a median value of 0.5. Conclusions When sensitivity, specificity and overall classification accuracy are taken into account, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
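
    The comparison described above can be sketched as a cross-validated loop over candidate classifiers, scoring each fold for sensitivity, specificity and accuracy. The sketch below uses scikit-learn with a synthetic dataset standing in for the ten neuropsychological test scores, and covers four of the ten classifiers for brevity.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix
        from sklearn.model_selection import StratifiedKFold
        from sklearn.svm import SVC

        # Synthetic stand-in for 10 neuropsychological test scores and a
        # binary outcome (progression to dementia or not).
        X, y = make_classification(n_samples=400, n_features=10, random_state=1)

        models = {
            "LDA": LinearDiscriminantAnalysis(),
            "Logistic": LogisticRegression(max_iter=1000),
            "SVM": SVC(),
            "RandomForest": RandomForestClassifier(random_state=1),
        }

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
        for name, model in models.items():
            sens, spec, acc = [], [], []
            for train, test in cv.split(X, y):
                model.fit(X[train], y[train])
                tn, fp, fn, tp = confusion_matrix(y[test], model.predict(X[test])).ravel()
                sens.append(tp / (tp + fn))
                spec.append(tn / (tn + fp))
                acc.append((tp + tn) / (tp + tn + fp + fn))
            print(f"{name:12s} sens={np.median(sens):.2f} "
                  f"spec={np.median(spec):.2f} acc={np.median(acc):.2f}")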

  18. Optical skin friction measurement technique in hypersonic wind tunnel

    NASA Astrophysics Data System (ADS)

    Chen, Xing; Yao, Dapeng; Wen, Shuai; Pan, Junjie

    2016-10-01

    Shear-sensitive liquid-crystal coatings (SSLCCs) have the optical property of responding to applied shear stress. Based on this, a novel technique is developed to measure the shear stress on a model surface, in both magnitude and direction, in hypersonic flow. An optical skin friction measurement system was built at the China Academy of Aerospace Aerodynamics (CAAA). A series of wind tunnel experiments on a hypersonic vehicle model was performed at CAAA. The global skin friction distribution of the model, which reveals complicated flow structures, is discussed, and a brief mechanism analysis and an evaluation of the optical measurement technique are given.

  19. Working covariance model selection for generalized estimating equations.

    PubMed

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
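
    The geodesic-distance criterion can be sketched by fitting a GEE under candidate working covariance models and measuring the discrepancy between the model-sensitive (naive) and model-robust covariance estimates of the regression coefficients; a smaller discrepancy suggests a better-specified working model. The sketch below uses statsmodels (assumed available) on synthetic exchangeable longitudinal data, with a log-eigenvalue distance as one plausible reading of the geodesic discrepancy, not necessarily the paper's exact formula.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.genmod.cov_struct import Exchangeable, Independence

        rng = np.random.default_rng(2)
        n_subj, n_obs = 200, 4
        groups = np.repeat(np.arange(n_subj), n_obs)
        x = rng.normal(size=n_subj * n_obs)
        subj_effect = np.repeat(rng.normal(scale=0.7, size=n_subj), n_obs)
        y = 1.0 + 0.5 * x + subj_effect + rng.normal(size=n_subj * n_obs)
        X = sm.add_constant(x)

        def discrepancy(res):
            # Log-eigenvalue (geodesic-type) distance between the naive and
            # robust covariance estimates of the regression coefficients.
            eig = np.linalg.eigvals(np.linalg.solve(res.cov_naive, res.cov_robust))
            return float(np.sqrt((np.log(eig.real) ** 2).sum()))

        for name, cs in [("independence", Independence()),
                         ("exchangeable", Exchangeable())]:
            res = sm.GEE(y, X, groups=groups, cov_struct=cs).fit()
            print(f"{name:13s} discrepancy = {discrepancy(res):.3f}")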

  20. Study of polarization properties of fiber-optics probes with use of a binary phase plate.

    PubMed

    Alferov, S V; Khonina, S N; Karpeev, S V

    2014-04-01

    We conduct a theoretical and experimental study of the distribution of the electric field components in the sharp focal domain when a zone plate with a π-phase jump placed in the focused beam is rotated. By comparing the theoretical and experimental results for several kinds of near-field probes, we analyze the polarization sensitivity of different types of metal-coated aperture probes. It is demonstrated that, with increasing diameter of the non-metal-coated tip part, a substantial redistribution of sensitivity occurs in favor of the transverse electric field components, together with an increase in the probe's energy throughput.

  1. Novel railway-subgrade vibration monitoring technology using phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoyong; Lu, Bin; Zheng, Hanrong; Ye, Qing; Pan, Zhengqing; Cai, Haiwen; Qu, Ronghui; Fang, Zujie; Zhao, Howell

    2017-04-01

    High-speed railway is developing rapidly, and its safety, covering both infrastructure and train operation, is vital. This paper presents a railway-subgrade vibration monitoring scheme based on phase-sensitive OTDR for railway safety. The subgrade vibration is detected and reconstructed, and multi-dimension comprehensive analysis (MDCA) is proposed to identify running-train signals and illegal construction along the railway. To the best of our knowledge, this is the first time such a railway-subgrade vibration monitoring scheme has been proposed. The scheme is proven effective by field tests of real-time train tracking and monitoring of activities along the railway, and it provides a new passive, distributed means of all-weather railway-subgrade vibration monitoring.

  2. Multi-Scale Morphological Analysis of Conductance Signals in Vertical Upward Gas-Liquid Two-Phase Flow

    NASA Astrophysics Data System (ADS)

    Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying

    2016-11-01

    Multi-scale analysis is an important method for investigating nonlinear systems. In this study, we carry out experiments and measure the fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from the measured signals. We then apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot, with the bisector of the second-fourth quadrant as the reference line, is sensitive to the inhomogeneous distribution characteristics of the flow structure, and that the variation trend of the exponent is helpful for understanding the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase, and that the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.

  3. The Impact of Assimilating Precipitation-affected Radiance on Cloud and Precipitation in Goddard WRF-EDAS Analyses

    NASA Technical Reports Server (NTRS)

    Lin, Xin; Zhang, Sara Q.; Zupanski, M.; Hou, Arthur Y.; Zhang, J.

    2015-01-01

    High-frequency TMI and AMSR-E radiances, which are sensitive to precipitation over land, are assimilated into the Goddard Weather Research and Forecasting Model-Ensemble Data Assimilation System (WRF-EDAS) for several heavy rain events over the continental US. Independent observations from surface rainfall, satellite IR brightness temperatures, and ground-radar reflectivity profiles are used to evaluate the impact of assimilating rain-sensitive radiances on cloud and precipitation within WRF-EDAS. The evaluations go beyond comparisons of forecast skill and domain-mean statistics, focusing on the cloud and precipitation features in the joint rain-radiance and rain-cloud space, with particular attention to the vertical distributions of height-dependent cloud types and the collective effect of cloud hydrometeors. Such a methodology is very helpful for understanding the limitations and sources of errors in rain-affected radiance assimilation. It is found that the assimilation of rain-sensitive radiances can reduce the mismatch between model analyses and observations by reasonably enhancing or reducing convective intensity over areas where the observations indicate precipitation, and by suppressing convection over areas where the model forecast indicates rain but the observations do not. It is also noted that, instead of generating sufficient low-level warm-rain clouds as in the observations, the model analysis tends to produce many spurious upper-level clouds containing small amounts of ice water content. This discrepancy is associated with insufficient information in ice-water-sensitive radiances to constrain the vertical distribution of clouds with small amounts of ice water content. Such a problem will likely be mitigated when multi-channel, multi-frequency radiances/reflectivity are assimilated over land along with sufficiently accurate surface emissivity information to better constrain the vertical distribution of cloud hydrometeors.

  4. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions compared to the calibration samples, which often biases the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is much simpler from the theoretical point of view and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
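
    The construction of the one-dimensional Rossi-alpha distribution can be sketched directly from a pulse train: for each pulse, the arrival times of all later pulses falling inside a fixed window are histogrammed. The sketch below uses a synthetic Poisson pulse train, so the resulting distribution is flat and shows only accidental coincidences; the window length and count rate are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        rate = 1.0e4                # mean count rate, counts/s
        t = np.cumsum(rng.exponential(1.0 / rate, size=100_000))  # pulse times, s

        window = 512e-6             # Rossi-alpha analysis window, s
        bins = np.linspace(0.0, window, 65)
        hist = np.zeros(len(bins) - 1)

        # Accumulate the intervals between each pulse and every later pulse
        # inside the window (k-th neighbour differences, vectorized over k).
        k = 1
        while True:
            dt = t[k:] - t[:-k]
            dt = dt[dt < window]
            if dt.size == 0:
                break
            hist += np.histogram(dt, bins=bins)[0]
            k += 1

        # For a pure Poisson train the distribution is flat; correlated fission
        # chains would add a decaying exponential component on top of this
        # accidental-coincidence baseline.
        print(hist.astype(int))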

  5. Star Cluster Properties in Two LEGUS Galaxies Computed with Stochastic Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik

    2015-10-01

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  6. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells.

    PubMed

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber into a top-hat profile with a very homogeneous intensity distribution that fulfills the Koehler illumination conditions. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman scattered light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied to automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup, in combination with the robust algorithm, provides a novel, elegant way for chemically selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Neutron density distributions of neutron-rich nuclei studied with the isobaric yield ratio difference

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Bai, Xiao-Man; Yu, Jiao; Wei, Hui-Ling

    2014-09-01

    The isobaric yield ratio difference (IBD) between two reactions with similar experimental setups is found to be sensitive to nuclear density differences between projectiles. In this article, the IBD probe is used to study the density variation in neutron-rich 48Ca. By adjusting the diffuseness of the neutron density distribution, three different neutron density distributions of 48Ca are obtained. The yields of fragments in the 80A MeV 40,48Ca + 12C reactions are calculated using a modified statistical abrasion-ablation model. It is found that the IBD results obtained from the prefragments are sensitive to the density distribution of the projectile, while the IBD results from the final fragments are less sensitive to it.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentgraf, Florian; Baum, Elias; Dreizler, Andreas

    Planar particle image velocimetry (PIV) and tomographic PIV (TPIV) measurements are utilized to analyze turbulent statistical quantities and the instantaneous turbulence within a single-cylinder optical engine. Measurements are performed during the intake and mid-compression strokes at 800 and 1500 RPM. TPIV facilitates the evaluation of spatially resolved Reynolds stress tensor (RST) distributions, anisotropic Reynolds stress invariants, and instantaneous turbulent vortical structures. The RST analysis describes distributions of individual velocity fluctuation components that arise from unsteady turbulent flow behavior as well as cycle-to-cycle variability (CCV). A conditional analysis, in which instantaneous PIV images are sampled by their tumble center location, reveals that CCV and turbulence make similar contributions to RST distributions at the mean tumble center, but turbulence is dominant in regions peripheral to the tumble center. Analysis of the anisotropic Reynolds stress invariants reveals the spatial distribution of axisymmetric expansion, axisymmetric contraction, and 3D isotropy within the cylinder. Findings indicate that the mid-compression flow exhibits a higher tendency toward 3D isotropy than the intake flow. A novel post-processing algorithm is utilized to classify the geometry of instantaneous turbulent vortical structures and evaluate their frequency of occurrence within the cylinder. Findings are coupled with statistical theory quantities to provide a comprehensive understanding of the distribution of turbulent velocity components and of the anisotropic states of turbulence, and to compare the turbulent vortical flow distribution that is theoretically expected with what is experimentally observed. The analyses reveal the behavior of important turbulent flow quantities and discern their sensitivity to the local flow topography and engine operation.

  9. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty of calculation results and their sensitivity to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need is shown for an alternative method of estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block. Such a method should involve a minimal degree of subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. Using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law within that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated on the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  10. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes' rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
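
    The regression-based estimator can be sketched as follows: for each draw in the probabilistic sensitivity analysis (PSA) sample, generate one plausible dataset from the proposed design, reduce it to a summary statistic, regress each option's net benefit on that summary, and take EVSI as the mean of the fitted maxima minus the maximum of the mean net benefits. The decision model, data-generating step and polynomial regression below are toy stand-ins for illustration; the paper's method would use a flexible smoother such as a GAM.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000

        # PSA sample: one uncertain parameter theta and the net benefits of
        # two decision options (option 1 depends on theta, option 0 does not).
        theta = rng.normal(0.0, 1.0, size=n)
        nb = np.column_stack([np.zeros(n),              # option 0: baseline
                              2000.0 * theta - 500.0])  # option 1

        # Proposed study: for each PSA draw, generate one plausible dataset
        # and reduce it to a summary statistic (the mean of 50 observations).
        xbar = theta + rng.normal(0.0, 1.0 / np.sqrt(50), size=n)

        # Regress each option's net benefit on the summary statistic; the
        # fitted values estimate E[NB | data].
        fitted = np.column_stack([
            np.polyval(np.polyfit(xbar, nb[:, d], deg=3), xbar) for d in range(2)
        ])

        # EVSI = E[max_d E(NB_d | data)] - max_d E(NB_d).
        evsi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
        print(f"per-patient EVSI ~ {evsi:.1f} (monetary units)")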

  11. Computational study of the heat transfer of an avian egg in a tray.

    PubMed

    Eren Ozcan, S; Andriessens, S; Berckmans, D

    2010-04-01

    The development of an embryo in an avian egg depends largely on its temperature. The embryo temperature is affected by its environment and the heat produced by the egg. In this paper, eggshell temperature and the heat transfer characteristics from one egg in a tray toward its environment are studied by means of computational fluid dynamics (CFD). Computational fluid dynamics simulations have the advantage of providing extensive 3-dimensional information on velocity and eggshell temperature distribution around an egg that otherwise is not possible to obtain by experiments. However, CFD results need to be validated against experimental data. The objectives were (1) to find out whether CFD can successfully simulate eggshell temperature from one egg in a tray by comparing to previously conducted experiments, (2) to visualize air flow and air temperature distribution around the egg in a detailed way, and (3) to perform sensitivity analysis on several variables affecting heat transfer. To this end, a CFD model was validated using 2 sets of temperature measurements yielding an effective model. From these simulations, it can be concluded that CFD can effectively be used to analyze heat transfer characteristics and eggshell temperature distribution around an egg. In addition, air flow and temperature distribution around the egg are visualized. It has been observed that temperature differences up to 2.6 degrees C are possible at high heat production (285 mW) and horizontal low flow rates (0.5 m/s). Sensitivity analysis indicates that average eggshell temperature is mainly affected by the inlet air velocity and temperature, flow direction, and the metabolic heat of the embryo and less by the thermal conductivity and emissivity of the egg and thermal emissivity of the tray.

  12. Modelling and analysis of solar cell efficiency distributions

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines, based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of the examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful when ramping up production, but can also be applied to enhance established manufacturing.
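
    A variance-based (Sobol) sensitivity analysis of a cell-efficiency metamodel can be sketched with the SALib package (assumed available). The polynomial "metamodel" and the parameter names and ranges below are illustrative assumptions, not the fitted metamodel from the study; the printed first-order (S1) and total (ST) indices are the quantities used to rank the influence of each input on the efficiency distribution.

        import numpy as np
        from SALib.analyze import sobol
        from SALib.sample import saltelli

        # Hypothetical input parameters and ranges for the metamodel.
        problem = {
            "num_vars": 3,
            "names": ["bulk_lifetime_us", "contact_resistance_ohm_cm2", "finger_width_m"],
            "bounds": [[50.0, 500.0], [1e-3, 5e-3], [30e-6, 60e-6]],
        }

        def efficiency_metamodel(x):
            """Illustrative polynomial stand-in for the fitted cell metamodel."""
            tau, rc, wf = x[:, 0], x[:, 1], x[:, 2]
            return 17.0 + 0.8 * np.log(tau / 50.0) - 300.0 * rc - 8000.0 * wf

        X = saltelli.sample(problem, 1024)   # (N * (2D + 2), D) sample matrix
        Y = efficiency_metamodel(X)
        Si = sobol.analyze(problem, Y)

        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name:28s} S1 = {s1:5.2f}  ST = {st:5.2f}")
        print(f"simulated efficiency: {Y.mean():.2f}% +/- {Y.std():.2f}%")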

  13. Determination of the proton-to-helium ratio in cosmic rays at ultra-high energies from the tail of the Xmax distribution

    NASA Astrophysics Data System (ADS)

    Yushkov, A.; Risse, M.; Werner, M.; Krieg, J.

    2016-12-01

    We present a method to determine the proton-to-helium ratio in cosmic rays at ultra-high energies. It makes use of the exponential slope, Λ, of the tail of the Xmax distribution measured by an air shower experiment. The method is quite robust with respect to uncertainties from the modeling of hadronic interactions, to systematic errors on Xmax and energy, and to the possible presence of primary nuclei heavier than helium. Obtaining the proton-to-helium ratio with air shower experiments would be a remarkable achievement. To quantify the applicability of a particular mass-sensitive variable for mass composition analysis despite hadronic uncertainties, we introduce the 'analysis indicator' as a metric and find an improved performance of the Λ method compared to other variables currently used in the literature. The fraction of events in the tail of the Xmax distribution can provide additional information on the presence of nuclei heavier than helium in the primary beam.
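
    The slope extraction itself is simple: beyond a threshold X0 the tail is modeled as proportional to exp(-Xmax/Λ), and the maximum-likelihood estimate of Λ is the mean exceedance above X0. The sketch below applies this to a synthetic Xmax sample; the Gumbel-shaped toy distribution, the threshold choice and the units are illustrative assumptions, not air-shower simulations.

        import numpy as np

        rng = np.random.default_rng(11)

        # Toy Xmax sample (g/cm^2): a Gumbel-shaped bulk whose deep tail
        # decays approximately exponentially.
        xmax = rng.gumbel(loc=750.0, scale=40.0, size=50_000)

        x0 = np.quantile(xmax, 0.95)        # tail threshold: deepest 5% of events
        tail = xmax[xmax > x0]

        lam = (tail - x0).mean()            # MLE of the exponential slope Lambda
        lam_err = lam / np.sqrt(tail.size)  # 1/sqrt(N) statistical uncertainty
        print(f"Lambda = {lam:.1f} +/- {lam_err:.1f} g/cm^2 from {tail.size} events")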

  14. Characterization and identification of a chlorine-resistant bacterium, Sphingomonas TS001, from a model drinking water distribution system.

    PubMed

    Sun, Wenjun; Liu, Wenjun; Cui, Lifeng; Zhang, Minglu; Wang, Bei

    2013-08-01

    This study describes the identification and characterization of a new chlorine-resistant bacterium, Sphingomonas TS001, isolated from a model drinking water distribution system. The isolate was identified by 16s rRNA gene analysis and by morphological and physiological characteristics. Phylogenetic analysis indicates that TS001 belongs to the genus Sphingomonas. The heterotrophic plate count (HPC) results for the model distribution system showed that, when the chlorine residual was greater than 0.7 mg L(-1), 100% of the detected heterotrophic bacteria were TS001. Bench-scale inactivation efficiency testing showed that this strain is very resistant to chlorine: 4 mg L(-1) of chlorine with a 240 min retention time provided only approximately 5% reduction in the viability of TS001. In contrast, a 3-log inactivation (99.9%) was obtained for UV fluences of 40 mJ cm(-2). A highly chlorine-resistant, UV-sensitive bacterium, Sphingomonas TS001, is documented for the first time. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Multiscale Hierarchical Design of a Flexible Piezoresistive Pressure Sensor with High Sensitivity and Wide Linearity Range.

    PubMed

    Shi, Jidong; Wang, Liu; Dai, Zhaohe; Zhao, Lingyu; Du, Mingde; Li, Hongbian; Fang, Ying

    2018-05-30

    Flexible piezoresistive pressure sensors have been attracting wide attention for applications in health monitoring and human-machine interfaces because of their simple device structure and easily read-out signals. For practical applications, flexible pressure sensors with both high sensitivity and a wide linearity range are highly desirable. Herein, a simple and low-cost method for the fabrication of a flexible piezoresistive pressure sensor with a hierarchical structure over large areas is presented. The piezoresistive pressure sensor consists of arrays of microscale papillae with nanoscale roughness, produced by replicating the lotus leaf's surface and spray-coating it with graphene ink. Finite element analysis (FEA) shows that the hierarchical structure governs the deformation behavior and pressure distribution at the contact interface, leading to a quick and steady increase in contact area with load. As a result, the piezoresistive pressure sensor demonstrates a high sensitivity of 1.2 kPa(-1) and a wide linearity range from 0 to 25 kPa. The flexible pressure sensor is applied to sensitive monitoring of small vibrations, including wrist pulse and acoustic waves. Moreover, a piezoresistive pressure sensor array is fabricated for mapping the spatial distribution of pressure. These results highlight the potential applications of the flexible piezoresistive pressure sensor in health monitoring and electronic skin. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Diagnosing and Mapping Pulmonary Emphysema on X-Ray Projection Images: Incremental Value of Grating-Based X-Ray Dark-Field Imaging

    PubMed Central

    Meinel, Felix G.; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö.; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F.; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    Purpose To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Materials and Methods Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between emphysema and control group. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using transmission, dark-field and normalized scatter signal and correlated with histopathology. Results Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p<0.001). There was no difference in median dark-field signal intensities between both groups (0.66 vs. 0.66). Median normalized scatter was significantly lower in the emphysematous lungs compared to controls (4.9 vs. 10.8, p<0.001), and was the best parameter for differentiation of healthy vs. emphysematous lung tissue. In a per-pixel analysis, the area under the ROC curve (AUC) for the normalized scatter value was significantly higher than for transmission (0.86 vs. 0.78, p<0.001) and dark-field value (0.86 vs. 0.52, p<0.001) alone. Normalized scatter showed very high sensitivity for a wide range of specificity values (94% sensitivity at 75% specificity). Using the normalized scatter signal to display the regional distribution of emphysema provides color-coded parametric maps, which show the best correlation with histopathology. Conclusion In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections. PMID:23555692

  17. Diagnosing and mapping pulmonary emphysema on X-ray projection images: incremental value of grating-based X-ray dark-field imaging.

    PubMed

    Meinel, Felix G; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between emphysema and control group. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using transmission, dark-field and normalized scatter signal and correlated with histopathology. Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p<0.001). There was no difference in median dark-field signal intensities between both groups (0.66 vs. 0.66). Median normalized scatter was significantly lower in the emphysematous lungs compared to controls (4.9 vs. 10.8, p<0.001), and was the best parameter for differentiation of healthy vs. emphysematous lung tissue. In a per-pixel analysis, the area under the ROC curve (AUC) for the normalized scatter value was significantly higher than for transmission (0.86 vs. 0.78, p<0.001) and dark-field value (0.86 vs. 0.52, p<0.001) alone. Normalized scatter showed very high sensitivity for a wide range of specificity values (94% sensitivity at 75% specificity). Using the normalized scatter signal to display the regional distribution of emphysema provides color-coded parametric maps, which show the best correlation with histopathology. In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections.

  18. PHOTOTROPISM OF GERMINATING MYCELIA OF SOME PARASITIC FUNGI

    DTIC Science & Technology

    Uredinales on young wheat plants; distribution and significance of the phototropism of germinating mycelia -- confirmation of older data, examination of eight additional Uredinales, and the probable meaning of negative phototropism for the occurrence of infection; analysis of the stimulus physiology of the reaction -- the minimum effective illumination intensity, the effective spectral region, inversion of the phototropic reaction in liquid paraffin, the negative light-growth reaction, and the light-sensitive zone.

  19. Wavelet analysis of biological tissue's Mueller-matrix images

    NASA Astrophysics Data System (ADS)

    Tomka, Yu. Ya.

    2008-05-01

    The interrelations between the 1st- to 4th-order statistics of ensembles of Mueller-matrix images and the geometric structure of birefringent architectonic nets of different morphological structure have been analyzed. The sensitivity of the skewness and kurtosis of the statistical distributions of the matrix elements Cik to changes in the orientation structure of the optically anisotropic protein fibrils of physiologically normal and pathologically changed biological tissue architectonics has been shown.

  20. Uncertainty in Damage Detection, Dynamic Propagation and Just-in-Time Networks

    DTIC Science & Technology

    2015-08-03

    estimated parameter uncertainty in dynamic data sets; high-order compact finite difference schemes for Helmholtz equations with discontinuous wave numbers ... delay differential equations with a Gamma distributed delay. We found that with the same population size the histogram plots for the solution to the ... schemes for Helmholtz equations with discontinuous wave numbers across interfaces. • We carried out numerical sensitivity analysis with respect to

  1. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. It was therefore concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of analyses under plausible alternative models and varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  3. The Node Deployment of Intelligent Sensor Networks Based on the Spatial Difference of Farmland Soil.

    PubMed

    Liu, Naisen; Cao, Weixing; Zhu, Yan; Zhang, Jingchao; Pang, Fangrong; Ni, Jun

    2015-11-11

    Considering that agricultural production is characterized by vast areas, scattered fields and long crop growth cycles, intelligent wireless sensor networks (WSNs) are suitable for monitoring crop growth information. Cost and coverage are the key indexes for WSN applications. Differences in crop conditions are influenced by the spatial distribution of soil nutrients: if the nutrients are distributed evenly, the crop conditions are expected to be approximately uniform, with little difference; if not, there will be great differences in crop conditions. In accordance with the differences in the spatial distribution of soil information in farmland, fuzzy c-means clustering was applied to divide the farmland into several areas, within each of which the soil fertility is nearly uniform. The crop growth information in each area can then be monitored with complete coverage by deploying a single sensor node there, which greatly decreases the number of deployed sensor nodes. Moreover, in order to accurately judge the optimal cluster number for fuzzy c-means clustering, a discriminant function based on the Normalized Intra-Cluster Coefficient of Variation (NICCV) was established. A sensitivity analysis indicates that NICCV is insensitive to the fuzzy weighting exponent but shows a strong sensitivity to the number of clusters.
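
    The zoning step can be sketched as fuzzy c-means over soil attributes, scored across candidate cluster counts. The NICCV formula below is a simplified assumption (the mean intra-cluster coefficient of variation normalized by the whole-field coefficient of variation), not necessarily the paper's exact definition, and the synthetic soil map is purely illustrative.

        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
            """Plain fuzzy c-means: returns cluster centers and memberships."""
            rng = np.random.default_rng(seed)
            u = rng.random((c, len(X)))
            u /= u.sum(axis=0)                  # memberships sum to 1 per point
            for _ in range(iters):
                w = u ** m
                centers = w @ X / w.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
                p = 2.0 / (m - 1.0)
                u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=0))
            return centers, u

        def niccv(X, labels):
            """Simplified NICCV: mean intra-cluster CV over the whole-field CV."""
            cvs = [X[labels == k].std() / X[labels == k].mean()
                   for k in np.unique(labels)]
            return np.mean(cvs) / (X.std() / X.mean())

        rng = np.random.default_rng(1)
        # Synthetic soil map: three fertility zones in a 2-attribute space.
        soil = np.vstack([rng.normal(mu, 0.3, size=(60, 2))
                          for mu in (1.0, 2.0, 3.0)])

        for c in range(2, 6):
            _, u = fuzzy_cmeans(soil, c)
            print(f"c={c}: NICCV={niccv(soil, u.argmax(axis=0)):.3f}")  # lower = more uniform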

  4. Future Needs and Recommendations in the Development of Species Sensitivity Distributions: Estimating Toxicity Thresholds for Aquatic Ecological Communities and Assessing Impacts of Chemical Exposures

    EPA Science Inventory

    A species sensitivity distribution (SSD) is a probability model of the variation of species sensitivities to a stressor, in particular chemical exposure. The SSD approach has been used as a decision support tool in environmental protection and management since the 1980s, and the ...
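
    In its simplest form, the SSD approach fits a parametric distribution to single-species toxicity endpoints and reads off a percentile such as the hazardous concentration for 5% of species (HC5). The sketch below fits a two-parameter log-normal SSD with scipy; the toxicity values are made up for illustration only.

        import numpy as np
        from scipy import stats

        # Hypothetical acute toxicity endpoints (e.g., LC50 in ug/L) for a
        # set of species; made up for illustration.
        lc50 = np.array([12.0, 35.0, 48.0, 80.0, 150.0, 230.0, 410.0, 900.0])

        # Fit a two-parameter log-normal SSD (location fixed at zero).
        shape, loc, scale = stats.lognorm.fit(lc50, floc=0)

        # HC5: concentration below which 5% of species are expected affected.
        hc5 = stats.lognorm.ppf(0.05, shape, loc, scale)
        print(f"HC5 = {hc5:.1f} ug/L")

        # Potentially affected fraction (PAF) at a given exposure level.
        exposure = 30.0
        paf = stats.lognorm.cdf(exposure, shape, loc, scale)
        print(f"PAF at {exposure} ug/L: {paf:.1%}")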

  5. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
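
    The quantity being interval-estimated is easy to state in code: the sensitivity of the test at the cutoff whose specificity equals the fixed level. The sketch below estimates it on synthetic scores and attaches a bootstrap percentile interval as a simple stand-in for the empirical likelihood-based intervals proposed in the paper; the score distributions and the 90% specificity level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(5)
        healthy = rng.normal(0.0, 1.0, size=200)   # test scores, non-diseased
        diseased = rng.normal(1.5, 1.0, size=150)  # test scores, diseased
        spec0 = 0.90                               # fixed specificity level

        def sens_at_spec(h, d, spec):
            cutoff = np.quantile(h, spec)          # cutoff achieving the specificity
            return (d > cutoff).mean()

        est = sens_at_spec(healthy, diseased, spec0)
        boot = np.array([
            sens_at_spec(rng.choice(healthy, healthy.size),
                         rng.choice(diseased, diseased.size), spec0)
            for _ in range(2000)
        ])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"sensitivity at {spec0:.0%} specificity: {est:.2f} "
              f"(95% bootstrap CI {lo:.2f}-{hi:.2f})")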

  6. Adjustment of the dynamic weight distribution as a sensitive parameter for diagnosis of postural alteration in a rodent model of vestibular deficit

    PubMed Central

    Tighilet, Brahim; Péricat, David; Frelat, Alais; Cazals, Yves; Rastoldo, Guillaume; Boyer, Florent; Dumas, Olivier

    2017-01-01

    Vestibular disorders, by inducing significant posturo-locomotor and cognitive deficits, can severely impair the most basic tasks of everyday life. Their precise diagnosis is essential to the implementation of appropriate therapeutic countermeasures, and monitoring their evolution is also very important for validating, or on the contrary adapting, the therapeutic actions undertaken. To date, the methods for diagnosing posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify, in real time and in an automated way, the weight bearing of the animal on the ground. We show that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo-lesioned rodents. The characteristics of the alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Combined with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to dizzy or unstable patients could improve the detection of vestibular deficits and allow better prediction of their impact on posture and gait. It could thus also allow a better follow-up of therapeutic approaches for rehabilitating gait and balance. PMID:29112981

  7. Adjustment of the dynamic weight distribution as a sensitive parameter for diagnosis of postural alteration in a rodent model of vestibular deficit.

    PubMed

    Tighilet, Brahim; Péricat, David; Frelat, Alais; Cazals, Yves; Rastoldo, Guillaume; Boyer, Florent; Dumas, Olivier; Chabbert, Christian

    2017-01-01

    Vestibular disorders, by inducing significant posturo-locomotor and cognitive deficits, can severely impair the most basic tasks of everyday life. Their precise diagnosis is essential to the implementation of appropriate therapeutic countermeasures, and monitoring their evolution is also very important for validating, or on the contrary adapting, the therapeutic actions undertaken. To date, the methods for diagnosing posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify, in real time and in an automated way, the weight bearing of the animal on the ground. We show that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo-lesioned rodents. The characteristics of the alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Combined with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to dizzy or unstable patients could improve the detection of vestibular deficits and allow better prediction of their impact on posture and gait. It could thus also allow a better follow-up of therapeutic approaches for rehabilitating gait and balance.

  8. Automated Detector of High Frequency Oscillations in Epilepsy Based on Maximum Distributed Peak Points.

    PubMed

    Ren, Guo-Ping; Yan, Jia-Qing; Yu, Zhi-Xin; Wang, Dan; Li, Xiao-Nan; Mei, Shan-Shan; Dai, Jin-Dong; Li, Xiao-Li; Li, Yun-Lin; Wang, Xiao-Fei; Yang, Xiao-Feng

    2018-02-01

    High frequency oscillations (HFOs) are considered a biomarker of epileptogenicity. Reliable automation of HFO detection is necessary for rapid and objective analysis, and it depends on accurate computation of the baseline. Although most existing automated detectors measure the baseline accurately in channels with rare HFOs, they lose accuracy in channels with frequent HFOs. Here, we proposed a novel algorithm using the maximum distributed peak points method to improve baseline determination accuracy in channels with wide HFO activity ranges and to calculate a dynamic baseline. Interictal ripples (80-200 Hz), fast ripples (FRs, 200-500 Hz) and baselines in intracerebral EEGs from seven patients with intractable epilepsy were identified by experienced reviewers and by our computer-automated program, and the results were compared. We also compared the performance of our detector to four well-known detectors integrated in RIPPLELAB. The sensitivity and specificity of our detector were, respectively, 71% and 75% for ripples and 66% and 84% for FRs. Spearman's rank correlation coefficient comparing automated and manual detection was [Formula: see text] for ripples and [Formula: see text] for FRs ([Formula: see text]). In comparison to other detectors, our detector had relatively higher sensitivity and specificity. In conclusion, our automated detector is able to accurately calculate a dynamic iEEG baseline in channels with different HFO activity using the maximum distributed peak points method, resulting in higher sensitivity and specificity than other available HFO detectors.
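
    To make the reported performance metrics concrete, the sketch below computes epoch-based sensitivity and specificity by matching automated detections against expert-marked events. This is a minimal illustration assuming Python with NumPy; the epoch length, the matching rule, and all event times are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def epoch_labels(events, n_epochs, epoch_len):
    """Mark each fixed-length epoch True if any event overlaps it."""
    labels = np.zeros(n_epochs, dtype=bool)
    for start, end in events:
        first = int(start // epoch_len)
        last = int(end // epoch_len)
        labels[first:min(last + 1, n_epochs)] = True
    return labels

def sens_spec(ref_events, det_events, duration, epoch_len=0.5):
    """Epoch-based sensitivity/specificity of a detector vs. expert marks."""
    n = int(np.ceil(duration / epoch_len))
    ref = epoch_labels(ref_events, n, epoch_len)
    det = epoch_labels(det_events, n, epoch_len)
    tp = np.sum(ref & det)    # expert epochs the detector also flags
    fn = np.sum(ref & ~det)   # expert epochs the detector misses
    tn = np.sum(~ref & ~det)  # quiet epochs left alone
    fp = np.sum(~ref & det)   # quiet epochs falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# Toy event lists, (start, end) in seconds over a 10 s trace.
ref = [(1.00, 1.10), (4.20, 4.30), (7.50, 7.60)]
det = [(1.02, 1.09), (5.00, 5.05), (7.48, 7.62)]
print(sens_spec(ref, det, duration=10.0))
```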

  9. Analysis of the COS B data for evidence of linear polarization of VELA pulsar gamma rays

    NASA Astrophysics Data System (ADS)

    Mattox, John R.; Mayer-Hasselwander, Hans A.; Strong, Andy W.

    1990-11-01

    The COS B spark chamber telescope observations of the Vela pulsar were analyzed for gamma-ray polarization. No significant quadrupole moment is found in the azimuthal distribution of the electron-positron pair production planes. However, analysis of the sensitivity indicates that even 100-percent polarization would not be detected. Therefore, the null result does not constrain the polarization of the Vela pulsar gamma-ray emission. This result contradicts the report of Caraveo et al. (1988) of possible evidence for polarization of the Vela pulsar gamma rays.

  10. Characterization of a novel 132-bp exon of the human maxi-K channel.

    PubMed

    Korovkina, V P; Fergus, D J; Holdiman, A J; England, S K

    2001-07-01

    The large-conductance Ca2+-activated voltage-dependent K+ channel (maxi-K channel) induces a significant repolarizing current that buffers cell excitability. This channel derives its diversity from alternative splicing of its transcript, producing isoforms that differ in their sensitivity to voltage and intracellular Ca2+. We have identified a novel 132-bp exon of the maxi-K channel from human myometrial cells that encodes 44 amino acids within the first intracellular loop of the channel protein. Distribution analysis reveals that this exon is expressed predominantly in human smooth muscle tissues, with the highest abundance in the uterus and aorta, and resembles the previously reported distribution of the total maxi-K channel transcript. Single-channel K+ current measurements in fibroblasts transfected with the maxi-K channel containing this novel 132-bp exon demonstrate that the presence of this insert attenuates the sensitivity to voltage and intracellular Ca2+. Alternative splicing to introduce this 132-bp exon into the maxi-K channel may provide another mode of modulating cell excitability.

  11. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm are presented in the example problem.

  12. Validation of a Brazilian version of the moral sensitivity questionnaire.

    PubMed

    Dalla Nora, Carlise R; Zoboli, Elma Lcp; Vieira, Margarida M

    2017-01-01

    Moral sensitivity has been identified as a foundational component of ethical action. Diminished or absent moral sensitivity can result in deficient care. In this context, assessing moral sensitivity is imperative for designing interventions to facilitate ethical practice and ensure that nurses make appropriate decisions. The main purpose of this study was to validate a scale for examining the moral sensitivity of Brazilian nurses. A pre-existing scale, the Moral Sensitivity Questionnaire, which was developed by Lützén, was used after the deletion of three items. The reliability and validity of the scale were examined using Cronbach's alpha and factor analysis, respectively. Participants and research context: Overall, 316 nurses from Rio Grande do Sul, Brazil, participated in the study. Ethical considerations: This study was approved by the Ethics Committee of Research of the Nursing School of the University of São Paulo. The Moral Sensitivity Questionnaire contained 27 items that were distributed across four dimensions: interpersonal orientation, professional knowledge, moral conflict and moral meaning. The questionnaire accounted for 55.8% of the total variance, with Cronbach's alpha of 0.82. The mean score for moral sensitivity was 4.45 (out of 7). The results of this study were compared with studies from other countries to examine the structure and implications of the moral sensitivity of nurses in Brazil. The Moral Sensitivity Questionnaire is an appropriate tool for examining the moral sensitivity of Brazilian nurses.

  13. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with palm oil mill effluent (POME) added as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, over the 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also performed to evaluate the impact of each kinetic parameter on fermentation performance; bioethanol fermentation was found to depend chiefly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
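
    The parametric-uncertainty Monte Carlo described above can be sketched in a few lines: sample the kinetic parameters from assumed distributions, propagate each sample through the model, and summarize the resulting yield distribution. The toy algebraic response below stands in for the authors' fermentation model, and all parameter names, distributions, and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo samples, matching the study's count

# Hypothetical kinetic parameters (mean, sd); values are illustrative only.
mu_max = rng.normal(0.30, 0.015, n)   # 1/h, maximum specific growth rate
y_xs   = rng.normal(0.12, 0.006, n)   # g biomass / g glucose
y_ps   = rng.normal(0.464, 0.012, n)  # g ethanol / g glucose, base case

# Toy response: realized yield scales with growth relative to its nominal
# value, mimicking the reported dependence of yield on yeast growth.
yield_ps = y_ps * (mu_max / 0.30) * (y_xs / 0.12) ** 0.1

lo, hi = np.percentile(yield_ps, [2.5, 97.5])
print(f"Y_P/S 95% interval: {lo:.3f}-{hi:.3f} g/g")
```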

  14. Introducing uncertainty analysis of nucleation and crystal growth models in Process Analytical Technology (PAT) system design of crystallization processes.

    PubMed

    Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul

    2013-11-01

    This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
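
    Of the global methods named above, the standardized regression coefficients are the quickest to sketch: draw a Monte Carlo sample of the uncertain inputs, run the model, standardize inputs and output, and regress. In the minimal Python/NumPy sketch below a toy linear response stands in for the crystallization model, and the parameter names and distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # Monte Carlo sample size

# Uncertain inputs (illustrative): nucleation order b, growth order g,
# and a nucleation rate constant kb.
X = np.column_stack([
    rng.normal(2.0, 0.20, n),    # b
    rng.normal(1.5, 0.15, n),    # g
    rng.lognormal(0.0, 0.3, n),  # kb
])

# Stand-in scalar output, e.g. a mean crystal size from a toy response.
Y = 50 + 12 * X[:, 0] - 30 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 1, n)

# Standardize, then regress: the fitted coefficients are the SRCs, and
# their magnitudes rank the parameters by influence on the output.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Ys = (Y - Y.mean()) / Y.std()
src, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
for name, s in zip(["b", "g", "kb"], src):
    print(f"SRC({name}) = {s:+.3f}")
```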

  15. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods outperform maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
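
    A competing-risk life test of the kind simulated in the paper is easy to reproduce in outline: draw a failure time from each independent Weibull cause and record the minimum together with the cause that produced it. The sketch below assumes Python/NumPy; the shape and scale values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000  # units per simulated life test

# Two independent Weibull causes (shape k, scale lam); values illustrative.
k1, lam1 = 1.8, 1000.0  # wear-out failure mode
k2, lam2 = 0.9, 3000.0  # early-failure mode

t1 = lam1 * rng.weibull(k1, n)
t2 = lam2 * rng.weibull(k2, n)

t_obs = np.minimum(t1, t2)        # observed system failure time
cause = np.where(t1 <= t2, 1, 2)  # which risk caused each failure

print("share failing from cause 1:", np.mean(cause == 1))
print("median system life:", np.median(t_obs))
```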

  16. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

  17. A two-phase Poisson process model and its application to analysis of cancer mortality among A-bomb survivors.

    PubMed

    Ohtaki, Megu; Tonda, Tetsuji; Aihara, Kazuyuki

    2015-10-01

    We consider a two-phase Poisson process model where only early successive transitions are assumed to be sensitive to exposure. In the case where transition intensities are low, we derive analytically an approximate formula for the distribution of time to event for the excess hazard ratio (EHR) due to a single point exposure. The formula for the EHR is a polynomial in exposure dose. Since the formula contains no unknown parameters except the total number of stages, the number of exposure-sensitive stages, and a coefficient of exposure effect, it is easily applicable in a variety of situations where there is a possible latency time between a single point exposure and the occurrence of the event. Based on the multistage hypothesis of cancer, we formulate a radiation carcinogenesis model in which only some early consecutive stages of the process are sensitive to exposure, whereas later stages are not affected. An illustrative analysis using the proposed model is given for cancer mortality among A-bomb survivors. Copyright © 2015 Elsevier Inc. All rights reserved.
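
    For orientation, a generic multistage form consistent with this description (a sketch, not the paper's exact derivation) scales each of the k exposure-sensitive transition intensities by a factor (1 + cd), where c is the exposure-effect coefficient, so the excess hazard ratio expands as a polynomial of degree k in the dose d:

```latex
% Generic multistage sketch, not the paper's exact formula:
% k exposure-sensitive stages, dose d, exposure-effect coefficient c.
\mathrm{EHR}(d) = (1 + c\,d)^{k} - 1
               = \sum_{j=1}^{k} \binom{k}{j} (c\,d)^{j}
```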

  18. CFD Simulation On The Pressure Distribution For An Isolated Single-Story House With Extension: Grid Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.

    2018-04-01

    Damage due to wind-related disasters is increasing with global climate change. Many studies have examined the wind effects around low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters, and obtaining the optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must be performed to determine a suitable cell count for the final mesh, ensuring accurate results with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profiles and compared between the models. The results showed that the medium grid scheme can be used and produces results whose accuracy is comparable to the finer grid scheme, as the difference in CP values was found to be insignificant.

  19. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  20. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS)

    PubMed Central

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also high sensitivity and accuracy. PMID:22573960

  1. The physics of heavy quark distributions in hadrons: Collider tests

    NASA Astrophysics Data System (ADS)

    Brodsky, S. J.; Bednyakov, V. A.; Lykasov, G. I.; Smiesko, J.; Tokar, S.

    2017-03-01

    We present a review of the current understanding of the heavy quark distributions in the nucleon and their impact on collider physics. The origin of strange, charm and bottom quark pairs at high light-front (LF) momentum fractions in the hadron wavefunction, the "intrinsic" quarks, is reviewed. The determination of heavy-quark parton distribution functions (PDFs) is particularly significant for the analysis of hard processes at LHC energies. We show that a careful study of the inclusive production of open charm, and of the production of γ/Z/W particles accompanied by heavy jets at large transverse momenta, can give essential information on the intrinsic heavy quark (IQ) distributions. We also focus on theoretical predictions concerning other observables which are very sensitive to the intrinsic charm contribution to PDFs, including Higgs production at high xF and novel fixed target measurements which can be tested at the LHC.

  2. The physics of heavy quark distributions in hadrons: Collider tests

    DOE PAGES

    Brodsky, S. J.; Bednyakov, V. A.; Lykasov, G. I.; ...

    2016-12-18

    Here, we present a review of the current understanding of the heavy quark distributions in the nucleon and their impact on collider physics. The origin of strange, charm and bottom quark pairs at high light-front (LF) momentum fractions in the hadron wavefunction, the "intrinsic" quarks, is reviewed. The determination of heavy-quark parton distribution functions (PDFs) is particularly significant for the analysis of hard processes at LHC energies. We show that a careful study of the inclusive production of open charm, and of the production of γ/Z/W particles accompanied by heavy jets at large transverse momenta, can give essential information on the intrinsic heavy quark (IQ) distributions. We also focus on theoretical predictions concerning other observables which are very sensitive to the intrinsic charm contribution to PDFs, including Higgs production at high xF and novel fixed target measurements which can be tested at the LHC.

  3. Optimization of a stand-alone Solar PV-Wind-DG Hybrid System for Distributed Power Generation at Sagar Island

    NASA Astrophysics Data System (ADS)

    Roy, P. C.; Majumder, A.; Chakraborty, N.

    2010-10-01

    An estimate of a stand-alone solar PV and wind hybrid system for distributed power generation has been made based on the resources available at Sagar Island, a remote area far from the grid. Optimization and sensitivity analyses have been performed to evaluate the feasibility and size of the power generation unit. A comparison of the different modes of the hybrid system has been studied. It is estimated that the Solar PV-Wind-DG hybrid system provides a lower per-unit electricity cost. Capital investment is observed to be lower when the system runs with Wind-DG compared to Solar PV-DG.

  4. Vertical Transport of Aerosol Particles across Mountain Topography near the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Murray, J. J.; Schill, S.; Freeman, S.; Bertram, T. H.; Lefer, B. L.

    2015-12-01

    Transport of aerosol particles is known to affect air quality and is largely dependent on the characteristic topography of the surrounding region. To characterize this transport, aerosol number distributions were collected with an Ultra-High Sensitivity Aerosol Spectrometer (UHSAS, DMT) during the 2015 NASA Student Airborne Research Program (SARP) in and around the Los Angeles Basin in Southern California. Increases in particle number concentration and size were observed over mountainous terrain north of Los Angeles County. Chemical analysis and meteorological Lagrangian trajectories suggest orographic lifting processes, known as the "chimney effect". Implications for spatial transport and distribution will be discussed.

  5. Sensitivity and accuracy of hybrid fluorescence-mediated tomography in deep tissue regions.

    PubMed

    Rosenhain, Stefanie; Al Rawashdeh, Wa'el; Kiessling, Fabian; Gremse, Felix

    2017-09-01

    Fluorescence-mediated tomography (FMT) enables noninvasive assessment of the three-dimensional distribution of near-infrared fluorescence in mice. The combination with micro-computed tomography (µCT) provides anatomical data, enabling improved fluorescence reconstruction and image analysis. The aim of our study was to assess the sensitivity and accuracy of µCT-FMT under realistic in vivo conditions in deeply seated regions. Accordingly, we acquired fluorescence reflectance images (FRI) and µCT-FMT scans of mice prepared with rectal insertions containing different amounts of fluorescent dye. Default and high-sensitivity scans were acquired and background signal was analyzed for three FMT channels (670 nm, 745 nm, and 790 nm). Analysis was performed for the original and an improved FMT reconstruction using the µCT data. While FRI and the original FMT reconstruction could detect 100 pmol, the improved FMT reconstruction could detect 10 pmol and significantly improved signal localization. By using a finer sampling grid and increasing the exposure time, the sensitivity could be further improved to detect 0.5 pmol. Background signal was highest in the 670 nm channel and most prominent in the gastro-intestinal tract and in organs with high relative amounts of blood. In conclusion, we show that µCT-FMT allows sensitive and accurate assessment of fluorescence in deep tissue regions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. On Learning Cluster Coefficient of Private Networks

    PubMed Central

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2013-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient or modularity often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach using the clustering coefficient, which is a popular statistic in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show the developed divide and conquer approach outperforms the direct approach. PMID:24429843
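
    The perturbation step described here reduces to adding Laplace noise of scale sensitivity/εi to each unit computation before recombining. A minimal Python/NumPy sketch for a ratio of two counts (roughly the shape of a clustering-coefficient computation) follows; the counts, sensitivities, and budget split are placeholders, and a real graph statistic would need the smooth-sensitivity calibration discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_release(value, sensitivity, eps):
    """Release a value under eps-differential privacy via Laplace noise."""
    return value + rng.laplace(0.0, sensitivity / eps)

# Divide and conquer: f = f1 / f2, with the total privacy budget split
# across the two unit computations.
eps_total = 1.0
eps1, eps2 = eps_total / 2, eps_total / 2

f1_true, f1_sens = 340.0, 1.0  # e.g. a count with assumed sensitivity 1
f2_true, f2_sens = 910.0, 1.0  # e.g. another count, assumed sensitivity 1

f1_noisy = laplace_release(f1_true, f1_sens, eps1)
f2_noisy = laplace_release(f2_true, f2_sens, eps2)
print("perturbed ratio:", f1_noisy / f2_noisy)
```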

  7. Sensitivity of Rabbit Ventricular Action Potential and Ca2+ Dynamics to Small Variations in Membrane Currents and Ion Diffusion Coefficients

    PubMed Central

    Lo, Yuan Hung; Peachey, Tom; Abramson, David; McCulloch, Andrew

    2013-01-01

    Little is known about how small variations in ionic currents and Ca2+ and Na+ diffusion coefficients impact action potential and Ca2+ dynamics in rabbit ventricular myocytes. We applied sensitivity analysis to quantify the sensitivity of the Shannon et al. model (Biophys. J., 2004) to 5%-10% changes in current conductances, channel distributions, and ion diffusion in rabbit ventricular cells. We found that action potential duration and Ca2+ peaks are highly sensitive to a 10% increase in L-type Ca2+ current; moderately influenced by 10% increases in the Na+-Ca2+ exchanger, the Na+-K+ pump, the rapid delayed and slow transient outward K+ currents, and the Cl− background current; and insensitive to 10% increases in all other ionic currents and sarcoplasmic reticulum Ca2+ fluxes. Cell electrical activity is strongly affected by a 5% shift of L-type Ca2+ channels and the Na+-Ca2+ exchanger between junctional and submembrane spaces, while Ca2+-activated Cl−-channel redistribution has a modest effect. Small changes in submembrane and cytosolic diffusion coefficients for Ca2+, but not in Na+ transfer, may notably alter myocyte contraction. Our studies highlight the need for more precise measurements and further extension and testing of the Shannon et al. model. Our results demonstrate the usefulness of sensitivity analysis for identifying specific knowledge gaps and controversies related to ventricular cell electrophysiology and Ca2+ signaling. PMID:24222910
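
    The relative sensitivities quantified here come from one-at-a-time perturbations: scale a single parameter by a small fraction, rerun the model, and normalize the fractional output change by the fractional parameter change. The sketch below uses a toy power-law stand-in for the action potential model; the function, exponents, and parameter names are assumptions.

```python
def apd_model(g_cal, g_ncx):
    """Toy stand-in for an AP model output (action potential duration, ms)."""
    return 200.0 * g_cal ** 0.6 * g_ncx ** -0.15

def relative_sensitivity(model, base, name, delta=0.10):
    """Percent change in output per percent change in one parameter."""
    bumped = dict(base, **{name: base[name] * (1 + delta)})
    y0, y1 = model(**base), model(**bumped)
    return (y1 - y0) / y0 / delta

base = {"g_cal": 1.0, "g_ncx": 1.0}
for p in base:
    print(p, f"{relative_sensitivity(apd_model, base, p):+.2f}")
```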

  8. Molecular analysis of tumor margins by MALDI mass spectrometry in renal carcinoma.

    PubMed

    Oppenheimer, Stacey R; Mi, Deming; Sanders, Melinda E; Caprioli, Richard M

    2010-05-07

    The rate of tumor recurrence post resection suggests that there are underlying molecular changes in nearby histologically normal tissue that go undetected by conventional diagnostic methods that utilize contrast agents and immunohistochemistry. MALDI MS is a molecular technology that has the specificity and sensitivity to monitor and identify molecular species indicative of these changes. The current study utilizes this technology to assess molecular distributions within a tumor and adjacent normal tissue in clear cell renal cell carcinoma biopsies. Results indicate that the histologically normal tissue adjacent to the tumor expresses many of the molecular characteristics of the tumor. Proteins of the mitochondrial electron transport system are examples of such distributions. This work demonstrates the utility of MALDI MS for the analysis of tumor tissue in the elucidation of aberrant molecular changes in the tumor microenvironment.

  9. Qualitative human body composition analysis assessed with bioelectrical impedance.

    PubMed

    Talluri, T

    1998-12-01

    Body composition analysis generally aims at quantitative estimates of fat mass, which are inadequate for assessing nutritional states; these are, on the other hand, well defined by the proportion of extracellular mass to body cell mass (ECM/BCM). Direct measurements performed with phase-sensitive bioelectrical impedance analyzers can be used to define the current distribution in normal and abnormal populations. The phase angle and reactance nomogram directly reflects the ECM/BCM proportions, and body impedance analysis (BIA) is also validated to estimate the individual content of body cell mass (BCM). A new body cell mass index (BCMI), obtained by dividing the weight of BCM in kilograms by the body surface in square meters, is compared with the scatterplot distribution of phase angle and reactance values obtained from controls and patients, and is proposed as a qualitative approach to identify abnormal ECM/BCM ratios and nutritional states.

  10. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  11. Simulation studies of wide and medium field of view earth radiation data analysis

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1978-01-01

    A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.

  12. Blade design and analysis using a modified Euler solver

    NASA Technical Reports Server (NTRS)

    Leonard, O.; Vandenbraembussche, R. A.

    1991-01-01

    An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method converges rapidly and indicates how sensitive the flow is to small modifications of the blade geometry, which the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance outside the design point as well.

  13. Characterizing the Sensitivity of Groundwater Storage to Climate variation in the Indus Basin

    NASA Astrophysics Data System (ADS)

    Huang, L.; Sabo, J. L.

    2017-12-01

    The Indus Basin represents an extensive groundwater aquifer facing the challenge of effective management of limited water resources. Groundwater storage is one of the most important variables of the water balance, yet its sensitivity to climate change has rarely been explored. To better estimate present and future groundwater storage and its sensitivity to climate change in the Indus Basin, we analyzed groundwater recharge/discharge and their historical evolution in this basin. Several methods are applied to characterize the aquifer system, including water level change and storativity estimates, gravity estimates (GRACE), flow modeling (MODFLOW), water budget analysis, and extrapolation. In addition, all of the socioeconomic and engineering aspects are represented in the hydrological system through changes in the temporal and spatial distributions of recharge and discharge (e.g., land use, crop structure, water allocation, etc.). Our results demonstrate that the direct impacts of climate change will result in unevenly distributed but increasing groundwater storage in the short term through groundwater recharge. In contrast, long-term groundwater storage will decrease as a result of combined indirect and direct impacts of climate change (e.g., recharge/discharge and human activities). The sensitivity of groundwater storage to climate variation is characterized by topography, aquifer specifics, and land use. Furthermore, by comparing possible outcomes of different human intervention scenarios, our study reveals that human activities play an important role in shaping the sensitivity of groundwater storage to climate variation. Overall, this study presents the feasibility and value of using integrated hydrological methods to support sustainable water resource management under climate change.

  14. [Individual variation in the frequency of chromosome aberrations under the influence of chemical mutagens. I. Inter-cultural and inter-individual variations in the effect of mutagens on human lymphocytes].

    PubMed

    Iakovenko, K N; Tarusina, T O

    1976-01-01

    The distribution law of human peripheral blood cultures with respect to sensitivity to thiophosphamide was studied. In the first experiment the blood of one person was used; in the second, blood from different persons. "The percent of aberrant cells" and "the number of chromosome breaks per 100 cells" were scored. The distribution of the cultures in all the experiments was found to be normal. Analysis of the variances of the percent of aberrant cells showed that the distribution of the cultures obtained from one donor corresponded to the binomial distribution, and that of the cultures obtained from different donors to the Poisson distribution.

  15. Temperature-compensated distributed hydrostatic pressure sensor with a thin-diameter polarization-maintaining photonic crystal fiber based on Brillouin dynamic gratings.

    PubMed

    Teng, Lei; Zhang, Hongying; Dong, Yongkang; Zhou, Dengwang; Jiang, Taofei; Gao, Wei; Lu, Zhiwei; Chen, Liang; Bao, Xiaoyi

    2016-09-15

    A temperature-compensated distributed hydrostatic pressure sensor based on Brillouin dynamic gratings (BDGs) is proposed and demonstrated experimentally for the first time, to the best of our knowledge. The principle is to measure the hydrostatic pressure induced birefringence changes through exciting and probing the BDGs in a thin-diameter pure silica polarization-maintaining photonic crystal fiber. The temperature cross-talk to the hydrostatic pressure sensing can be compensated through measuring the temperature-induced Brillouin frequency shift (BFS) changes using Brillouin optical time-domain analysis. A distributed measurement of hydrostatic pressure is demonstrated experimentally using a 4-m sensing fiber, which has a high sensitivity, with a maximum measurement error less than 0.03 MPa at a 20-cm spatial resolution.

  16. Software for illustrative presentation of basic clinical characteristics of laboratory tests--GraphROC for Windows.

    PubMed

    Kairisto, V; Poola, A

    1995-01-01

    GraphROC for Windows is a program for clinical test evaluation. It was designed for the handling of large datasets obtained from clinical laboratory databases. In the user interface, graphical and numerical presentations are combined. For simplicity, numerical data is not shown unless requested. Relevant numbers can be "picked up" from the graph by simple mouse operations. Reference distributions can be displayed by using automatically optimized bin widths. Any percentile of the distribution with corresponding confidence limits can be chosen for display. In sensitivity-specificity analysis, both illness- and health-related distributions are shown in the same graph. The following data for any cutoff limit can be shown in a separate click window: clinical sensitivity and specificity with corresponding confidence limits, positive and negative likelihood ratios, positive and negative predictive values and efficiency. Predictive values and clinical efficiency of the cutoff limit can be updated for any prior probability of disease. Receiver Operating Characteristics (ROC) curves can be generated and combined into the same graph for comparison of several different tests. The area under the curve with corresponding confidence interval is calculated for each ROC curve. Numerical results of analyses and graphs can be printed or exported to other Microsoft Windows programs. GraphROC for Windows also employs a new method, developed by us, for the indirect estimation of health-related limits and change limits from mixed distributions of clinical laboratory data.
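
    The sensitivity-specificity display described above amounts to sweeping a cutoff across the health- and illness-related distributions; the ROC curve and its area follow directly. A minimal sketch assuming Python/NumPy, with synthetic distributions in place of laboratory data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "health-related" and "illness-related" test value distributions.
healthy = rng.normal(10.0, 2.0, 1000)
ill = rng.normal(14.0, 3.0, 400)

# Sweep cutoffs: sensitivity = P(ill > cut), specificity = P(healthy <= cut).
cuts = np.linspace(healthy.min(), ill.max(), 200)
sens = np.array([(ill > c).mean() for c in cuts])
spec = np.array([(healthy <= c).mean() for c in cuts])

# Cutoff maximizing Youden's J = sensitivity + specificity - 1.
best = np.argmax(sens + spec - 1)
print(f"cutoff {cuts[best]:.2f}: sens {sens[best]:.2f}, spec {spec[best]:.2f}")

# Area under the ROC curve via the Mann-Whitney pairwise estimator.
auc = (ill[:, None] > healthy[None, :]).mean()
print(f"AUC = {auc:.3f}")
```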

  17. Birth weight, current anthropometric markers, and high sensitivity C-reactive protein in Brazilian school children.

    PubMed

    Boscaini, Camile; Pellanda, Lucia Campos

    2015-01-01

    Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, the General Linear Model, and the General Linear Model for the Gamma Distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, resp.). Large-for-gestational-age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small-for-gestational-age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Large-for-gestational-age status altered high sensitivity C-reactive protein and constituted an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.

  18. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the dominant process at each location on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the model complexity apparent to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.

  19. Prospective trial evaluating the sensitivity and specificity of 3,4-dihydroxy-6-[18F]-fluoro-L-phenylalanine (18F-DOPA) PET and MRI in patients with recurrent gliomas.

    PubMed

    Youland, Ryan S; Pafundi, Deanna H; Brinkmann, Debra H; Lowe, Val J; Morris, Jonathan M; Kemp, Bradley J; Hunt, Christopher H; Giannini, Caterina; Parney, Ian F; Laack, Nadia N

    2018-05-01

    Treatment-related changes can be difficult to differentiate from progressive glioma using MRI with contrast (CE). The purpose of this study is to compare the sensitivity and specificity of 18F-DOPA-PET and MRI in patients with recurrent glioma. Thirteen patients with MRI findings suspicious for recurrent glioma were prospectively enrolled and underwent 18F-DOPA-PET and MRI for neurosurgical planning. Stereotactic biopsies were obtained from regions of concordant and discordant PET and MRI CE, all within regions of T2/FLAIR signal hyperintensity. The sensitivity and specificity of 18F-DOPA-PET and CE were calculated based on histopathologic analysis. Receiver operating characteristic curve analysis revealed optimal tumor to normal (T/N) and SUVmax thresholds. In the 37 specimens obtained, 51% exhibited MRI contrast enhancement (M+) and 78% demonstrated 18F-DOPA-PET avidity (P+). Imaging characteristics included M-P- in 16%, M-P+ in 32%, M+P+ in 46% and M+P- in 5%. Histopathologic review of biopsies revealed grade II components in 16%, grade III in 43%, grade IV in 30% and no tumor in 11%. MRI CE sensitivity for recurrent tumor was 52% and specificity was 50%. PET sensitivity for tumor was 82% and specificity was 50%. A T/N threshold > 2.0 altered sensitivity to 76% and specificity to 100% and SUVmax > 1.36 improved sensitivity and specificity to 94 and 75%, respectively. 18F-DOPA-PET can provide increased sensitivity and specificity compared with MRI CE for visualizing the spatial distribution of recurrent gliomas. Future studies will incorporate 18F-DOPA-PET into re-irradiation target volume delineation for RT planning.

  20. Real-Time Distributed Algorithms for Visual and Battlefield Reasoning

    DTIC Science & Technology

    2006-08-01

    High-Level Task Definition Language, Graphical User Interface (GUI), Story Analysis, Story Interpretation, SensIT Nodes. ...or more actions to be taken in the event the conditions are satisfied. We developed graphical user interfaces that may be used to express such tasks.

  1. Surveying Future Surveys

    NASA Astrophysics Data System (ADS)

    Carlstrom, John E.

    2016-06-01

    The now standard model of cosmology has been tested and refined by the analysis of increasingly sensitive, large astronomical surveys, especially with statistically significant millimeter-wave surveys of the cosmic microwave background and optical surveys of the distribution of galaxies. This talk will offer a glimpse of the future, which promises an acceleration of this trend with cosmological information coming from new surveys across the electromagnetic spectrum as well as particles and even gravitational waves.

  2. Fall 2014 SEI Research Review Probabilistic Analysis of Time Sensitive Systems

    DTIC Science & Technology

    2014-10-28

    Osmosis SMC Tool: Osmosis is a tool for Statistical Model Checking (SMC) with Semantic Importance Sampling. The input model is written in a subset of C; ASSERT() statements in the model indicate conditions that must hold; input probability distributions are defined by the user. Osmosis returns the ... based on a target relative error or a set number of simulations. (http://dreal.cs.cmu.edu/)

  3. Prevalence and trends of infection with Mycobacterium tuberculosis in Djibouti, testing an alternative method.

    PubMed

    Trébucq, A; Guérin, N; Ali Ismael, H; Bernatas, J J; Sèvre, J P; Rieder, H L

    2005-10-01

    Djibouti, 1994 and 2001. To estimate the prevalence of tuberculosis (TB) and average annual risk of TB infection (ARTI) and trends, and to test a new method for calculations. Tuberculin surveys among schoolchildren and sputum smear-positive TB patients. Prevalence of infection was calculated using cut-off points, the mirror image technique, mixture analysis, and a new method based on the operating characteristics of the tuberculin test. Test sensitivity was derived from tuberculin reactions among TB patients and test specificity from a comparison of reaction size distributions among children with and without a BCG scar. The ARTI was estimated to lie between 2.6% and 3.1%, with no significant changes between 1994 and 2001. The close match of the distributions between children tested in 1994 and patients justifies the utilisation of the latter to determine test sensitivity. This new method gave very consistent estimates of prevalence of infection for any induration for values between 15 and 20 mm. Specificity was successfully determined for 1994, but not for 2001. Mixture analysis confirmed the estimates obtained with the new method. Djibouti has a high ARTI, and no apparent change over the observation time was found. Using operating test characteristics to estimate prevalence of infection looks promising.
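
    A standard way to turn the operating characteristics of a test into a prevalence estimate is the Rogan-Gladen adjustment sketched below; it illustrates the general idea of correcting an apparent prevalence for test sensitivity and specificity, and is not necessarily the exact method evaluated in the paper. All numbers are illustrative.

```python
def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen adjustment of apparent prevalence for test error."""
    p = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(p, 0.0), 1.0)  # clamp to the [0, 1] interval

# Illustrative inputs: 18% of children above the induration cutoff, with
# assumed tuberculin test sensitivity 0.95 and specificity 0.90.
print(true_prevalence(apparent=0.18, sensitivity=0.95, specificity=0.90))
```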

  4. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  5. Modelling ecological and human exposure to POPs in Venice lagoon - Part II: Quantitative uncertainty and sensitivity analysis in coupled exposure models.

    PubMed

    Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio

    2016-11-01

    The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty on the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed through the distribution functions describing the input data, and its effect on model results has been assessed by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based EFAST method). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The experiment and analysis of tailoring V_L and I_P with a ZnO voltage-sensitive resistor on HT-6M

    NASA Astrophysics Data System (ADS)

    Fan, Shuping; Liu, Baohua; Ye, Minyou; Luo, Jiarong

    1992-12-01

    The idea of improving the plateau with a ZnO 'varistor' (voltage-sensitive resistor) is presented. The result of the V_L and I_P tailoring experiment on the HT-6M tokamak is introduced. An improved plateau of some tens of milliseconds was achieved (ΔV_L/V_L < 5%, ΔI_p/I_p < 5%, ΔN_e/N_e < 10%). Clearly, a constant distribution of temperature and density is of great importance for many diagnostic measurements and further physics experiments. A simplified analysis of the actual poloidal circuit of HT-6M is given. The numerical simulation and the experimental results are compared. The operating principle of the varistor and its application to iron-core transformer tokamaks in the plateau and rising phases are discussed.

  7. Trace and surface analysis of ceramic layers of solid oxide fuel cells by mass spectrometry.

    PubMed

    Becker, J S; Breuer, U; Westheide, J; Saprykin, A I; Holzbrecher, H; Nickel, H; Dietze, H J

    1996-06-01

    For the trace analysis of impurities in thick ceramic layers of a solid oxide fuel cell (SOFC), sensitive solid-state mass spectrometric methods, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and radiofrequency glow discharge mass spectrometry (rf-GDMS), have been developed and used. In order to quantify the analytical results of LA-ICP-MS, the relative sensitivity coefficients of elements in a La(0.6)Sr(0.35)MnO(3) matrix have been determined using synthetic standards. Secondary ion mass spectrometry (SIMS), as a surface analytical method, has been used to characterize the element distribution and diffusion profiles of matrix elements at the interface of a perovskite/Y-stabilized ZrO(2) layer. The application of different mass spectrometric methods for process control in the preparation of ceramic layers for the SOFC is described.

  8. Brillouin Optical Correlation Domain Analysis in Composite Material Beams

    PubMed Central

    Stern, Yonatan; London, Yosef; Preter, Eyal; Antman, Yair; Diamandi, Hilel Hagai; Silbiger, Maayan; Adler, Gadi; Shalev, Doron; Zadok, Avi

    2017-01-01

    Structural health monitoring is a critical requirement in many composites. Numerous monitoring strategies rely on measurements of temperature or strain (or both), however these are often restricted to point-sensing or to the coverage of small areas. Spatially-continuous data can be obtained with optical fiber sensors. In this work, we report high-resolution distributed Brillouin sensing over standard fibers that are embedded in composite structures. A phase-coded, Brillouin optical correlation domain analysis (B-OCDA) protocol was employed, with spatial resolution of 2 cm and sensitivity of 1 K or 20 micro-strain. A portable measurement setup was designed and assembled on the premises of a composite structures manufacturer. The setup was successfully utilized in several structural health monitoring scenarios: (a) monitoring the production and curing of a composite beam over 60 h; (b) estimating the stiffness and Young’s modulus of a composite beam; and (c) distributed strain measurements across the surfaces of a model wing of an unmanned aerial vehicle. The measurements are supported by the predictions of structural analysis calculations. The results illustrate the potential added values of high-resolution, distributed Brillouin sensing in the structural health monitoring of composites. PMID:28974041

  9. Brillouin Optical Correlation Domain Analysis in Composite Material Beams.

    PubMed

    Stern, Yonatan; London, Yosef; Preter, Eyal; Antman, Yair; Diamandi, Hilel Hagai; Silbiger, Maayan; Adler, Gadi; Levenberg, Eyal; Shalev, Doron; Zadok, Avi

    2017-10-02

Structural health monitoring is a critical requirement in many composites. Numerous monitoring strategies rely on measurements of temperature or strain (or both); however, these are often restricted to point-sensing or to the coverage of small areas. Spatially-continuous data can be obtained with optical fiber sensors. In this work, we report high-resolution distributed Brillouin sensing over standard fibers that are embedded in composite structures. A phase-coded, Brillouin optical correlation domain analysis (B-OCDA) protocol was employed, with spatial resolution of 2 cm and sensitivity of 1 K or 20 micro-strain. A portable measurement setup was designed and assembled on the premises of a composite structures manufacturer. The setup was successfully utilized in several structural health monitoring scenarios: (a) monitoring the production and curing of a composite beam over 60 h; (b) estimating the stiffness and Young's modulus of a composite beam; and (c) distributed strain measurements across the surfaces of a model wing of an unmanned aerial vehicle. The measurements are supported by the predictions of structural analysis calculations. The results illustrate the potential added values of high-resolution, distributed Brillouin sensing in the structural health monitoring of composites.
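
    The reported sensitivity follows from the linear dependence of the Brillouin frequency shift on temperature and strain. A minimal sketch, assuming typical coefficients for standard single-mode fibre (roughly 1 MHz/K and 0.05 MHz per microstrain; the exact values depend on the fibre and are not taken from the paper):

```python
# Convert a measured Brillouin frequency shift into temperature or strain.
# Coefficients are typical literature values for standard single-mode fibre
# (assumed here, not quoted from the paper).
C_T = 1.0e6      # Hz per kelvin
C_EPS = 0.05e6   # Hz per microstrain

def as_temperature(shift_hz):
    """Temperature change, assuming strain is held constant."""
    return shift_hz / C_T

def as_strain(shift_hz):
    """Strain change, assuming temperature is held constant."""
    return shift_hz / C_EPS

shift = 1.0e6  # a 1 MHz measured shift
print(f"{as_temperature(shift):.1f} K or {as_strain(shift):.0f} microstrain")
```

    With these coefficients a 1 MHz shift maps to 1 K or 20 microstrain, matching the sensitivity quoted above; in practice the two effects must be separated, for example with a strain-free reference fibre.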

  10. Tissue distribution of pretomanid in rat brain via mass spectrometry imaging.

    PubMed

    Shobo, Adeola; Bratkowska, Dominika; Baijnath, Sooraj; Naiker, Suhashni; Somboro, Anou M; Bester, Linda A; Singh, Sanil D; Naicker, Tricia; Kruger, Hendrik G; Govender, Thavendran

    2016-01-01

1. Matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) combines the sensitivity and selectivity of mass spectrometry with spatial analysis to provide a new dimension for histological analyses of the distribution of drugs in tissue. Pretomanid is a pro-drug belonging to a class of antibiotics known as nitroimidazoles, which have been proven to be active under hypoxic conditions; to the best of our knowledge, there have been no studies investigating the distribution and localisation of this class of compounds in the brain using MALDI MSI. 2. Herein, we report on the distribution of pretomanid in the healthy rat brain after intraperitoneal administration (20 mg/kg) using MALDI MSI. Our findings showed that the drug localises in specific compartments of the rat brain, viz. the corpus callosum, a dense network of neurons connecting the left and right cerebral hemispheres. 3. This study shows that the MALDI MSI technique has great potential for mapping the pretomanid distribution in uninfected tissue samples, without the need for molecular labelling.

  11. Three Dimensional Distribution of Sensitive Field and Stress Field Inversion of Force Sensitive Materials under Constant Current Excitation.

    PubMed

    Zhao, Shuanfeng; Liu, Min; Guo, Wei; Zhang, Chuanwei

    2018-02-28

Force sensitive conductive composite materials are functional materials which can be used as the sensitive material of force sensors. However, existing sensors use only the one-dimensional electrical properties of force sensitive conductive materials. Even in tactile sensors, the measurement of contact pressure is achieved by large-scale arrays, and the units of such arrays are likewise based on the one-dimensional electrical properties of force sensitive materials. The main contribution of this work is to study the three-dimensional electrical properties of a force sensitive material (conductive rubber) and a method for inverting its three-dimensional stress field, which pushes the application of force sensitive materials from one dimension to three dimensions. First, the mathematical model of the current field distribution in conductive rubber under a constant force is established by effective medium theory, and current field distribution models for different geometries, conductive rubber contents, and relaxation parameters are derived. Second, the inversion method for the three-dimensional stress field of conductive rubber is established, which provides a theoretical basis for the design of new tactile sensors and the measurement of three-dimensional stress fields and spatial forces based on force sensitive materials.

  12. Self-assembled two-dimensional gold nanoparticle film for sensitive nontargeted analysis of food additives with surface-enhanced Raman spectroscopy.

    PubMed

    Wu, Yiping; Yu, Wenfang; Yang, Benhong; Li, Pan

    2018-05-15

Different food additives and their active metabolites have been found to cause serious problems for human health. Thus, considering these potential health effects, developing a sensitive and credible analytical method for different foods is important. Herein, the application of solvent-driven self-assembled Au nanoparticles (Au NPs) for the rapid and sensitive detection of food additives in different commercial products is reported. The assembled substrates are highly sensitive and exhibit excellent uniformity and reproducibility because of uniformly distributed, high-density hot spots. Sensitive analyses of ciprofloxacin (CF), diethylhexyl phthalate (DEHP), tartrazine, and azodicarbonamide at the 0.1 ppm level using this surface-enhanced Raman spectroscopy (SERS) substrate are presented, and the results show that Au NP arrays can serve as efficient SERS substrates for the detection of food additives. More importantly, SERS spectra of several commercial liquors and sweet drinks are obtained to screen for illegal additives. This SERS-active platform can be used as an effective strategy for detecting prohibited additives in food.

  13. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.

Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.

  14. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE PAGES

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.; ...

    2017-12-07

Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.
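
    The Weibull description of the onset region can be reproduced by fitting a two-parameter CDF to per-position latchup probabilities. A minimal sketch with hypothetical energy/probability data (the functional form is standard; the numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical latchup probabilities vs laser pulse energy (nJ) at one position
energy = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
p_latch = np.array([0.00, 0.05, 0.30, 0.65, 0.90, 0.98, 1.00])

def weibull_cdf(E, eta, beta):
    # eta: scale (energy where P ~ 63%); beta: shape (abruptness of the onset)
    return 1.0 - np.exp(-(E / eta) ** beta)

(eta, beta), _ = curve_fit(weibull_cdf, energy, p_latch, p0=[2.0, 3.0])
print(f"scale eta = {eta:.2f} nJ, shape beta = {beta:.2f}")
```

    A larger fitted shape parameter corresponds to the more abrupt low-to-high transition reported near the source contacts.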

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, E.P.; Johnson, K.I.; Simonen, F.A.

The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of the calculated through-wall flaw probability to material, flaw, and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. The sensitivities examined included flaw shape, through-wall flaw position, and flaw inspection. Material property sensitivities included the assumed distributions of copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, the flaw jump distance for each computer simulation, and the added error in estimating irradiated properties caused by the trend curve correlation error.

  16. Higher moments of net-proton multiplicity distributions in a heavy-ion event pile-up scenario

    NASA Astrophysics Data System (ADS)

    Garg, P.; Mishra, D. K.

    2017-10-01

High-luminosity modern accelerators, like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), inherently have event pile-up, which contributes significantly to physics events as a background. While state-of-the-art tracking algorithms and detector concepts address such pile-up, several offline analytical techniques are used to remove these events from the physics analysis. It is still difficult to identify the remaining pile-up events in an event sample. Since the fraction of these events is small, it may not be as serious an issue for other analyses as it is for event-by-event analysis; particularly when the characteristics of the multiplicity distribution are the observables, one needs to be very careful. In the present work, we demonstrate how a small fraction of residual pile-up events can change the moments, and their ratios, of the event-by-event net-proton multiplicity distribution, which are sensitive to the dynamical fluctuations due to the QCD critical point. For this study, we assume that the individual event-by-event proton and antiproton multiplicity distributions follow Poisson, negative binomial, or binomial distributions. We observe a significant effect on the cumulants and their ratios of net-proton multiplicity distributions due to pile-up events, particularly at lower energies. It might be crucial to estimate the fraction of pile-up events in the data sample when interpreting the experimental observables for the critical point.
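
    The mechanism is easy to reproduce numerically: contaminate a sample of single-collision net-proton numbers with a small fraction of merged (pile-up) events and compare cumulants. A sketch assuming Poisson-distributed proton and antiproton multiplicities (the means and the pile-up fraction are illustrative assumptions):

```python
import numpy as np
from scipy.stats import kstat  # unbiased cumulant (k-statistic) estimator

rng = np.random.default_rng(1)
n_events, alpha = 200_000, 0.02      # alpha: residual pile-up fraction (assumed)
lam_p, lam_pbar = 6.0, 2.0           # Poisson means for p and pbar (assumed)

def net_protons(n):
    return rng.poisson(lam_p, n) - rng.poisson(lam_pbar, n)

piled = net_protons(n_events)
mask = rng.random(n_events) < alpha
piled[mask] += net_protons(mask.sum())   # two collisions recorded as one event

clean = net_protons(n_events)
for order in range(1, 5):
    print(f"C{order}:  clean = {kstat(clean, order):8.3f}   "
          f"with pile-up = {kstat(piled, order):8.3f}")
```

    Even a 2% contamination visibly shifts the higher cumulants, which is the kind of distortion the authors quantify.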

  17. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind it is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure, and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves the computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby enriches the theory and methods of mechanical reliability design.

  18. Polarization in Quarkonium Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russ, James S.

Production mechanisms for quarkonium states in hadronic collisions remain difficult to understand. The decay angular distributions of J/ψ or Υ(nS) states into μ+μ− final states are sensitive to the matrix elements in the production process and provide a unique tool to evaluate different models. This talk will focus on new results for the spin alignment of Υ(nS) states produced in pp̄ collisions at √s = 1.96 TeV using the CDF II detector at the Fermilab Tevatron. The data sample corresponds to an integrated luminosity of 6.7 fb⁻¹. The angular distributions are analyzed as functions of the transverse momentum of the dimuon final state in both the Collins-Soper and the s-channel helicity frames using a unique data-driven background determination method. Consistency of the analysis is checked by comparing frame-invariant quantities derived from parametrizations of the angular distributions measured in each choice of reference frame. This analysis is the first to quantify the complete three-dimensional angular distribution of Υ(1S), Υ(2S), and Υ(3S) decays. The decays are nearly isotropic in all frames, even when produced with large transverse momentum.

  19. Hydrologic sensitivity of headwater catchments to climate and landscape variability

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; Wagener, Thorsten; McGlynn, Brian; Nippgen, Fabian; Jencso, Kelsey

    2013-04-01

    Headwater streams cumulatively represent an extensive portion of the United States stream network, yet remain largely unmonitored and unmapped. As such, we have limited understanding of how these systems will respond to change, knowledge that is important for preserving these unique ecosystems, the services they provide, and the biodiversity they support. We compare responses across five adjacent headwater catchments located in Tenderfoot Creek Experimental Forest in Montana, USA, to understand how local differences may affect the sensitivity of headwaters to change. We utilize global, variance-based sensitivity analysis to understand which aspects of the physical system (e.g., vegetation, topography, geology) control the variability in hydrologic behavior across these basins, and how this varies as a function of time (and therefore climate). Basin fluxes and storages, including evapotranspiration, snow water equivalent and melt, soil moisture and streamflow, are simulated using the Distributed Hydrology-Vegetation-Soil Model (DHSVM). Sensitivity analysis is applied to quantify the importance of different physical parameters to the spatial and temporal variability of different water balance components, allowing us to map similarities and differences in these controls through space and time. Our results show how catchment influences on fluxes vary across seasons (thus providing insight into transferability of knowledge in time), and how they vary across catchments with different physical characteristics (providing insight into transferability in space).
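
    Variance-based (Sobol') sensitivity analysis of this kind can be prototyped with the SALib package (newer SALib releases expose SALib.sample.sobol in place of saltelli). A sketch in which a toy streamflow response stands in for DHSVM; the parameter names, bounds, and response function are assumptions for illustration:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lai", "soil_depth", "lapse_rate"],
    "bounds": [[0.5, 6.0], [0.2, 2.0], [-8.0, -4.0]],
}

def toy_streamflow(x):
    # stand-in for a DHSVM simulation; includes one interaction term
    lai, depth, lapse = x
    return 10.0 - 0.8 * lai + 3.0 * depth + 0.5 * depth * lapse

X = saltelli.sample(problem, 1024)            # Sobol' sampling design
Y = np.apply_along_axis(toy_streamflow, 1, X)
Si = sobol.analyze(problem, Y)                # variance decomposition
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:10s}  first-order = {s1:5.2f}   total = {st:5.2f}")
```

    Repeating such an analysis per season and per catchment, as the authors do, amounts to re-evaluating the output on different model responses and comparing the resulting index maps in space and time.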

  20. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

The uncertainty in volcanic ash forecasts depends on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments reduce the risks associated with volcanic eruptions, and for this reason analyses that quantify the effect of each input parameter on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure, and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation model, and the diffusion coefficient, perform thousands of simulations, and analyze the results. The study is carried out for two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns rising up to several kilometers above sea level and lasted some hours. The sensitivity analysis and uncertainty estimation results help identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.
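
    The Monte Carlo part of such an approach reduces to sampling the uncertain inputs and propagating them through the dispersal model. A sketch with a toy surrogate in place of PUFF; the distributions, ranges, and the surrogate formula are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
mass   = rng.lognormal(np.log(1e9), 0.5, n)   # erupted mass, kg (assumed)
height = rng.uniform(9e3, 15e3, n)            # column top, m a.s.l. (assumed)
diff   = rng.uniform(1e2, 5e3, n)             # diffusion coefficient, m^2/s

# toy surrogate for a far-field ash concentration; not the PUFF physics
conc = mass * 1e-12 * (height / 1e4) ** 2 / np.sqrt(diff)

q05, q50, q95 = np.quantile(conc, [0.05, 0.50, 0.95])
print(f"median = {q50:.3g}, 90% uncertainty band = [{q05:.3g}, {q95:.3g}]")
```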

  1. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples, thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities, to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
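
    The recommended binomial-likelihood random-effects model can be fitted directly by maximum likelihood, integrating the logit-normal random effect with Gauss-Hermite quadrature. A univariate sketch for sensitivity only; the study counts are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# hypothetical studies: true positives y out of n diseased subjects
y = np.array([20, 15, 30,  8, 42])
n = np.array([25, 20, 33, 10, 50])

nodes, weights = np.polynomial.hermite.hermgauss(30)

def neg_loglik(params):
    mu, log_tau = params
    tau = np.exp(log_tau)
    # integrate Binomial(y; n, expit(mu + u)) over u ~ N(0, tau^2), per study
    p = expit(mu + np.sqrt(2.0) * tau * nodes[None, :])
    logbin = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1))[:, None] \
             + y[:, None] * np.log(p) + (n - y)[:, None] * np.log1p(-p)
    lik = np.exp(logbin) @ weights / np.sqrt(np.pi)
    return -np.sum(np.log(lik))

res = minimize(neg_loglik, x0=[1.0, -1.0], method="Nelder-Mead")
mu_hat, tau_hat = res.x[0], np.exp(res.x[1])
print(f"summary sensitivity = {expit(mu_hat):.3f}, "
      f"between-study SD (logit scale) = {tau_hat:.3f}")
```

    Unlike the normal-approximation approach, nothing here breaks down when a study has very few events, which is the situation where the paper reports the largest discrepancies.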

  2. Co-occurrence patterns of trees along macro-climatic gradients and their potential influence on the present and future distribution of Fagus sylvatica L.

    USGS Publications Warehouse

    Meier, E.S.; Edwards, T.C.; Kienast, Felix; Dobbertin, M.; Zimmermann, N.E.

    2011-01-01

Aim: During recent and future climate change, shifts in large-scale species ranges are expected due to the hypothesized major role of climatic factors in regulating species distributions. The stress-gradient hypothesis suggests that biotic interactions may act as major constraints on species distributions under more favourable growing conditions, while climatic constraints may dominate under unfavourable conditions. We tested this hypothesis for one focal tree species having three major competitors using broad-scale environmental data. We evaluated the variation of species co-occurrence patterns in climate space and estimated the influence of these patterns on the distribution of the focal species for current and projected future climates. Location: Europe. Methods: We used ICP Forest Level 1 data as well as climatic, topographic and edaphic variables. First, correlations between the relative abundance of European beech (Fagus sylvatica) and three major competitor species (Picea abies, Pinus sylvestris and Quercus robur) were analysed in environmental space, and then projected to geographic space. Second, a sensitivity analysis was performed using generalized additive models (GAM) to evaluate where and how much the predicted F. sylvatica distribution varied under current and future climates if potential competitor species were included or excluded. We evaluated whether these areas coincide with current species co-occurrence patterns. Results: Correlation analyses supported the stress-gradient hypothesis: towards favourable growing conditions of F. sylvatica, its abundance was strongly linked to the abundance of its competitors, while this link weakened towards unfavourable growing conditions, with stronger correlations in the south and at low elevations than in the north and at high elevations. The sensitivity analysis showed a potential spatial segregation of species with changing climate and a pronounced shift of zones where co-occurrence patterns may play a major role. Main conclusions: Our results demonstrate the importance of species co-occurrence patterns for calibrating improved species distribution models for use in projections of climate effects. The correlation approach is able to localize European areas where inclusion of biotic predictors is effective. The climate-induced spatial segregation of the major tree species could have ecological and economic consequences. © 2010 Blackwell Publishing Ltd.

  3. Hexicon 2: Automated Processing of Hydrogen-Deuterium Exchange Mass Spectrometry Data with Improved Deuteration Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  4. Hexicon 2: automated processing of hydrogen-deuterium exchange mass spectrometry data with improved deuteration distribution estimation.

    PubMed

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  5. Multivariate pattern analysis of fMRI data reveals deficits in distributed representations in schizophrenia

    PubMed Central

    Yoon, Jong H.; Tamir, Diana; Minzenberg, Michael J.; Ragland, J. Daniel; Ursu, Stefan; Carter, Cameron S.

    2009-01-01

Background: Multivariate pattern analysis is an alternative method of analyzing fMRI data, which is capable of decoding distributed neural representations. We applied this method to test the hypothesis of impaired distributed representations in schizophrenia. We also compared the results of this method with traditional GLM-based univariate analysis. Methods: Nineteen schizophrenia patients and 15 control subjects viewed two runs of stimuli (exemplars of faces, scenes, objects, and scrambled images). To verify engagement with the stimuli, subjects completed a 1-back matching task. A multi-voxel pattern classifier was trained to identify category-specific activity patterns on one run of fMRI data; classification testing was conducted on the remaining run. Correlation of voxel-wise activity across runs evaluated variance over time in activity patterns. Results: Patients performed the task less accurately. This group difference was reflected in the pattern analysis results, with diminished classification accuracy in patients compared to controls (59% and 72%, respectively). In contrast, there was no group difference in GLM-based univariate measures. In both groups, classification accuracy was significantly correlated with behavioral measures, and both groups showed highly significant correlation between inter-run correlations and classification accuracy. Conclusions: Distributed representations of visual objects are impaired in schizophrenia. This impairment is correlated with diminished task performance, suggesting that decreased integrity of cortical activity patterns is reflected in impaired behavior. Comparisons with univariate results suggest greater sensitivity of pattern analysis in detecting group differences in neural activity and a reduced likelihood that non-specific factors drive these results. PMID:18822407
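
    The core train-on-one-run, test-on-the-other protocol is a few lines with scikit-learn. A sketch on synthetic "voxel" data; the dimensions, noise level, and classifier choice are assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_classes = 80, 200, 4   # faces/scenes/objects/scrambled

patterns = rng.normal(0, 1, (n_classes, n_voxels))  # category templates

def make_run():
    labels = rng.integers(0, n_classes, n_trials)
    data = patterns[labels] + rng.normal(0, 3, (n_trials, n_voxels))
    return data, labels

X1, y1 = make_run()  # run 1: training
X2, y2 = make_run()  # run 2: testing

clf = LogisticRegression(max_iter=1000).fit(X1, y1)
print("cross-run decoding accuracy:", accuracy_score(y2, clf.predict(X2)))

# voxel-wise stability of category patterns across runs (cf. inter-run correlation)
m1 = np.vstack([X1[y1 == c].mean(0) for c in range(n_classes)])
m2 = np.vstack([X2[y2 == c].mean(0) for c in range(n_classes)])
print("pattern stability r:", np.corrcoef(m1.ravel(), m2.ravel())[0, 1])
```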

  6. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  7. Climate threats on growth of rear-edge European beech peripheral populations in Spain.

    PubMed

    Dorado-Liñán, I; Akhmetzyanov, L; Menzel, A

    2017-12-01

    European beech (Fagus sylvatica L.) forests in the Iberian Peninsula are a clear example of a temperate forest tree species at the rear edge of its large distribution area in Europe. The expected drier and warmer climate may alter tree growth and species distribution. Consequently, the peripheral populations will most likely be the most threatened ones. Four peripheral beech forests in the Iberian Peninsula were studied in order to assess the climate factors influencing tree growth for the last six decades. The analyses included an individual tree approach in order to detect not only the changes in the sensitivity to climate but also the potential size-mediated sensitivity to climate. Our results revealed a dominant influence of previous and current year summer on tree growth during the last six decades, although the analysis in two equally long periods unveiled changes and shifts in tree sensitivity to climate. The individual tree approach showed that those changes in tree response to climate are not size dependent in most of the cases. We observed a reduced negative effect of warmer winter temperatures at some sites and a generalized increased influence of previous year climatic conditions on current year tree growth. These results highlight the crucial role played by carryover effects and stored carbohydrates for future tree growth and species persistence.

  8. Climate threats on growth of rear-edge European beech peripheral populations in Spain

    NASA Astrophysics Data System (ADS)

    Dorado-Liñán, I.; Akhmetzyanov, L.; Menzel, A.

    2017-12-01

European beech (Fagus sylvatica L.) forests in the Iberian Peninsula are a clear example of a temperate forest tree species at the rear edge of its large distribution area in Europe. The expected drier and warmer climate may alter tree growth and species distribution. Consequently, the peripheral populations will most likely be the most threatened ones. Four peripheral beech forests in the Iberian Peninsula were studied in order to assess the climate factors influencing tree growth for the last six decades. The analyses included an individual tree approach in order to detect not only the changes in the sensitivity to climate but also the potential size-mediated sensitivity to climate. Our results revealed a dominant influence of previous and current year summer on tree growth during the last six decades, although the analysis in two equally long periods unveiled changes and shifts in tree sensitivity to climate. The individual tree approach showed that those changes in tree response to climate are not size dependent in most of the cases. We observed a reduced negative effect of warmer winter temperatures at some sites and a generalized increased influence of previous year climatic conditions on current year tree growth. These results highlight the crucial role played by carryover effects and stored carbohydrates for future tree growth and species persistence.

  9. Drought disaster vulnerability mapping of agricultural sector in Bringin District, Semarang Regency

    NASA Astrophysics Data System (ADS)

    Lestari, D. R.; Pigawati, B.

    2018-02-01

The agriculture sector is directly affected by drought. Drought has affected the agricultural sector in Semarang regency, and one of the affected districts is Bringin. Bringin district is a productive agricultural area, yet it experienced the most severe drought in 2015. The research question of this study is, “How is drought vulnerability of the agriculture sector spatially distributed in Bringin district, Semarang regency?” The purpose of this study is to determine the spatial distribution of drought vulnerability of the agriculture sector at the village level in Bringin district. The study assessed drought vulnerability following the Intergovernmental Panel on Climate Change (IPCC) framework by analyzing exposure, sensitivity, and adaptive capacity through a mapping process. The study used a quantitative approach comprising formulation, scoring, and overlay analyses. Drought vulnerability of the agriculture sector in Bringin district was classified into three categories: low, medium, and high.
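
    An IPCC-style vulnerability overlay is essentially indicator normalisation followed by a composite score per village. A sketch with hypothetical indicators; the village names, values, and the particular composite V = (E + S - AC) / 3 are illustrative, as several variants of the formula exist:

```python
import pandas as pd

# hypothetical village-level indicators, each scaled so that higher is worse
df = pd.DataFrame({
    "village": ["A", "B", "C", "D"],
    "exposure": [0.8, 0.4, 0.6, 0.9],           # e.g. rainfall deficit
    "sensitivity": [0.7, 0.5, 0.3, 0.8],        # e.g. rain-fed cropland share
    "adaptive_capacity": [0.2, 0.6, 0.5, 0.3],  # e.g. irrigation access
})

def minmax(s):  # normalise each indicator to the 0..1 range
    return (s - s.min()) / (s.max() - s.min())

for col in ["exposure", "sensitivity", "adaptive_capacity"]:
    df[col] = minmax(df[col])

# high exposure and sensitivity raise vulnerability; adaptive capacity lowers it
df["vulnerability"] = (df.exposure + df.sensitivity - df.adaptive_capacity) / 3
df["category"] = pd.cut(df.vulnerability, 3, labels=["low", "medium", "high"])
print(df[["village", "vulnerability", "category"]])
```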

  10. Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia

The problem of stochastic nonlinear analysis of neuronal activity is studied using the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking regime into a bursting one. This stochastic phenomenon is specified by qualitative changes in the distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains, which allows us to describe geometrically the distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimating the critical noise intensities corresponding to the qualitative changes in the stochastic dynamics. We show that the obtained estimates are in good agreement with numerical results. The interplay between noise-induced bursting and transitions from order to chaos is discussed.

  11. Detection of the quantity of kinesin and microgravity-sensitive kinesin genes in rat bone marrow stromal cells grown in a simulated microgravity environment

    NASA Astrophysics Data System (ADS)

    Ni, Chengzhi; Wang, Chunyan; Li, Yuan; Li, Yinghui; Dai, Zhongquan; Zhao, Dongming; Sun, Hongyi; Wu, Bin

    2011-06-01

Kinesin and kinesin-like proteins (KLPs) constitute a superfamily of microtubule motor proteins found in all eukaryotic organisms. Members of the kinesin superfamily are known to play important roles in many fundamental cellular and developmental processes. To date, few published studies have reported on the effects of microgravity on kinesin expression. In this paper, we describe the expression pattern and microgravity-sensitive genes of kinesin in rat bone marrow stromal cells cultured in a ground-based rotating bioreactor. The quantity of kinesin under the clinorotation condition was examined by immunoblot analysis with anti-kinesin. Furthermore, the distribution of kinesin at various times during clinorotation was determined by dual immunostaining using anti-kinesin or anti-β-tubulin monoclonal antibodies. In terms of kinesin quantity, we found that the ratio of clinorotated to stationary KLP amounts decreased from clinorotation day 5 to day 10, although it increased on days 2 and 3. Immunofluorescence analysis revealed that kinesin in the nucleus was the first to be affected by simulated microgravity, followed by kinesin at the periphery, which was affected at various times during clinorotation. Real-time RT-PCR analysis of kinesin mRNA expression was performed and led to the identification of 3 microgravity-sensitive kinesin genes: KIF9, KIFC1, and KIF21A. Our results suggest that kinesin has a distinct expression pattern, and the identification of microgravity-sensitive kinesin genes offers insight into fundamental cell biology.

  12. BKCa currents are enriched in a subpopulation of adult rat cutaneous nociceptive dorsal root ganglion neurons

    PubMed Central

    Zhang, Xiu-Lin; Mok, Lee-Peng; Katz, Elizabeth J; Gold, Michael S.

    2010-01-01

The biophysical properties and distribution of voltage-dependent, Ca2+-modulated K+ (BKCa) currents among subpopulations of acutely dissociated DiI-labeled cutaneous sensory neurons from the adult rat were characterized with whole cell patch clamp techniques. BKCa currents were isolated from the total K+ current with iberiotoxin, charybdotoxin, or paxilline. There was considerable variability in the biophysical properties of BKCa currents, as well as in the distribution of BKCa current among subpopulations of cutaneous DRG neurons. While present in each of the subpopulations defined by cell body size, IB4 binding, or capsaicin sensitivity, BKCa current was present in the vast majority (>90%) of small-diameter IB4+ neurons but in only a minority of neurons in subpopulations defined by other criteria (i.e., small-diameter IB4−). Current clamp analysis indicated that in IB4+ neurons, BKCa currents contribute to the repolarization of the action potential and to adaptation in response to sustained membrane depolarization, while playing little role in the determination of action potential threshold. RT-PCR analysis of mRNA collected from whole DRG revealed the presence of multiple splice variants of the BKCa channel α-subunit, rslo, and all 4 of the accessory β subunits, suggesting that heterogeneity in the biophysical and pharmacological properties of BKCa current in cutaneous neurons reflects, at least in part, the differential distribution of splice variants and/or β subunits. Because even a small decrease in BKCa current appears to have a dramatic influence on excitability, modulation of this current may contribute to the sensitization of nociceptive afferents observed following tissue injury. PMID:20105244

  13. Topological signatures of interstellar magnetic fields - I. Betti numbers and persistence diagrams

    NASA Astrophysics Data System (ADS)

    Makarenko, Irina; Shukurov, Anvar; Henderson, Robin; Rodrigues, Luiz F. S.; Bushby, Paul; Fletcher, Andrew

    2018-04-01

    The interstellar medium (ISM) is a magnetized system in which transonic or supersonic turbulence is driven by supernova explosions. This leads to the production of intermittent, filamentary structures in the ISM gas density, whilst the associated dynamo action also produces intermittent magnetic fields. The traditional theory of random functions, restricted to second-order statistical moments (or power spectra), does not adequately describe such systems. We apply topological data analysis (TDA), sensitive to all statistical moments and independent of the assumption of Gaussian statistics, to the gas density fluctuations in a magnetohydrodynamic simulation of the multiphase ISM. This simulation admits dynamo action, so produces physically realistic magnetic fields. The topology of the gas distribution, with and without magnetic fields, is quantified in terms of Betti numbers and persistence diagrams. Like the more standard correlation analysis, TDA shows that the ISM gas density is sensitive to the presence of magnetic fields. However, TDA gives us important additional information that cannot be obtained from correlation functions. In particular, the Betti numbers per correlation cell are shown to be physically informative. Magnetic fields make the ISM more homogeneous, reducing the abundance of both isolated gas clouds and cavities, with a stronger effect on the cavities. Remarkably, the modification of the gas distribution by magnetic fields is captured by the Betti numbers even in regions more than 300 pc from the mid-plane, where the magnetic field is weaker and correlation analysis fails to detect any signatures of magnetic effects.

  14. Spatial analysis and health risk assessment of heavy metals concentration in drinking water resources.

    PubMed

    Fallahzadeh, Reza Ali; Ghaneian, Mohammad Taghi; Miri, Mohammad; Dashti, Mohamad Mehdi

    2017-11-01

The heavy metals available in drinking water can be considered a threat to human health, and the oncogenic risk of such metals has been demonstrated in several studies. The present study investigated the concentrations of the heavy metals As, Cd, Cr, Cu, Fe, Hg, Mn, Ni, Pb, and Zn in 39 water supply wells and 5 water reservoirs in the cities of Ardakan, Meibod, Abarkouh, Bafgh, and Bahabad. The spatial distribution of the concentrations was mapped with the software ArcGIS. Simulations of non-carcinogenic hazard and lifetime cancer risk were conducted for lead and nickel using the Monte Carlo technique, and a sensitivity analysis was carried out to find the parameters with the greatest effect on the risk estimates. The results indicated that the concentrations of all metals in the 39 wells (except iron in 3 cases) met the levels specified in EPA, World Health Organization, and Pollution Control Department standards. Based on the spatial distribution results for all studied regions, the highest concentrations were obtained for iron and zinc. The calculated HQ values indicated a reasonable non-carcinogenic risk. The average lifetime cancer risks for lead in Ardakan and for nickel in Meibod and Bahabad were 1.09 × 10⁻³, 1.67 × 10⁻¹, and 2 × 10⁻¹, respectively, demonstrating high carcinogenic risk compared to similar standards and studies. The sensitivity analysis suggests a high impact of concentration and body weight (BW) on the carcinogenic risk.
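
    The Monte Carlo risk simulation follows the standard EPA chronic-daily-intake formula, with a rank correlation serving as the sensitivity measure. A sketch with assumed input distributions and an assumed slope factor; none of the numbers are from the study:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 100_000
C  = rng.lognormal(np.log(0.01), 0.5, n)   # concentration, mg/L (assumed)
BW = rng.normal(70, 10, n).clip(40, 120)   # body weight, kg (assumed)
IR, EF, ED = 2.0, 350, 30                  # intake L/day, days/yr, years
AT = 70 * 365                              # averaging time, days
SF = 0.0085                                # slope factor, (mg/kg-day)^-1 (assumed)

CDI  = C * IR * EF * ED / (BW * AT)        # chronic daily intake, mg/kg-day
risk = CDI * SF                            # incremental lifetime cancer risk

print(f"mean risk = {risk.mean():.2e}, "
      f"95th percentile = {np.quantile(risk, 0.95):.2e}")
for name, x in [("concentration", C), ("body weight", BW)]:
    rho, _ = spearmanr(x, risk)
    print(f"Spearman rho({name}, risk) = {rho:+.2f}")
```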

  15. Negative measurement sensitivity values of planar capacitive imaging probes

    NASA Astrophysics Data System (ADS)

    Yin, Xiaokang; Chen, Guoming; Li, Wei; Hutchins, David

    2014-02-01

The measurement sensitivity distribution of planar capacitive imaging (CI) probes describes how effectively each region in the sensing area contributes to the measured charge signal on the sensing electrode. It can be used to determine the imaging ability of a CI probe. Previous work found that there are regions in the sensing area where the change in the charge output and the change in the target physical parameter follow opposite trends. This opposite correlation implies that the measurement sensitivity values in such regions are negative. In this work, the cause of negative sensitivity is discussed. Experiments are also designed and performed to verify the existence of negative sensitivity and to study the factors that may affect the negative sensitivity distributions.

  16. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    PubMed Central

    Irei, Satoshi

    2016-01-01

Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was conducted directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall the methodology was valid with an uncertainty of ~30%. The measurement results for airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers were concentrated in PM smaller than 0.4 μm aerodynamic diameter. These observations were consistent with expectations about their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511

  17. Thermal Sensitive Foils in Physics Experiments

    ERIC Educational Resources Information Center

    Bochnícek, Zdenek; Konecný, Pavel

    2014-01-01

    The paper describes a set of physics demonstration experiments where thermal sensitive foils are used for the detection of the two dimensional distribution of temperature. The method is used for the demonstration of thermal conductivity, temperature change in adiabatic processes, distribution of electromagnetic radiation in a microwave oven and…

  18. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes; applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate, and computational load was established: the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio, and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised, implemented in MATLAB, and made available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, the Baumgartner-Weiss-Schindler test statistic gives better outcomes; it also finds more enriched pathways than the other tested metrics, which may lead to new biological discoveries.
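
    Two of the winning metrics are easy to compute directly. A sketch using a plain Welch t statistic as a stand-in for the moderated variant described in the paper (the moderated version additionally shrinks the variance estimates); the expression matrix is synthetic:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
expr = rng.normal(size=(1000, 12))       # toy matrix: genes x samples
group = np.array([0] * 6 + [1] * 6)      # two experimental conditions
a, b = expr[:, group == 0], expr[:, group == 1]

# |signal-to-noise|: difference of means over the sum of standard deviations
s2n = np.abs(a.mean(1) - b.mean(1)) / (a.std(1, ddof=1) + b.std(1, ddof=1))
# |Welch t|, computed per gene
welch = np.abs(ttest_ind(a, b, axis=1, equal_var=False).statistic)

ranking = np.argsort(-s2n)               # gene ranking fed into GSEA
print("top genes by |S2N|:", ranking[:5])
print("metric agreement r:", round(np.corrcoef(s2n, welch)[0, 1], 2))
```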

  19. Waiting time distribution revealing the internal spin dynamics in a double quantum dot

    NASA Astrophysics Data System (ADS)

    Ptaszyński, Krzysztof

    2017-07-01

The waiting time distribution and the zero-frequency full counting statistics of unidirectional electron transport through a double quantum dot molecule attached to spin-polarized leads are analyzed using the quantum master equation. The waiting time distribution exhibits a nontrivial dependence on the value of the exchange coupling between the dots and on the gradient of the applied magnetic field, which reveals the oscillations between the spin states of the molecule. The zero-frequency full counting statistics, on the other hand, is independent of the aforementioned quantities, thus giving no insight into the internal dynamics. The fact that the waiting time distribution and the zero-frequency full counting statistics give nonequivalent information is associated with two factors. First, it can be explained by their sensitivity to different timescales of the dynamics of the system. Second, it is associated with the presence of correlations between subsequent waiting times, which make the renewal theory, relating the full counting statistics and the waiting time distribution, no longer applicable. The study highlights the particular usefulness of the waiting time distribution for the analysis of the internal dynamics of mesoscopic systems.

  20. Comparable Analysis of the Distribution Functions of Runup Heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku Area

    NASA Astrophysics Data System (ADS)

    Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.

    2012-04-01

Data from a field survey of the 2011 tsunami in the Sanriku area of Japan are presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated by a theoretical log-normal curve [Choi et al., 2002]. The characteristics of the distribution functions derived from the runup-height data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations from the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and the number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates sensitivity to the number of observation points (many of which cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast that corresponds to the statistical independence of observed wave heights.
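
    Fitting a log-normal distribution to surveyed runup heights is a one-liner with SciPy. A sketch with hypothetical heights standing in for the survey data:

```python
import numpy as np
from scipy import stats

# hypothetical runup heights (metres); the real 2011 survey has >5,000 points
heights = np.array([2.1, 3.4, 5.0, 7.8, 10.2, 4.4, 6.1, 12.5, 8.3, 3.9])

s, loc, scale = stats.lognorm.fit(heights, floc=0)  # floc=0: pure log-normal
mu, sigma = np.log(scale), s
print(f"log-normal parameters: mu = {mu:.2f}, sigma = {sigma:.2f}")

# goodness of fit against the empirical distribution
D, p = stats.kstest(heights, "lognorm", args=(s, 0, scale))
print(f"Kolmogorov-Smirnov: D = {D:.3f}, p = {p:.2f}")
```

    Refitting on random subsamples of the observation points would reproduce the sensitivity to the number of (non-independent) observations discussed above.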

  1. Effects of variations of stage and flux at different frequencies on the estimates using river stage tomography

    NASA Astrophysics Data System (ADS)

    Wang, Y. L.; Yeh, T. C. J.; Wen, J. C.

    2017-12-01

This study investigates the ability of river stage tomography to estimate the spatial distributions of hydraulic transmissivity (T), storage coefficient (S), and diffusivity (D) in groundwater basins using groundwater level variations induced by periodic variations of stream stage and by infiltrated flux from the stream boundary. To accomplish this objective, a sensitivity and correlation analysis of groundwater heads with respect to the hydraulic properties is first conducted to investigate the spatial characteristics of groundwater levels in response to stream variations at different frequencies. Results of the analysis show that the spatial distributions of the sensitivity of heads at an observation well to periodic river stage variations are highly correlated despite different frequencies. On the other hand, the spatial patterns of the sensitivity of the observed head to river flux boundaries at different frequencies are different. Specifically, the observed head is highly correlated with T in the region between the stream and the observation well when a high-frequency periodic flux is considered, whereas it is highly correlated with T in the region between the monitoring well and the boundary opposite the stream when a low-frequency periodic flux is prescribed at the stream. We also find that the spatial distributions of the sensitivity of the observed head to S variations are highly correlated across all frequencies, regardless of whether head or flux conditions are prescribed at the stream boundary. Subsequently, the differences in the spatial correlations of the observed heads with the hydraulic properties under head and flux boundary conditions are further investigated with an inverse model (a successive stochastic linear estimator). This investigation uses noise-free groundwater and stream data from a synthetic aquifer whose heterogeneity is known exactly. The ability of river stage tomography to estimate the T, S, and D distributions is then tested with these synthetic data sets. The results reveal that boundary flux variations with different frequencies contain different information about the aquifer characteristics, while the head boundary does not.
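
    The frequency dependence has a classical closed-form intuition: for a sinusoidal head boundary on a homogeneous semi-infinite aquifer, the amplitude decays as exp(-x/d) with damping depth d = sqrt(2D/omega) (a Ferris-type solution), so low frequencies probe farther from the stream. A small sketch with assumed aquifer properties:

```python
import numpy as np

T, S = 500.0, 1e-3     # transmissivity (m^2/day) and storativity (assumed)
D = T / S              # hydraulic diffusivity, m^2/day

x = np.array([100.0, 500.0, 2000.0])        # distances from the stream, m
for period_days in (0.5, 30.0):             # high- vs low-frequency stage signal
    omega = 2.0 * np.pi / period_days       # angular frequency, 1/day
    d = np.sqrt(2.0 * D / omega)            # damping depth, m
    print(f"period {period_days:5.1f} d: damping depth {d:8.1f} m, "
          f"amplitude ratios {np.round(np.exp(-x / d), 3)}")
```

    The rapid attenuation of the short-period signal is consistent with the finding that high-frequency flux variations mostly inform T between the stream and the well.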

  2. Application of computer-aided diagnosis (CAD) in MR-mammography (MRM): do we really need whole lesion time curve distribution analysis?

    PubMed

    Baltzer, Pascal Andreas Thomas; Renz, Diane M; Kullnig, Petra E; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A

    2009-04-01

The identification of the most suspect enhancing part of a lesion is regarded as a major diagnostic criterion in dynamic magnetic resonance mammography. Computer-aided diagnosis (CAD) software allows the semi-automatic analysis of the kinetic characteristics of complete enhancing lesions, providing additional information about lesion vasculature. The diagnostic value of this information has not yet been quantified. Consecutive patients from routine diagnostic studies (1.5 T, 0.1 mmol gadopentetate dimeglumine, dynamic gradient-echo sequences at 1-minute intervals) were analyzed prospectively using CAD. Dynamic sequences were processed and reduced to a parametric map. Curve types were classified by initial signal increase (not significant, intermediate, and strong) and the delayed time course of signal intensity (continuous, plateau, and washout). Lesion enhancement was measured using CAD. The most suspect curve, the curve-type distribution percentage, and combined dynamic data were compared. Statistical analysis included logistic regression analysis and receiver-operating characteristic analysis. Fifty-one patients with 46 malignant and 44 benign lesions were enrolled. On receiver-operating characteristic analysis, the most suspect curve showed diagnostic accuracy of 76.7 ± 5%. In comparison, the curve-type distribution percentage demonstrated accuracy of 80.2 ± 4.9%. Combined dynamic data had the highest diagnostic accuracy (84.3 ± 4.2%). These differences did not achieve statistical significance. With appropriate cutoff values, sensitivity and specificity, respectively, were found to be 80.4% and 72.7% for the most suspect curve, 76.1% and 83.6% for the curve-type distribution percentage, and 78.3% and 84.5% for both parameters. The integration of whole-lesion dynamic data tends to improve specificity. However, no statistical significance backs up this finding.
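
    The curve-type taxonomy used by the CAD software (initial rise crossed with delayed course) can be expressed as a simple classifier. A sketch with commonly used thresholds (50%/100% initial enhancement, ±10% delayed change), which are illustrative rather than the paper's exact cutoffs:

```python
def classify_curve(signal):
    """Classify a dynamic enhancement curve sampled at 1-minute intervals.

    signal[0] is the pre-contrast baseline, signal[1] the first post-contrast
    point; the thresholds below are typical values, assumed for illustration.
    """
    base, first = signal[0], signal[1]
    initial = (first - base) / base * 100.0          # initial enhancement, %
    delayed = (signal[-1] - first) / first * 100.0   # late-phase change, %

    rise = ("strong" if initial > 100 else
            "intermediate" if initial > 50 else "not significant")
    course = ("washout" if delayed < -10 else
              "plateau" if delayed <= 10 else "continuous")
    return rise, course

print(classify_curve([100, 220, 250, 245, 230]))  # ('strong', 'plateau')
print(classify_curve([100, 230, 200, 180, 160]))  # ('strong', 'washout')
```

    The whole-lesion distribution percentage then amounts to tallying these labels over every voxel in the segmented lesion, rather than keeping only the single most suspect curve.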

  3. Probing CP violation in $h\rightarrow\gamma\gamma$ with converted photons

    DOE PAGES

    Bishara, Fady; Grossman, Yuval; Harnik, Roni; ...

    2014-04-11

    We study Higgs diphoton decays in which both photons undergo nuclear conversion to electron-positron pairs. The kinematic distribution of the two electron-positron pairs may be used to probe the CP-violating (CPV) coupling of the Higgs to photons, which may be produced by new physics. Detecting CPV in this manner requires interference between the spin-polarized helicity amplitudes for both conversions. We derive leading-order, analytic forms for these amplitudes. In turn, we obtain compact, leading-order expressions for the full process rate. While performing experiments involving photon conversions may be challenging, we use the results of our analysis to construct experimental cuts on certain observables that may enhance sensitivity to CPV. We show that there exist regions of phase space on which the sensitivity to CPV is of order unity. The statistical sensitivity of these cuts is verified numerically, using dedicated Monte Carlo simulations.

  4. Securing Sensitive Flight and Engine Simulation Data Using Smart Card Technology

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2003-01-01

    NASA Glenn Research Center has developed a smart card prototype capable of encrypting and decrypting disk files required to run a distributed aerospace propulsion simulation. Triple Data Encryption Standard (3DES) encryption is used to secure the sensitive intellectual property on disk before, during, and after simulation execution. The prototype operates as a secure system and maintains its authorized state by safely storing and permanently retaining the encryption keys only on the smart card. The prototype is capable of authenticating a single smart card user and includes pre-simulation and post-simulation tools for analysis and training purposes. The prototype's design is highly generic, so it can be used to protect any sensitive disk files, with growth capability to run multiple simulations. The NASA computer engineer developed the prototype in an interoperable programming environment to enable porting to other Numerical Propulsion System Simulation (NPSS) capable operating system environments.
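
    As an illustration of the encryption scheme the record describes, the sketch below encrypts and decrypts a disk file with 3DES in CBC mode. The library (PyCryptodome), the file format (IV prepended to the ciphertext), and the key handling are assumptions for demonstration; the NASA prototype kept its 24-byte keys only on the smart card, and its actual implementation details are not given here.

    ```python
    # Sketch only: PyCryptodome is an assumed library choice; the prototype's
    # actual implementation, key handling and file format are not described here.
    from Crypto.Cipher import DES3
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    def encrypt_file(path_in, path_out, key):
        """Encrypt a disk file with 3DES-CBC; the IV is prepended to the output."""
        iv = get_random_bytes(8)                       # DES block size is 8 bytes
        cipher = DES3.new(key, DES3.MODE_CBC, iv)
        with open(path_in, "rb") as f:
            plaintext = f.read()
        with open(path_out, "wb") as f:
            f.write(iv + cipher.encrypt(pad(plaintext, DES3.block_size)))

    def decrypt_file(path_in, path_out, key):
        with open(path_in, "rb") as f:
            iv, ciphertext = f.read(8), f.read()
        cipher = DES3.new(key, DES3.MODE_CBC, iv)
        with open(path_out, "wb") as f:
            f.write(unpad(cipher.decrypt(ciphertext), DES3.block_size))

    # In the prototype the 24-byte key lives only on the smart card; here we
    # just generate one (parity-adjusted, as 3DES requires) for demonstration.
    key = DES3.adjust_key_parity(get_random_bytes(24))
    ```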

  5. Design, characterization, and sensitivity of the supernova trigger system at Daya Bay

    NASA Astrophysics Data System (ADS)

    Wei, Hanyu; Lebanowski, Logan; Li, Fei; Wang, Zhe; Chen, Shaomin

    2016-02-01

    Providing an early warning of galactic supernova explosions from neutrino signals is important in studying supernova dynamics and neutrino physics. A dedicated supernova trigger system has been designed and installed in the data acquisition system at Daya Bay and integrated into the worldwide Supernova Early Warning System (SNEWS). Daya Bay's unique feature of eight identically-designed detectors deployed in three separate experimental halls makes the trigger system naturally robust against cosmogenic backgrounds, enabling a prompt analysis of online triggers and a tight control of the false-alert rate. The trigger system is estimated to be fully sensitive to 1987A-type supernova bursts throughout most of the Milky Way. The significant gain in sensitivity of the eight-detector configuration over a mass-equivalent single detector is also estimated. The experience of this online trigger system is applicable to future projects with spatially distributed detectors.

  6. Mercury speciation and subcellular distribution in experimentally dosed and wild birds.

    PubMed

    Perkins, Marie; Barst, Benjamin D; Hadrava, Justine; Basu, Niladri

    2017-12-01

    Many bird species are exposed to methylmercury (MeHg) at levels shown to cause sublethal effects. Although MeHg sensitivity and assimilation can vary among species and developmental stages, the underlying reasons (such as MeHg toxicokinetics) are poorly understood. We investigated Hg distribution at the tissue and cellular levels in birds by examining Hg speciation in blood, brain, and liver and Hg subcellular distribution in liver. We used MeHg egg injection of white leghorn chicken (Gallus gallus domesticus), sampled at 3 early developmental stages, and embryonic ring-billed gulls (Larus delawarensis) exposed to maternally deposited MeHg. The percentage of MeHg (relative to total Hg [THg]) in blood, brain, and liver ranged from 94 to 121%, indicating little MeHg demethylation. A liver subcellular partitioning procedure was used to determine how THg was distributed between potentially sensitive and detoxified compartments. The distributions of THg among subcellular fractions were similar among chicken time points, and between embryonic chickens and ring-billed gulls. A greater proportion of THg was associated with metal-sensitive fractions than with detoxified fractions. Within the sensitive compartment, THg was found predominantly in heat-denatured proteins (∼42-46%), followed by mitochondria (∼15-18%). The low rate of MeHg demethylation and high proportion of THg in metal-sensitive subcellular fractions further indicate that the embryonic and hatchling time points are Hg-sensitive developmental stages, although further work is needed across a range of additional species and life stages. Environ Toxicol Chem 2017;36:3289-3298. © 2017 SETAC.

  7. A determination of the fragmentation functions of u-quarks into charged pions

    NASA Astrophysics Data System (ADS)

    Aubert, J. J.; Bassompierre, G.; Becks, K. H.; Benchouk, C.; Best, C.; Böhm, E.; De Bouard, X.; Brasse, F. W.; Broll, C.; Brown, S.; Carr, J.; Clifft, R.; Cobb, J. H.; Coignet, G.; Combley, F.; Court, G. R.; D'Agostini, G.; Dau, W. D.; Davies, J. K.; Déclais, Y.; Dosselli, U.; Drees, J.; Edwards, A.; Edwards, M.; Favier, J.; Ferrero, M. I.; Flauger, W.; Forsbach, H.; Gabathuler, E.; Gamet, R.; Gayler, J.; Gerhardt, V.; Gössling, C.; Haas, J.; Hamacher, K.; Hayman, P.; Henckes, M.; Korbel, V.; Korzen, B.; Landgraf, U.; Leenen, M.; Maire, M.; Mohr, W.; Montgomery, H. E.; Moser, K.; Mount, R. P.; Nagy, E.; Nassalski, J.; Norton, P. R.; McNicholas, J.; Osborne, A. M.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pietrzyk, U.; Rith, K.; Schneegans, M.; Schneider, A.; Sloan, T.; Stier, H. E.; Stockhausen, W.; Thénard, J. M.; Thompson, J. C.; Urban, L.; Villers, M.; Wahlen, H.; Whalley, M.; Williams, D.; Williams, W. S. C.; Williamson, J.; Wimpenny, S. J.; European Muon Collaboration (EMC)

    1985-10-01

    The fragmentation functions of u-quarks into positive and negative pions are determined from an analysis of identified pions produced in deep inelastic muon-deuterium scattering. The method adopted is not sensitive to the knowledge of the primary quark distribution functions. The fragmentation of u quarks to positive pions is found to fall less steeply in z than that to negative pions as expected in the quark parton model.

  8. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-01-01

    Clouds affect tropospheric photochemistry through modification of the solar radiation that determines photolysis frequencies. As a follow-up to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo) in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT), while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model-calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3 and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decreases in OH concentrations of approximately 15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitudes of 3-D CODs are progressively varied from -100% to 100%, predict only modest changes (<5%) in global mean OH concentrations. J(O1D), J(NO2) and OH concentrations show the strongest sensitivity for small CODs and become insensitive at large CODs due to saturation effects. Caution should be exercised not to use a cloud single scattering albedo lower than about 0.999 in photochemical models, in order to be consistent with current knowledge of cloud absorption at ultraviolet wavelengths.

  9. Distribution and Diversity of Symbiotic Thermophiles, Symbiobacterium thermophilum and Related Bacteria, in Natural Environments

    PubMed Central

    Ueda, Kenji; Ohno, Michiyo; Yamamoto, Kaori; Nara, Hanae; Mori, Yujiro; Shimada, Masafumi; Hayashi, Masahiko; Oida, Hanako; Terashima, Yuko; Nagata, Mitsuyo; Beppu, Teruhiko

    2001-01-01

    Symbiobacterium thermophilum is a tryptophanase-positive thermophile which shows normal growth only in coculture with its supporting bacteria. Analysis of the 16S rRNA gene (rDNA) indicated that the bacterium belongs to a novel phylogenetic branch at the outermost position of the gram-positive bacterial group without clustering to any other known genus. Here we describe the distribution and diversity of S. thermophilum and related bacteria in the environment. Thermostable tryptophanase activity and amplification of the specific 16S rDNA fragment were effectively employed to detect the presence of Symbiobacterium. Enrichment with kanamycin raised detection sensitivity. Mixed cultures of thermophiles containing Symbiobacterium species were frequently obtained from compost, soil, animal feces, and contents in the intestinal tracts, as well as feeds. Phylogenetic analysis and denaturing gradient gel electrophoresis of the specific 16S rDNA amplicons revealed a diversity of this group of bacteria in the environment. PMID:11525967

  10. Classification of Stellar Spectra with Fuzzy Minimum Within-Class Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhong-bao, Liu; Wen-ai, Song; Jing, Zhang; Wen-juan, Zhao

    2017-06-01

    Classification is one of the important tasks in astronomy, especially in spectral analysis. The Support Vector Machine (SVM) is a typical classification method widely used in spectra classification. Although it performs well in practice, its classification accuracy cannot be greatly improved because of two limitations: it does not take the distribution of the classes into consideration, and it is sensitive to noise. In order to solve these problems, inspired by Fisher's Discriminant Analysis (FDA) and the SVM separability constraints, the fuzzy minimum within-class support vector machine (FMWSVM) is proposed in this paper. In FMWSVM, the distribution of the classes is reflected by the within-class scatter of FDA, and a fuzzy membership function is introduced to decrease the influence of noise. Comparative experiments with SVM on SDSS datasets verify the effectiveness of the proposed FMWSVM classifier.

  11. VISUALIZATION OF TISSUE DISTRIBUTION AND METABOLISM OF BENZO[A]PYRENE IN EARLY EMBRYONIC MEDAKA (ORYZIAS LATIPES)

    EPA Science Inventory

    Fish early life stages are highly sensitive to exposure to persistent bioaccumulative toxicants (PBTs). The factors that contribute to this are unknown, but may include the distribution of PBTs to sensitive tissues during critical stages of development. Multiphoton laser scannin...

  12. Optimal Interpolation scheme to generate reference crop evapotranspiration

    NASA Astrophysics Data System (ADS)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variances in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. We used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
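
    For readers unfamiliar with the OI analysis step the abstract refers to, a minimal numpy sketch is given below. It implements the standard update x_a = x_b + K(y - Hx_b) with gain K = BH^T(HBH^T + R)^(-1); the Gaussian distance-decay background covariance, the error variances, and the decorrelation length are placeholder assumptions, since the study's actual covariance model is not stated in the record.

    ```python
    import numpy as np

    def oi_update(xb, grid_xy, y, obs_xy, xb_at_obs,
                  sigma_b=1.0, sigma_o=0.5, L=50.0):
        """One Optimal Interpolation analysis step on a grid of points.

        xb        : background field at the grid points (from the climate model)
        y         : station observations; xb_at_obs : background at the stations
        Returns the analysis field and its error variance at each grid point.
        """
        def cov(a, b):   # assumed Gaussian distance-decay background covariance
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return sigma_b**2 * np.exp(-d2 / (2.0 * L**2))

        BHt = cov(grid_xy, obs_xy)               # cov(grid, stations)
        HBHt = cov(obs_xy, obs_xy)               # cov(stations, stations)
        R = sigma_o**2 * np.eye(len(y))          # observation error covariance
        K = np.linalg.solve(HBHt + R, BHt.T).T   # gain: B H^T (H B H^T + R)^-1
        xa = xb + K @ (y - xb_at_obs)            # background + weighted innovation
        var_a = sigma_b**2 - np.einsum("ij,ij->i", K, BHt)  # diag of (I - KH) B
        return xa, var_a
    ```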

  13. The Node Deployment of Intelligent Sensor Networks Based on the Spatial Difference of Farmland Soil

    PubMed Central

    Liu, Naisen; Cao, Weixing; Zhu, Yan; Zhang, Jingchao; Pang, Fangrong; Ni, Jun

    2015-01-01

    Considering that agricultural production is characterized by vast areas, scattered fields and long crop growth cycles, intelligent wireless sensor networks (WSNs) are suitable for monitoring crop growth information. Cost and coverage are the key indexes for WSN applications. The differences in crop conditions are influenced by the spatial distribution of soil nutrients: if the nutrients are distributed evenly, the crop conditions are expected to be approximately uniform, with little difference; otherwise, there will be great differences in crop conditions. In accordance with the spatial differences of soil information in farmland, fuzzy c-means clustering was applied to divide the farmland into several areas, within each of which the soil fertility is nearly uniform. The crop growth information in each area can then be monitored with complete coverage by deploying a single sensor node there, which greatly decreases the number of deployed sensor nodes. Moreover, in order to accurately determine the optimal cluster number for fuzzy c-means clustering, a discriminant function, the Normalized Intra-Cluster Coefficient of Variation (NICCV), was established. The sensitivity analysis indicates that NICCV is insensitive to the fuzzy weighting exponent but shows a strong sensitivity to the number of clusters. PMID:26569243
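
    The sketch below illustrates the two ingredients the abstract names: a plain fuzzy c-means clustering of soil samples and an intra-cluster coefficient-of-variation score for choosing the number of clusters. The exact NICCV formula is not given in the record, so the scoring function here is only a plausible reading of the name and is labeled as such; the data are invented.

    ```python
    # Invented 2-D soil-nutrient data; the score below is only a plausible
    # reading of "Normalized Intra-Cluster Coefficient of Variation".
    import numpy as np

    def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means; returns cluster centers and memberships U (n x c)."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))    # random initial memberships
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = d ** (-2.0 / (m - 1.0))
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    def intra_cluster_cv(v, U):
        """Mean within-cluster CV of a nutrient v, normalized by the overall CV,
        so lower values mean more uniform management zones (illustrative only)."""
        labels = U.argmax(axis=1)
        cvs = [v[labels == k].std() / (abs(v[labels == k].mean()) + 1e-12)
               for k in np.unique(labels)]
        return np.mean(cvs) / (v.std() / (abs(v.mean()) + 1e-12))

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(mu, 0.3, (40, 2)) for mu in (0.0, 2.0, 4.0)])
    for c in (2, 3, 4, 5):                            # score candidate zone counts
        centers, U = fuzzy_cmeans(X, c)
        print(c, round(intra_cluster_cv(X[:, 0], U), 3))
    ```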

  14. Measurement of dispersion of nanoparticles in a dense suspension by high-sensitivity low-coherence dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Ishii, Katsuhiro; Nakamura, Sohichiro; Sato, Yuki

    2014-08-01

    High-sensitivity low-coherence DLS is applied to the measurement of the particle size distribution of pigments suspended in an ink. This method can be applied to extremely dense and turbid media without dilution. We show the temporal variation of the particle size distribution of thixotropic and sedimentary pigments due to aggregation, agglomeration, and sedimentation. Moreover, we demonstrate the influence of dilution of the ink on the particle size distribution.

  15. Sensitivity of the normalized difference vegetation index to subpixel canopy cover, soil albedo, and pixel scale

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.

    1990-01-01

    An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
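
    The NDVI itself is the standard ratio NDVI = (NIR - Red)/(NIR + Red). The toy sketch below uses simple linear (area-weighted) mixing of assumed canopy and soil reflectances to reproduce the qualitative behavior described above, NDVI rising with subpixel canopy fraction; the paper's geometric model with shadowed fractions is more elaborate and is not reproduced here.

    ```python
    import numpy as np

    def ndvi(nir, red):
        # standard definition: NDVI = (NIR - Red) / (NIR + Red)
        return (nir - red) / (nir + red)

    # Toy mixed-pixel experiment (linear mixing only): pixel reflectance is an
    # area-weighted mix of assumed canopy and soil component reflectances.
    canopy = {"red": 0.05, "nir": 0.45}
    soil   = {"red": 0.20, "nir": 0.25}

    for f in np.linspace(0.0, 1.0, 6):        # subpixel fractional canopy cover
        red = f * canopy["red"] + (1 - f) * soil["red"]
        nir = f * canopy["nir"] + (1 - f) * soil["nir"]
        print(f"canopy fraction {f:.1f}: NDVI = {ndvi(nir, red):.3f}")
    ```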

  16. Investigation of Hydrogen Embrittlement Susceptibility of X80 Weld Joints by Thermal Simulation

    NASA Astrophysics Data System (ADS)

    Peng, Huangtao; An, Teng; Zheng, Shuqi; Luo, Bingwei; Wang, Siyu; Zhang, Shuai

    2018-05-01

    The objective of this study was to investigate the hydrogen embrittlement (HE) susceptibility and influence mechanism of X80 weld joints. Slow strain rate testing (SSRT) under in situ H-charging, combined with microstructure and fracture analysis, was performed on the base metal (BM), weld metal (WM), thermally simulated fine-grained heat-affected zone (FGHAZ), and coarse-grained heat-affected zone (CGHAZ). Results showed that the WM and the simulated HAZs exhibited a greater degree of localized high-strain distribution than the BM; compared with the CGHAZ, the FGHAZ had lower microhardness and a more uniformly distributed stress. SSRT results showed that the weld joint was highly sensitive to HE; the HE index decreased in the following sequence: FGHAZ, WM, CGHAZ, and BM. The effect of microstructure on HE was mainly reflected in the microstructure itself, the local stress distribution, and the microhardness.

  17. Life cycle design metrics for energy generation technologies: Method, data, and case study

    NASA Astrophysics Data System (ADS)

    Cooper, Joyce; Lee, Seung-Jin; Elter, John; Boussu, Jeff; Boman, Sarah

    A method to assist in the rapid preparation of Life Cycle Assessments of emerging energy generation technologies is presented and applied to distributed proton exchange membrane fuel cell systems. The method develops life cycle environmental design metrics and allows variations in hardware materials, transportation scenarios, assembly energy use, operating performance and consumables, and fuels and fuel production scenarios to be modeled and comparisons to competing systems to be made. Data and results are based on publicly available U.S. Life Cycle Assessment data sources and are formulated to allow the environmental impact weighting scheme to be specified. A case study evaluates improvements in efficiency and in materials recycling and compares distributed proton exchange membrane fuel cell systems to other distributed generation options. The results reveal the importance of sensitivity analysis and system efficiency in interpreting case studies.

  18. [Evaluation of land resources carrying capacity of development zone based on planning environment impact assessment].

    PubMed

    Fu, Shi-Feng; Zhang, Ping; Jiang, Jin-Long

    2012-02-01

    Assessment of land resources carrying capacity is the key point of planning environment impact assessment and the main foundation for determining whether a plan can be implemented. With the help of the spatial analysis functions of a Geographic Information System, and selecting altitude, slope, land use type, distance from residential land, distance from main traffic roads, and distance from environmentally sensitive areas as the sensitivity factors, a comprehensive assessment of the ecological sensitivity and its spatial distribution in the Zhangzhou Merchants Economic and Technological Development Zone, Fujian Province of East China, was conducted, and the assessment results were combined with the planned land use layout for an ecological suitability analysis. In the Development Zone, 84.0% of residential land, 93.1% of industrial land, 86.0% of traffic land, and 76.0% of other construction land in the plan were located in insensitive or gently sensitive areas; thus, implementing the land use plan would generally have little impact on the ecological environment, and the land resources in the planning area were able to meet the land use demand. The assessment of the population carrying capacity, with ecological land as the limiting factor, indicated that if the highly sensitive area and 60% of the moderately sensitive area are treated as ecological land, the population within the Zone could reach 240000, with an available land area per capita of 134.0 m2. Such a planned population scale is appropriate according to the related standards for construction land.

  19. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational-wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. The injections, which are simulated gravitational waveforms, are then searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated by Monte Carlo (MC) integration from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, owing to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical or semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
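
    The reweighting idea in the letter can be condensed into a few lines: each found injection drawn from the generic distribution p_inj is weighted by p_pop/p_inj, and the weighted count converts into a sensitive volume. In the sketch below, the mass distributions, the toy detection model, and the SNR threshold are all invented placeholders; only the weighting structure follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000
    V_total = 50.0            # astrophysical volume the injections are placed in

    # Generic injections: a broad (uniform) mass distribution, p_inj = const.
    m_lo, m_hi = 5.0, 100.0
    m = rng.uniform(m_lo, m_hi, N)
    p_inj = np.full(N, 1.0 / (m_hi - m_lo))

    # Invented detection model: heavier binaries tend to cross the threshold.
    snr = rng.normal(loc=m / 10.0, scale=2.0)
    found = snr > 8.0

    def sensitive_volume(p_pop):
        w = p_pop(m) / p_inj                 # importance weights, one per injection
        return V_total * w[found].sum() / N  # weighted Monte Carlo integral

    # Two population models evaluated from the SAME injection set:
    grid = np.linspace(m_lo, m_hi, 1000)
    norm = np.trapz(grid ** -2.3, grid)
    powerlaw = lambda x: x ** -2.3 / norm
    uniform = lambda x: np.full_like(x, 1.0 / (m_hi - m_lo))
    print(sensitive_volume(powerlaw), sensitive_volume(uniform))
    ```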

  20. The difference between temperate and tropical saltwater species' acute sensitivity to chemicals is relatively small.

    PubMed

    Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y

    2014-06-01

    Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection of tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study compares temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10th percentile hazardous concentration (HC10) is derived from each SSD, and a temperate-to-tropic HC10 ratio is then computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivities towards all test chemicals except cadmium, although the differences are small, with HC10 ratios ranging only from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol). Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
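
    A minimal sketch of the HC10 derivation described above: fit an SSD to species-level acute toxicity values and take its 10th percentile, then form the temperate-to-tropic ratio. A log-normal SSD is assumed here (a common choice; the paper's exact fitting procedure is not stated in the record), and the LC50 values are invented.

    ```python
    import numpy as np
    from scipy import stats

    def hc_p(lc50s, p=0.10):
        """Fit a log-normal SSD to species LC50s; return the p-th percentile."""
        mu, sigma = stats.norm.fit(np.log10(lc50s))
        return 10 ** stats.norm.ppf(p, loc=mu, scale=sigma)

    # Invented acute toxicity values (ug/L), one per species:
    temperate_lc50 = np.array([12., 35., 60., 110., 240., 500., 900.])
    tropical_lc50  = np.array([20., 55., 90., 150., 300., 650., 1200.])

    ratio = hc_p(temperate_lc50) / hc_p(tropical_lc50)
    print(f"temperate-to-tropic HC10 ratio: {ratio:.3f}")
    ```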

  1. Fault detection and diagnosis for non-Gaussian stochastic distribution systems with time delays via RBF neural networks.

    PubMed

    Yi, Qu; Zhan-ming, Li; Er-chao, Li

    2012-11-01

    A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distribution, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented, introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analysis is performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  2. On hydrologic similarity: A dimensionless flood frequency model using a generalized geomorphologic unit hydrograph and partial area runoff generation

    NASA Technical Reports Server (NTRS)

    Sivapalan, Murugesu; Wood, Eric F.; Beven, Keith J.

    1993-01-01

    One of the shortcomings of the original theory of the geomorphologic unit hydrograph (GUH) is that it assumes that runoff is generated uniformly from the entire catchment area. It is now recognized that in many catchments much of the runoff during storm events is produced on partial areas which usually form on narrow bands along the stream network. A storm response model that includes runoff generation on partial areas by both Hortonian and Dunne mechanisms was recently developed by the authors. In this paper a methodology for integrating this partial area runoff generation model with the GUH-based runoff routing model is presented; this leads to a generalized GUH. The generalized GUH and the storm response model are then used to estimate physically based flood frequency distributions. In most previous work the initial moisture state of the catchment had been assumed to be constant for all the storms. In this paper we relax this assumption and allow the initial moisture conditions to vary between storms. The resulting flood frequency distributions are cast in a scaled dimensionless framework where issues such as catchment scale and similarity can be conveniently addressed. A number of experiments are performed to study the sensitivity of the flood frequency response to some of the 'similarity' parameters identified in this formulation. The results indicate that one of the most important components of the derived flood frequency model relates to the specification of processes within the runoff generation model; specifically the inclusion of both saturation excess and Horton infiltration excess runoff production mechanisms. The dominance of these mechanisms over different return periods of the flood frequency distribution can significantly affect the distributional shape and confidence limits about the distribution. Comparisons with observed flood distributions seem to indicate that such mixed runoff production mechanisms influence flood distribution shape. The sensitivity analysis also indicated that the incorporation of basin and rainfall storm scale also greatly influences the distributional shape of the flood frequency curve.

  3. [Retrieval of Copper Pollution Information from Hyperspectral Satellite Data in a Vegetation Cover Mining Area].

    PubMed

    Qu, Yong-hua; Jiao, Si-hong; Liu, Su-hong; Zhu, Ye-qing

    2015-11-01

    Heavy metal mining activities have had a complex influence on the ecological environment of mining regions. For example, a large amount of acidic wastewater containing heavy metal ions is produced in the process of copper mining, which can seriously pollute the ecological environment of the region. In previous research, bare soil has mainly been taken as the research target when monitoring environmental pollution, and the effects of land surface vegetation have been ignored. It is well known that vegetation condition is one of the most important indicators of ecological change in a region, and there is a significant linkage between vegetation spectral characteristics and heavy metals when the vegetation is affected by heavy metal pollution: the vegetation responds physiologically to changes in the ecology of its growing environment, which makes it sensitive to heavy metal pollution. Conventional methods, which often rely on large amounts of field survey data and laboratory chemical analysis, are time-consuming and resource-intensive. Spectral analysis using remote sensing technology can acquire information on the heavy metal content of vegetation without touching it. However, retrieving that information from hyperspectral data is not easy, because of the difficulty of identifying, from a huge number of hyperspectral bands, the specific band sensitive to a specific heavy metal. The selection of the sensitive band is thus the key to the spectral analysis method. This paper proposes a statistical analysis method to find the feature band sensitive to heavy metal ions in hyperspectral data and then retrieve the metal content using field survey data and hyperspectral images from the China Environment Satellite HJ-1. The method selected the copper ion content in leaves as the indicator of copper pollution level, and used stepwise multiple linear regression and cross-validation on a dataset consisting of 44 groups of copper ion content measurements in polluted vegetation leaves from the Dexing Copper Mine in Jiangxi Province to build a statistical model that also incorporates the HJ-1 satellite images. This model was then used to estimate the copper content distribution over the whole research area at the Dexing Copper Mine. The results show that the model is statistically significant and that the waveband most sensitive to copper ions is located at 516 nm. The distribution map illustrates that the copper ion content is generally in the range of 0-130 mg · kg⁻¹ in the vegetation-covered area of the Dexing Copper Mine, and that the most seriously polluted areas are located at the southeast corner of Dexing City and at the mining spots, with higher values between 80 and 100 mg · kg⁻¹. This result is consistent with the ground observation experiment data. The distribution map can provide important basic data for copper pollution monitoring and treatment.

  4. Sensitivity of IFM/GAIM-GM Model to High-cadence Kp and F10.7 Input

    DTIC Science & Technology

    2014-03-27

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. AFIT-ENP-14-M-17, "Sensitivity of IFM/GAIM-GM Model to High-cadence Kp and F10.7 Input"; not subject to copyright protection in the United States. ...observed data and ingests it into the IFM background ionosphere, which is highly dependent on Kp and F10.7. The Air Force Weather Agency typically uses a

  5. Mapping of polycrystalline films of biological fluids utilizing the Jones-matrix formalism

    NASA Astrophysics Data System (ADS)

    Ushenko, Vladimir A.; Dubolazov, Alexander V.; Pidkamin, Leonid Y.; Sakchnovsky, Michael Yu; Bodnar, Anna B.; Ushenko, Yuriy A.; Ushenko, Alexander G.; Bykov, Alexander; Meglinski, Igor

    2018-02-01

    Utilizing a polarized light approach, we reconstruct the spatial distribution of birefringence and optical activity in polycrystalline films of biological fluids. The Jones-matrix formalism is used for an accessible quantitative description of these types of optical anisotropy. We demonstrate that differentiation of polycrystalline films of biological fluids can be performed based on a statistical analysis of the distribution of rotation angles and phase shifts associated with the optical activity and birefringence, respectively. Finally, practical operational characteristics, such as sensitivity, specificity and accuracy of the Jones-matrix reconstruction of optical anisotropy, were identified with special emphasis on biomedical application, specifically for differentiation of bile films taken from healthy donors and from patients with cholelithiasis.

  6. Mueller-matrix mapping of biological tissues in differential diagnosis of optical anisotropy mechanisms of protein networks

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2015-03-01

    We report a model of Mueller-matrix description of optical anisotropy of protein networks in biological tissues with allowance for the linear birefringence and dichroism. The model is used to construct the reconstruction algorithms of coordinate distributions of phase shifts and the linear dichroism coefficient. In the statistical analysis of such distributions, we have found the objective criteria of differentiation between benign and malignant tissues of the female reproductive system. From the standpoint of evidence-based medicine, we have determined the operating characteristics (sensitivity, specificity and accuracy) of the Mueller-matrix reconstruction method of optical anisotropy parameters and demonstrated its effectiveness in the differentiation of benign and malignant tumours.

  7. Optimal reconstruction of historical water supply to a distribution system: A. Methodology.

    PubMed

    Aral, M M; Guan, J; Maslia, M L; Sautner, J B; Gillig, R E; Reyes, J J; Williams, R C

    2004-09-01

    The New Jersey Department of Health and Senior Services (NJDHSS), with support from the Agency for Toxic Substances and Disease Registry (ATSDR), conducted an epidemiological study of childhood leukaemia and nervous system cancers that occurred in the period 1979 through 1996 in Dover Township, Ocean County, New Jersey. The epidemiological study explored a wide variety of possible risk factors, including environmental exposures. ATSDR and NJDHSS determined that completed human exposure pathways to groundwater contaminants occurred in the past through private and community water supplies (i.e. the water distribution system serving the area). To investigate this exposure, a model of the water distribution system was developed and calibrated through an extensive field investigation. The components of this water distribution system, such as the number of pipes, tanks, and supply wells in the network, changed significantly over a 35-year period (1962-1996), the time frame established for the epidemiological study. Data on the historical management of this system were limited. Thus, it was necessary to investigate alternative ways to reconstruct the operation of the system and test the sensitivity of the system to various alternative operations. Manual reconstruction of the historical water supply to the system in order to provide this sensitivity analysis was time-consuming and labour-intensive, given the complexity of the system and the time constraints imposed on the study. To address these issues, the problem was formulated as an optimization problem, where it was assumed that the water distribution system was operated in an optimum manner at all times to satisfy the constraints in the system. The solution to the optimization problem provided the historical water supply strategy in a consistent manner for each month of the study period. The non-uniqueness of the selected historical water supply strategy was addressed by the formulation of a second model, which was based on the first solution. Numerous other sensitivity analyses were also conducted using these two models. Both models are solved using a two-stage progressive optimality algorithm along with genetic algorithms (GAs) and the EPANET2 water distribution network solver. This process reduced the required solution time and generated a historically consistent water supply strategy for the water distribution system.

  8. Gyroscopic sensing in the wings of the hawkmoth Manduca sexta: the role of sensor location and directional sensitivity.

    PubMed

    Hinson, Brian T; Morgansen, Kristi A

    2015-10-06

    The wings of the hawkmoth Manduca sexta are lined with mechanoreceptors called campaniform sensilla that encode wing deformations. During flight, the wings deform in response to a variety of stimuli, including inertial-elastic loads due to the wing flapping motion, aerodynamic loads, and exogenous inertial loads transmitted by disturbances. Because the wings are actuated, flexible structures, the strain-sensitive campaniform sensilla are capable of detecting inertial rotations and accelerations, allowing the wings to serve not only as a primary actuator, but also as a gyroscopic sensor for flight control. We study the gyroscopic sensing of the hawkmoth wings from a control theoretic perspective. Through the development of a low-order model of flexible wing flapping dynamics, and the use of nonlinear observability analysis, we show that the rotational acceleration inherent in wing flapping enables the wings to serve as gyroscopic sensors. We compute a measure of sensor fitness as a function of sensor location and directional sensitivity by using the simulation-based empirical observability Gramian. Our results indicate that gyroscopic information is encoded primarily through shear strain due to wing twisting, where inertial rotations cause detectable changes in pronation and supination timing and magnitude. We solve an observability-based optimal sensor placement problem to find the optimal configuration of strain sensor locations and directional sensitivities for detecting inertial rotations. The optimal sensor configuration shows parallels to the campaniform sensilla found on hawkmoth wings, with clusters of sensors near the wing root and wing tip. The optimal spatial distribution of strain directional sensitivity provides a hypothesis for how heterogeneity of campaniform sensilla may be distributed.
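
    The simulation-based empirical observability Gramian mentioned above can be written compactly: perturb each initial state by ±ε, simulate the sensor outputs, and accumulate W_o[i,j] = 1/(4ε²) Σ_t (y₊ᵢ − y₋ᵢ)ᵀ(y₊ⱼ − y₋ⱼ) Δt. The sketch below shows this construction; the damped-oscillator dynamics stand in for the flexible wing model, which is far richer.

    ```python
    import numpy as np

    def empirical_obs_gramian(simulate, x0, eps=1e-3, dt=1e-3):
        """simulate(x0) -> outputs of shape (n_steps, n_outputs)."""
        n = len(x0)
        dY = []
        for i in range(n):                    # perturb each state by +/- eps
            e = np.zeros(n); e[i] = eps
            dY.append(simulate(x0 + e) - simulate(x0 - e))
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                W[i, j] = np.sum(dY[i] * dY[j]) * dt / (4.0 * eps**2)
        return W

    def simulate(x0, n_steps=2000, dt=1e-3):
        """Stand-in dynamics: damped oscillator observed through position only."""
        A = np.array([[0.0, 1.0], [-4.0, -0.1]])
        x, out = np.array(x0, dtype=float), []
        for _ in range(n_steps):
            x = x + dt * (A @ x)              # forward-Euler step
            out.append([x[0]])
        return np.array(out)

    W = empirical_obs_gramian(simulate, np.zeros(2))
    # A sensor configuration can be scored by, e.g., the smallest eigenvalue
    # (or the trace) of W; placement is chosen to maximize such a measure.
    print(np.linalg.eigvalsh(W))
    ```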

  9. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform diagnostic meta-analyses. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used; the latter is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example; both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, the approach identifies three latent classes of diagnostic accuracy, in which sensitivities and specificities are quite different, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the identified accident sequences, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (a reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface methods and direct Monte Carlo simulation with Latin hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was then estimated from these two competing quantities, and the sensitivity of the human reliability estimate to each probability distribution was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
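
    The core of the reliability physics model described above, in which an error occurs when the operators' performance time exceeds the available phenomenological time, reduces to a one-line Monte Carlo estimate. Both distributions in the sketch below are assumed lognormal with invented parameters; the study derived its own from simulation and operator interviews.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 1_000_000

    # Assumed lognormal distributions with invented parameters:
    t_phen = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=N)  # minutes available
    t_perf = rng.lognormal(mean=np.log(18.0), sigma=0.6, size=N)  # minutes needed

    hep = np.mean(t_perf > t_phen)   # P(performance time > phenomenological time)
    print(f"estimated HEP ~ {hep:.4f}")
    ```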

  11. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
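
    The variance decomposition the authors describe can be illustrated with the Saltelli A/B/AB estimator of first-order Sobol indices, sketched below. The ABM is replaced by a cheap placeholder function, and plain random sampling is used instead of quasi-random sampling for brevity; only the estimator structure is the point.

    ```python
    import numpy as np

    def model(X):                             # placeholder for the ABM outcome
        return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]

    rng = np.random.default_rng(0)
    n, k = 50_000, 3
    A = rng.uniform(0.0, 1.0, (n, k))         # two independent sample matrices
    B = rng.uniform(0.0, 1.0, (n, k))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # A with column i taken from B
        S1 = np.mean(yB * (model(ABi) - yA)) / var_y  # first-order estimator
        print(f"input {i}: first-order Sobol index ~ {S1:.3f}")
    ```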

  13. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation, and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water provisioning module is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation), and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. The analysis shows that, in the case under consideration and within the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project. Copyright © 2012 Elsevier B.V. All rights reserved.
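
    The Morris screening used in this study can be sketched in a few lines: build r random one-at-a-time trajectories on a p-level grid, compute an elementary effect per input per trajectory, and summarize each input by mu* (mean absolute effect, importance) and sigma (spread, flagging nonlinearity or interaction). The response function below is an invented stand-in for the InVEST water provisioning module, with inputs normalized to [0, 1].

    ```python
    import numpy as np

    def model(x):                        # invented stand-in response
        Z, prec, eto = x
        return prec - eto * (1.0 / (1.0 + Z))

    rng = np.random.default_rng(0)
    k, r, p = 3, 50, 4                   # inputs, trajectories, grid levels
    delta = p / (2.0 * (p - 1))          # standard Morris step size

    EE = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, p - 1, size=k) / (p - 1)  # random start on the grid
        y = model(x)
        for i in rng.permutation(k):                  # one-at-a-time steps
            x_new = x.copy()
            x_new[i] = x[i] + delta if x[i] + delta <= 1 else x[i] - delta
            y_new = model(x_new)
            EE[t, i] = (y_new - y) / (x_new[i] - x[i])  # elementary effect
            x, y = x_new, y_new

    mu_star, sigma = np.abs(EE).mean(axis=0), EE.std(axis=0)
    print("mu*  :", mu_star)             # importance of each input
    print("sigma:", sigma)               # nonlinearity / interaction indicator
    ```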

  14. Predicting location-specific extreme coastal floods in the future climate by introducing a probabilistic method to calculate maximum elevation of the continuous water mass caused by a combination of water level variations and wind waves

    NASA Astrophysics Data System (ADS)

    Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu

    2017-04-01

    Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the joint effect of sea level variations and wind waves is one way to make flooding hazard analysis more comprehensive, and may at first seem like a straightforward task. Nevertheless, challenges and limitations, such as the availability of time series of the sea level and wave height components, the quality of the data, the significant locational variability of coastal wave height, and the assumptions that must be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for waves in coastal flooding hazard analysis more accurately than the common approach of adding a separate fixed wave action height on top of sea level-based flood risk estimates. As a result of our new method, we obtain maximum elevations, with different return periods, of the continuous water mass caused by a combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we learn how different wave height conditions and the shape of the wave height distribution influence the joint results. The method presented here can be used as an advanced tool to minimize over- and underestimation of the combined effect of sea level variations and wind waves, and to help coastal infrastructure planning and support the smooth and safe operation of coastal cities in a changing climate.
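
    One simple way to combine two location-specific distributions, assuming the sea level and wave run-up contributions are independent, is numerical convolution of their densities, from which exceedance probabilities of the total elevation follow. The distributions in the sketch below are invented, and the study's actual treatment of dependence between the components is not given in the record.

    ```python
    import numpy as np
    from scipy import stats

    h = np.linspace(-1.0, 5.0, 6001)      # elevation grid (m), step dh
    dh = h[1] - h[0]

    # Invented component densities (placeholders for the fitted distributions):
    pdf_sea  = stats.gumbel_r.pdf(h, loc=0.6, scale=0.25)   # sea level
    pdf_wave = stats.weibull_min.pdf(h, c=1.8, scale=0.5)   # wave run-up

    pdf_total = np.convolve(pdf_sea, pdf_wave) * dh  # density of the sum
    h_total = np.arange(len(pdf_total)) * dh + 2 * h[0]

    cdf_total = np.cumsum(pdf_total) * dh
    exceed = 1.0 - cdf_total                         # P(total elevation > h)
    level_1pct = h_total[np.searchsorted(-exceed, -0.01)]
    print(f"elevation exceeded with 1% probability: {level_1pct:.2f} m")
    ```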

  15. Combined evaluation of grazing incidence X-ray fluorescence and X-ray reflectivity data for improved profiling of ultra-shallow depth distributions

    PubMed Central

    Ingerle, D.; Meirer, F.; Pepponi, G.; Demenev, E.; Giubertoni, D.; Wobrauschek, P.; Streli, C.

    2014-01-01

    The continuous downscaling of the process size for semiconductor devices pushes the junction depths and consequentially the implantation depths to the top few nanometers of the Si substrate. This motivates the need for sensitive methods capable of analyzing dopant distribution, total dose and possible impurities. X-ray techniques utilizing the external reflection of X-rays are very surface sensitive, hence providing a non-destructive tool for process analysis and control. X-ray reflectometry (XRR) is an established technique for the characterization of single- and multi-layered thin film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the specular reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated to changes of the electron density in the sample, but does not provide direct information on the presence or distribution of chemical elements in the sample. Grazing Incidence XRF (GIXRF) measures the X-ray fluorescence induced by an X-ray beam incident under grazing angles. The resulting angle dependent intensity curves are correlated to the depth distribution and mass density of the elements in the sample. GIXRF provides information on contaminations, total implanted dose and to some extent on the depth of the dopant distribution, but is ambiguous with regard to the exact distribution function. Both techniques use similar measurement procedures and data evaluation strategies, i.e. optimization of a sample model by fitting measured and calculated angle curves. Moreover, the applied sample models can be derived from the same physical properties, like atomic scattering/form factors and elemental concentrations; a simultaneous analysis is therefore a straightforward approach. This combined analysis in turn reduces the uncertainties of the individual techniques, allowing a determination of dose and depth profile of the implanted elements with drastically increased confidence level. Silicon wafers implanted with Arsenic at different implantation energies were measured by XRR and GIXRF using a combined, simultaneous measurement and data evaluation procedure. The data were processed using a self-developed software package (JGIXA), designed for simultaneous fitting of GIXRF and XRR data. The results were compared with depth profiles obtained by Secondary Ion Mass Spectrometry (SIMS). PMID:25202165

  16. Epidemiology of pediatric nickel sensitivity: Retrospective review of North American Contact Dermatitis Group (NACDG) data 1994-2014.

    PubMed

    Warshaw, Erin M; Aschenbeck, Kelly A; DeKoven, Joel G; Maibach, Howard I; Taylor, James S; Sasseville, Denis; Belsito, Donald V; Fowler, Joseph F; Zug, Kathryn A; Zirwas, Matthew J; Fransway, Anthony F; DeLeo, Vincent A; Marks, James G; Pratt, Melanie D; Mathias, Toby

    2018-04-14

    Nickel is a common allergen responsible for allergic contact dermatitis. To characterize nickel sensitivity in children and compare pediatric cohorts (≤5, 6-12, and 13-18 years). Retrospective, cross-sectional analysis of 1894 pediatric patients patch tested by the North American Contact Dermatitis Group from 1994 to 2014. We evaluated demographics, rates of reaction to nickel, strength of nickel reactions, and nickel allergy sources. The frequency of nickel sensitivity was 23.7%. Children with nickel sensitivity were significantly less likely to be male (P < .0001; relative risk, 0.63; 95% confidence interval, 0.52-0.75) or have a history of allergic rhinitis (P = .0017; relative risk, 0.74; 95% confidence interval, 0.61-0.90) compared with those who were not nickel sensitive. In the nickel-sensitive cohort, the relative proportion of boys declined with age (44.8% for age ≤5, 36.6% for age 6-12, and 22.6% for age 13-18 years). The most common body site distribution for all age groups sensitive to nickel was scattered/generalized, indicating widespread dermatitis. Jewelry was the most common source associated with nickel sensitivity (36.4%). As a cross-sectional study, no long-term follow-up was available. Nickel sensitivity in children was common; the frequency was significantly higher in girls than in boys. Overall, sensitivity decreased with age. The most common source of nickel was jewelry. Published by Elsevier Inc.

  17. Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2003-01-01

    The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost-effective. The analysis leads to the selection of criteria for gas turbine performance.
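
    A hedged sketch of the workflow described above: a placeholder surrogate stands in for the cycle simulation, Monte Carlo sampling of the uncertain variables yields the empirical CDF of thermal efficiency, and simple correlation coefficients serve as sensitivity factors. All input names and statistics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical uncertain inputs (means and spreads are made up).
turbine_inlet_T = rng.normal(1300.0, 26.0, n)       # [K]
compressor_eff = rng.normal(0.85, 0.017, n)
fuel_cell_utilization = rng.normal(0.80, 0.024, n)

def thermal_efficiency(T, eta_c, Uf):
    # Placeholder surrogate for the cycle simulation, not the NASA model.
    return 0.25 + 1e-4 * (T - 1300) + 0.5 * (eta_c - 0.85) + 0.3 * (Uf - 0.80)

eta = thermal_efficiency(turbine_inlet_T, compressor_eff, fuel_cell_utilization)

# Empirical CDF of the output.
x = np.sort(eta)
cdf = np.arange(1, n + 1) / n
print("P(eta < 0.25) ≈", cdf[np.searchsorted(x, 0.25)])

# Correlation-based sensitivity factors to rank the inputs.
for name, v in [("T_inlet", turbine_inlet_T), ("eta_comp", compressor_eff),
                ("U_fuel", fuel_cell_utilization)]:
    print(name, round(np.corrcoef(v, eta)[0, 1], 3))
```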

  18. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are increasingly being used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) model across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.
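
    The directional-variogram idea underlying VARS can be illustrated in a few lines (this is not the full STAR-VARS sampling algorithm): for each parameter, half the mean squared response difference at perturbation scale h gives a variogram gamma_i(h), a sensitivity spectrum across scales. The model below is a toy stand-in for MESH.

```python
# Minimal illustration of the variogram idea behind VARS:
# gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] for each parameter i.
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Placeholder response surface standing in for the hydrologic model.
    return np.sin(3 * x[..., 0]) + 0.5 * x[..., 1] ** 2 + 0.1 * x[..., 2]

d, n_base, hs = 3, 2000, [0.05, 0.1, 0.2]
x = rng.uniform(0, 1, (n_base, d))

for i in range(d):
    gammas = []
    for h in hs:
        xp = x.copy()
        xp[:, i] = np.clip(xp[:, i] + h, 0, 1)   # perturb parameter i by h
        gammas.append(0.5 * np.mean((model(xp) - model(x)) ** 2))
    print(f"param {i}: gamma(h) = {np.round(gammas, 4)}")
```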

  19. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes, and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of the accuracy of (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
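
    For the brute force method mentioned above, a minimal sketch: the first-order index S_i = Var(E[Y|X_i]) / Var(Y), estimated by a double loop. A toy model stands in for the lifetime-energy-production calculation, and uniform inputs replace the study's Weibull-based variables.

```python
# Brute-force first-order Sobol index via a double loop: fix X_i, average
# the model over the remaining inputs, then take the variance of those
# conditional means relative to the total variance.
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # Placeholder: columns loosely stand for (scale, shape, availability).
    return x[..., 0] ** 2 + 0.5 * x[..., 1] + 0.1 * x[..., 2]

d, n_outer, n_inner = 3, 200, 200

def first_order_index(i):
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        xi = rng.uniform(0, 1)                 # fix X_i
        x = rng.uniform(0, 1, (n_inner, d))    # resample the other inputs
        x[:, i] = xi
        cond_means[k] = model(x).mean()        # E[Y | X_i = xi]
    y = model(rng.uniform(0, 1, (100_000, d)))
    return cond_means.var() / y.var()

for i in range(d):
    print(f"S_{i} ≈ {first_order_index(i):.2f}")
```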

  20. The < ln A > study with the Muon tracking detector in the KASCADE-Grande experiment - comparison of hadronic interaction models

    NASA Astrophysics Data System (ADS)

    Łuczak, P.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Curcio, C.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.

    2015-08-01

    With the KASCADE-Grande Muon Tracking Detector it was possible to measure with high accuracy directions of EAS muons with energy above 0.8 GeV and up to 700 m distance from the shower centre. Reconstructed muon tracks allow investigation of muon pseudorapidity (η) distributions. These distributions are nearly identical to the pseudorapidity distributions of their parent mesons produced in hadronic interactions. Comparison of the η distributions from measured and simulated showers can be used to test the quality of the high energy hadronic interaction models. The pseudorapidity distributions reflect the longitudinal development of EAS and, as such, are sensitive to the mass of the cosmic ray primary particles. With various parameters of the η distribution, obtained from the Muon Tracking Detector data, it is possible to calculate the average logarithm of mass of the primary cosmic ray particles. The results of the < ln A > analysis in the primary energy range 10¹⁶ eV-10¹⁷ eV with the 1st quartile and the mean value of the distributions will be presented for the QGSJet-II-2, QGSJet-II-4, EPOS 1.99 and EPOS LHC models in combination with the FLUKA model.

  1. Three-dimensional synaptic analyses of mitral cell and external tufted cell dendrites in rat olfactory bulb glomeruli.

    PubMed

    Bourne, Jennifer N; Schoppa, Nathan E

    2017-02-15

    Recent studies have suggested that the two excitatory cell classes of the mammalian olfactory bulb, the mitral cells (MCs) and tufted cells (TCs), differ markedly in physiological responses. For example, TCs are more sensitive and broadly tuned to odors than MCs and also are much more sensitive to stimulation of olfactory sensory neurons (OSNs) in bulb slices. To examine the morphological bases for these differences, we performed quantitative ultrastructural analyses of glomeruli in rat olfactory bulb under conditions in which specific cells were labeled with biocytin and 3,3'-diaminobenzidine. Comparisons were made between MCs and external TCs (eTCs), which are a TC subtype in the glomerular layer with large, direct OSN signals and capable of mediating feedforward excitation of MCs. Three-dimensional analysis of labeled apical dendrites under an electron microscope revealed that MCs and eTCs in fact have similar densities of several chemical synapse types, including OSN inputs. OSN synapses also were distributed similarly, favoring a distal localization on both cells. Analysis of unlabeled putative MC dendrites further revealed gap junctions distributed uniformly along the apical dendrite and, on average, proximally with respect to OSN synapses. Our results suggest that the greater sensitivity of eTCs vs. MCs is due not to OSN synapse number or absolute location but rather to a conductance in the MC dendrite that is well positioned to attenuate excitatory signals passing to the cell soma. Functionally, such a mechanism could allow rapid and dynamic control of OSN-driven action potential firing in MCs through changes in gap junction properties. J. Comp. Neurol. 525:592-609, 2017. © 2016 Wiley Periodicals, Inc.

  2. Efficacy of Radiative Transfer Model Across Space, Time and Hydro-climates

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Neelam, M.

    2017-12-01

    The efficiency of the radiative transfer model for improving soil moisture retrievals is not yet clearly understood over natural systems with great variability and heterogeneity with respect to soil, land cover, topography, precipitation, etc. However, this knowledge is important for directing and strategizing future research and field campaigns. In this work, we present a global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties on the radiative transfer model (RTM) and to quantify climate-soil-vegetation interactions. A framework is proposed to understand the soil moisture mechanisms underlying these interactions and the influence of these interactions on soil moisture retrieval accuracy. Soil moisture dynamics are observed to play a key role in the variability of these interactions, i.e., they enhance both the mean and the variance of soil-vegetation coupling. The analysis is conducted for different support scales (point scale, 800 m, 1.6 km, 3.2 km, 6.4 km, 12.8 km, and 36 km), seasonality (time), hydro-climates, aggregation (scaling) methods and across Level I and Level II ecoregions of the contiguous USA (CONUS). For undisturbed natural environments such as SGP'97 (Oklahoma, USA) and SMEX04 (Arizona, USA), the sensitivity of brightness temperature (TB) to land surface variables remains nearly uniform and is not influenced by extent, support scale or averaging method. On the contrary, for anthropogenically manipulated environments such as SMEX02 (Iowa, USA) and SMAPVEX12 (Winnipeg, Canada), the sensitivity to variables is highly influenced by the distribution of land surface heterogeneity and by upscaling methods. The climate-soil-vegetation interactions analyzed across all ecoregions are presented through a probability distribution function (PDF). The intensity of these interactions is categorized accordingly to yield "hotspots", where the RTM fails to retrieve soil moisture. An ecoregion-specific scaling function is proposed for these hotspots to rectify the RTM for retrieving soil moisture.

  3. A sensitivity analysis of low salinity habitats simulated by a hydrodynamic model in the Manatee River estuary in Florida, USA

    NASA Astrophysics Data System (ADS)

    Chen, XinJian

    2012-06-01

    This paper presents a sensitivity study of the simulated availability of low salinity habitats by a hydrodynamic model for the Manatee River estuary located in the southwest portion of the Florida peninsula. The purpose of the modeling study was to establish a regulatory minimum freshwater flow rate required to prevent the estuarine ecosystem from significant harm. The model used in the study was a multi-block model that dynamically couples a three-dimensional (3D) hydrodynamic model with a laterally averaged (2DV) hydrodynamic model. The model was calibrated and verified against measured real-time data of surface elevation and salinity at five stations during March 2005-July 2006. The calibrated model was then used to conduct a series of scenario runs to investigate effects of flow reduction on salinity distributions in the Manatee River estuary. Based on the simulated salinity distribution in the estuary, water volumes, bottom areas and shoreline lengths for salinity less than certain predefined values were calculated and analyzed to help establish the minimum freshwater flow rate for the estuarine system. The sensitivity analysis conducted during the modeling study for the Manatee River estuary examined effects of the bottom roughness, ambient vertical eddy viscosity/diffusivity, horizontal eddy viscosity/diffusivity, and ungauged flow on the model results and identified the relative importance of these model parameters (input data) to the simulated availability of low salinity habitats. It is found that the ambient vertical eddy viscosity/diffusivity is the most influential factor controlling the model outcome, while the horizontal eddy viscosity/diffusivity is the least influential one.
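
    The habitat metrics described above reduce to simple masked sums over the model grid; a toy example with a random salinity field and uniform cell volumes (all numbers invented):

```python
# Total water volume with salinity below a threshold, computed from a
# gridded salinity field. Grid size, cell volume and thresholds are made up.
import numpy as np

rng = np.random.default_rng(5)
salinity = rng.uniform(0, 30, (20, 50, 40))    # [psu], dimensions (z, y, x)
cell_volume = np.full(salinity.shape, 1.0e4)   # [m^3] per grid cell

for threshold in (2.0, 5.0, 10.0):             # low-salinity habitat classes
    volume = cell_volume[salinity < threshold].sum()
    print(f"volume with S < {threshold:>4} psu: {volume:.3e} m^3")
```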

  4. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The position of the nuclear elements is determined by the location of the gamma emitters. For existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA), neutrons are used with very low efficiency; in Fast Neutron Analysis (FNA), the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency of both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. As compared to PFNA, the improvement in neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement in gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system depends also on the nature and distribution of background events, and this comparison may somewhat reduce the effective sensitivity of CAFNA. They have performed analysis, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized by using CAFNA.
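
    A quick back-of-envelope check of the quoted efficiency gains for an object resolved into n × n × n voxels (n = 64 chosen arbitrarily):

```python
# Efficiency comparison from the abstract: neutron-utilization gain ~ n^2
# vs PFNA, gamma-imaging gain ~ n^2/2 vs FNA for a half-open coded aperture.
n = 64
print("neutron-utilization gain vs PFNA:", n**2)
print("gamma-imaging gain vs FNA (open fraction 1/2):", n**2 // 2)
```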

  5. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    NASA Technical Reports Server (NTRS)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  6. A comparison of experimental and theoretical results for leakage, pressure distribution, and rotordynamic coefficients for annular gas seals

    NASA Technical Reports Server (NTRS)

    Nicks, C. O.; Childs, D. W.

    1984-01-01

    The importance of seal behavior in rotordynamics is discussed and current annular seal theory is reviewed. Nelson's analytical-computational method for determining rotordynamic coefficients for this type of compressible-flow seal is outlined. Various means for the experimental identification of the dynamic coefficients are given, and the method employed at the Texas A and M University (TAMU) test facility is explained. The TAMU test apparatus is described, and the test procedures are discussed. Experimental results, including leakage, entrance-loss coefficients, pressure distributions, and rotordynamic coefficients for a smooth and a honeycomb constant-clearance seal are presented and compared to theoretical results from Nelson's analysis. The results for both seals show little sensitivity to the running speed over the test range. Agreement between test results and theory for leakage through the seal is satisfactory. Test results for direct stiffness show a greater sensitivity to fluid pre-rotation than predicted. Results also indicate that the deliberately roughened surface of the honeycomb seal provides improved stability versus the smooth seal.

  7. Sequential fuzzy diagnosis method for motor roller bearing in variable operating conditions based on vibration analysis.

    PubMed

    Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi

    2013-06-21

    A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous, and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameter (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well.
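
    A textbook discrete pseudo Wigner-Ville distribution can be sketched as follows (a generic construction, not the authors' code): the FFT over lag of z[n+m]·z*[n-m], computed from the analytic signal within a short smoothing window.

```python
# Pseudo Wigner-Ville distribution sketch: time-frequency energy map of a
# non-stationary signal, here tested on a chirp whose instantaneous
# frequency rises with time.
import numpy as np
from scipy.signal import hilbert

def pwvd(x, win=63):
    z = hilbert(x)                       # analytic signal
    N, half = len(z), win // 2
    W = np.zeros((N, win))
    for n in range(N):
        m = np.arange(-half, half + 1)
        m = m[(n + m >= 0) & (n + m < N) & (n - m >= 0) & (n - m < N)]
        kernel = np.zeros(win, dtype=complex)
        kernel[m + half] = z[n + m] * np.conj(z[n - m])
        W[n] = np.abs(np.fft.fft(kernel))
    return W

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * (20 * t + 40 * t ** 2))   # chirp test signal
W = pwvd(x)
# The dominant frequency bin should drift upward with time for the chirp.
print(W.shape, W.argmax(axis=1)[::64])
```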

  8. Uncertainty of High Intensity Therapeutic Ultrasound (HITU) Field Characterization with Hydrophones: Effects of Nonlinearity, Spatial Averaging, and Complex Sensitivity

    PubMed Central

    Liu, Yunbo; Wear, Keith A.; Harris, Gerald R.

    2017-01-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement uncertainty and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small PVDF capsule hydrophone and two different fiber-optic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain the pressure waveform. Compressional pressure, rarefactional pressure, and focal beam distribution were compared up to 10.6/−6.0 MPa (p+ and p−) (1.05 MHz) and 20.65/−7.20 MPa (3.3 MHz). In particular, the effects of spatial averaging, local nonlinear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed an uncertainty of no better than 10–15% in hydrophone-based HITU pressure characterization. PMID:28735734
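
    The complex deconvolution step reduces to a regularized spectral division; a minimal sketch with a made-up complex sensitivity M(f) (in practice M(f) comes from the hydrophone calibration):

```python
# Recover a pressure waveform by dividing the voltage spectrum by the
# frequency-dependent complex sensitivity M(f) [V/Pa]. Signal and M(f)
# below are synthetic placeholders.
import numpy as np

fs = 100e6                                   # sample rate [Hz]
t = np.arange(2048) / fs
v = 0.5 * np.sin(2 * np.pi * 1.05e6 * t) * np.exp(-((t - 10e-6) / 3e-6) ** 2)

f = np.fft.rfftfreq(t.size, 1 / fs)
# Hypothetical sensitivity: mild magnitude roll-off plus a phase lag.
M = 1e-7 / (1 + (f / 20e6) ** 2) * np.exp(-1j * 2 * np.pi * f * 5e-9)

eps = (1e-3 * np.abs(M).max()) ** 2          # regularizes division near |M| ~ 0
P = np.fft.rfft(v) * np.conj(M) / (np.abs(M) ** 2 + eps)
pressure = np.fft.irfft(P, n=t.size)         # pressure waveform [Pa]
print(f"p+ = {pressure.max()/1e6:.2f} MPa, p- = {pressure.min()/1e6:.2f} MPa")
```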

  9. Sequential Fuzzy Diagnosis Method for Motor Roller Bearing in Variable Operating Conditions Based on Vibration Analysis

    PubMed Central

    Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi

    2013-01-01

    A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous, and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameter (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well. PMID:23793021

  10. Measurement of Reconstructed Charged Particle Multiplicities of Neutrino Interactions in MicroBooNE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafique, Aleena

    2017-09-25

    Here, we compare the observed charged particle multiplicity distributions in the MicroBooNE liquid argon time projection chamber from neutrino interactions in a restricted final state phase space to predictions of this distribution from several GENIE models. The measurement uses a data sample consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2015-2016 with the Fermilab Booster Neutrino Beam (BNB), which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5×10¹⁹ protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to determine the contribution to each multiplicity bin from neutrino interactions and cosmic-induced backgrounds. The restricted phase space employed makes the measurement most sensitive to the higher-energy charged particles expected from primary neutrino-argon collisions and less sensitive to lower energy protons expected to be produced in final state interactions of collision products with the target argon nucleus.

  11. Effect of temperature on the performances and in situ polarization analysis of zinc-nickel single flow batteries

    NASA Astrophysics Data System (ADS)

    Cheng, Yuanhui; Zhang, Huamin; Lai, Qinzhi; Li, Xianfeng; Zheng, Qiong; Xi, Xiaoli; Ding, Cong

    2014-03-01

    The recently proposed high power density zinc-nickel single flow batteries (ZNBs) exhibit great potential for large-scale energy storage. Research into the temperature adaptability of ZNBs is urgently needed before practical utilization. Furthermore, clarifying their polarization distribution is essential to direct further improvement of battery performance. Here, we focus on the trends in the polarization distribution and the effect of temperature on the performance of ZNBs. The results show that ZNBs can operate in the temperature range from 0 °C to 40 °C with acceptable energy efficiency (53%-79.1%) at 80 mA cm⁻². The temperature sensitivities of the coulombic efficiency and energy efficiency are 0.65% °C⁻¹ and 0.98% °C⁻¹ at 0 °C-20 °C, respectively. The positive polarization is much larger than the negative polarization at all studied temperatures. The charge overpotential of the positive electrode is more sensitive to temperature. These results enable us to better evaluate the application prospects of ZNBs and point to a clear direction for further improving battery performance.

  12. Effects of assumed tow architecture on the predicted moduli and stresses in woven composites

    NASA Technical Reports Server (NTRS)

    Chapman, Clinton Dane

    1994-01-01

    This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The two methods studied are extrusion and translation. How sensitive this assumption is to changes in waviness ratio is also examined. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.

  13. Dynamics and density distributions in a capillary-discharge waveguide with an embedded supersonic jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matlis, N. H., E-mail: nmatlis@gmail.com; Gonsalves, A. J.; Steinke, S.

    We present an analysis of the gas dynamics and density distributions within a capillary-discharge waveguide with an embedded supersonic jet. This device provides a target for a laser plasma accelerator which uses longitudinal structuring of the gas-density profile to enable control of electron trapping and acceleration. The functionality of the device depends sensitively on the details of the density profile, which are determined by the interaction between the pulsed gas in the jet and the continuously-flowing gas in the capillary. These dynamics are captured by spatially resolving recombination light from several emission lines of the plasma as a function of the delay between the jet and the discharge. We provide a phenomenological description of the gas dynamics as well as a quantitative evaluation of the density evolution. In particular, we show that the pressure difference between the jet and the capillary defines three regimes of operation with qualitatively different longitudinal density profiles and show that jet timing provides a sensitive method for tuning between these regimes.

  14. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output, and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
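
    A minimal emulator sketch with scikit-learn, assuming a toy simulator in place of a weather model: fit a Gaussian process to a small design of "ensemble" runs, then propagate input uncertainty through the cheap emulator instead of the expensive simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    # Placeholder dynamics standing in for an expensive NWP model.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(11)
X_train = rng.uniform(0, 1, (40, 2))        # design covering the input space
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

# Probabilistic forecast: push samples from the input distribution through
# the emulator; the GP also reports its own approximation uncertainty.
X_mc = rng.normal(0.5, 0.1, (10_000, 2))
mean, std = gp.predict(X_mc, return_std=True)
print("forecast mean:", mean.mean(), "spread:", mean.std(),
      "emulator error:", std.mean())
```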

  15. Does Sensitivity to Magnitude Depend on the Temporal Distribution of Reinforcement?

    ERIC Educational Resources Information Center

    Grace, Randolph C.; Bragason, Orn

    2005-01-01

    Our research addressed the question of whether sensitivity to relative reinforcer magnitude in concurrent chains depends on the distribution of reinforcer delays when the terminal-link schedules are equal. In Experiment 1, 12 pigeons responded in a two-component procedure. In both components, the initial links were concurrent variable-interval 40…

  16. Ultra-high resolution, polarization sensitive transversal optical coherence tomography for structural analysis and strain mapping

    NASA Astrophysics Data System (ADS)

    Wiesauer, Karin; Pircher, Michael; Goetzinger, Erich; Hitzenberger, Christoph K.; Engelke, Rainer; Ahrens, Gisela; Pfeiffer, Karl; Ostrzinski, Ute; Gruetzner, Gabi; Oster, Reinhold; Stifter, David

    2006-02-01

    Optical coherence tomography (OCT) is a contactless and non-invasive technique applied nearly exclusively for bio-medical imaging of tissues. Besides the internal structure, strains within the sample can additionally be mapped when OCT is performed in a polarization-sensitive (PS) way. In this work, we demonstrate the benefits of PS-OCT imaging for non-biological applications. We have developed the OCT technique beyond the state-of-the-art: based on transversal ultra-high resolution (UHR-)OCT, where an axial resolution below 2 μm within materials is obtained using a femtosecond laser as light source, we have modified the setup for polarization-sensitive measurements (transversal UHR-PS-OCT). We perform structural analysis and strain mapping for different types of samples: for a highly strained elastomer specimen we demonstrate the necessity of UHR-imaging. Furthermore, we investigate epoxy waveguide structures, photoresist moulds for the fabrication of micro-electromechanical systems (MEMS), and the glass-fibre composite outer shell of helicopter rotor blades where cracks are present. For these examples, transversal scanning UHR-PS-OCT is shown to provide important information about the structural properties and the strain distribution within the samples.

  17. Establishing the Capability of a 1D SVAT Modelling Scheme in Predicting Key Biophysical Vegetation Characterisation Parameters

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah

    2015-04-01

    Sensitivity analysis (SA) is an integral and important validation check of a computer simulation model before it is used to perform any kind of analysis. In the present work, we present the results from an SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model utilising a cutting-edge and robust Global Sensitivity Analysis (GSA) approach, based on the use of the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely, the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance and Vegetation Height. The external CO2 concentration in the leaf and the O3 concentration in the air also exhibited a significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in the light of the rapidly expanding use of this model worldwide, which also includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere). Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.

  18. Modeling biomass gasification in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Miao, Qi

    In this thesis, the modeling of biomass gasification in circulating fluidized beds was studied. The hydrodynamics of a circulating fluidized bed operating on biomass particles were first investigated, both experimentally and numerically. Then a comprehensive mathematical model was presented to predict the overall performance of a 1.2 MWe biomass gasification and power generation plant. A sensitivity analysis was conducted to test its response to several gasifier operating conditions. The model was validated using the experimental results obtained from the plant and two other circulating fluidized bed biomass gasifiers (CFBBGs). Finally, an ASPEN PLUS simulation model of biomass gasification was presented based on minimization of the Gibbs free energy of the reaction system at chemical equilibrium. Hydrodynamics plays a crucial role in defining the performance of gas-solid circulating fluidized beds (CFBs). A 2-dimensional mathematical model was developed considering the hydrodynamic behavior of CFB gasifiers. In the modeling, the CFB riser was divided into two regions: a dense region at the bottom and a dilute region at the top of the riser. The model of Kunii and Levenspiel (1991) was adopted, with some additional assumptions, to express the vertical solids distribution. Radial distributions of bed voidage were taken into account in the upper zone using the correlation of Zhang et al. (1991). For model validation purposes, a cold model CFB was employed, in which sawdust was transported with air as the fluidizing agent. A comprehensive mathematical model was developed to predict the overall performance of a 1.2 MWe biomass gasification and power generation demonstration plant in China. Hydrodynamics as well as chemical reaction kinetics were considered. The fluidized bed riser was divided into two distinct sections: (a) a dense region at the bottom of the bed, where biomass undergoes mainly heterogeneous reactions, and (b) a dilute region at the top, where most of the homogeneous reactions occur in the gas phase. Each section was divided into a number of small cells, over which mass and energy balances were applied. Due to the high heating rate in the circulating fluidized bed, the pyrolysis was considered instantaneous. A number of homogeneous and heterogeneous reactions were considered in the model. Mass transfer resistance was considered negligible since the reactions were under kinetic control due to good gas-solid mixing. The model is capable of predicting the bed temperature distribution along the gasifier, the concentration and distribution of each species in the vertical direction of the bed, the composition and lower heating value (LHV) of the produced gas, the gasification efficiency, the overall carbon conversion and the produced gas production rate. A sensitivity analysis was performed to test its response to several gasifier operating conditions. The model sensitivity analysis showed that equivalence ratio (ER), bed temperature, fluidization velocity, biomass feed rate and moisture content had various effects on the gasifier performance. However, the model was more sensitive to variations in ER and bed temperature. The model was validated using the experimental results obtained from the demonstration plant. The reactor was operated on rice husk at various ERs, fluidization velocities and biomass feed rates. The model gave reasonable predictions.
The model was also validated by comparing the simulation results with those of two other CFBBGs of different sizes using different biomass feedstocks, and it was concluded that the developed model can be applied to other CFBBGs using various biomass fuels and having comparable reactor geometries. A thermodynamic model was developed under the ASPEN PLUS environment. Using the approach of Gibbs free energy minimization, the model was essentially independent of kinetic parameters. A sensitivity analysis was performed on the model to test its response to operating variables, including ER and biomass moisture content. The results showed that the ER has the most effect on the product gas composition and LHV. The simulation results were compared with the experimental data obtained from the demonstration plant. Keywords: Biomass gasification; Mathematical model; Circulating fluidized bed; Hydrodynamics; Kinetics; Sensitivity analysis; Validation; Equivalence ratio; Temperature; Feed rate; Moisture; Syngas composition; Lower heating value; Gasification efficiency; Carbon conversion
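
    The Gibbs-minimization approach mentioned above can be sketched compactly: minimize the dimensionless mixture Gibbs energy over species moles subject to elemental balances. The standard-state values and element inventory below are illustrative placeholders, not tabulated thermochemistry or the thesis's ASPEN PLUS setup.

```python
# Equilibrium by Gibbs free energy minimization: minimize
# G/RT = sum_i n_i * (g0_i + ln(n_i / n_total)) subject to A @ n = b.
import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "H2", "H2O", "CH4"]
# Rows: elements C, H, O; columns follow `species`.
A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 2, 2, 4],
              [1, 2, 0, 1, 0]], dtype=float)
b = np.array([1.0, 2.2, 1.4])                     # element moles from the feed
g0 = np.array([-10.0, -20.0, 0.0, -12.0, -5.0])   # mu0/RT placeholders

def gibbs(n):
    n = np.clip(n, 1e-10, None)
    return float(np.sum(n * (g0 + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(5, 0.2), method="SLSQP",
               bounds=[(1e-10, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
for s, n in zip(species, res.x):
    print(f"{s:>4}: {n:.3f} mol")
```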

  19. Maritime Transportation Risk Assessment of Tianjin Port with Bayesian Belief Networks.

    PubMed

    Zhang, Jinfen; Teixeira, Ângelo P; Guedes Soares, C; Yan, Xinping; Liu, Kezhong

    2016-06-01

    This article develops a Bayesian belief network model for the prediction of accident consequences in the Tianjin port. The study starts with a statistical analysis of six years of historical accident data, from 2008 to 2013. Then a Bayesian belief network is constructed to express the dependencies between the indicator variables and accident consequences. The statistics and expert knowledge are synthesized in the Bayesian belief network model to obtain the probability distribution of the consequences. Through a sensitivity analysis, several indicator variables that influence the consequences are identified, including navigational area, ship type and time of day. The results indicate that the consequences are most sensitive to the position where the accidents occurred, followed by time of day and ship length. The results also reflect that the navigational risk of the Tianjin port is at an acceptable level, although there is still room for improvement. These results can be used by the Maritime Safety Administration to take effective measures to enhance maritime safety in the Tianjin port. © 2016 Society for Risk Analysis.
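
    A hedged sketch of such a belief network using the pgmpy library (the variables, states and probability numbers below are invented for illustration and are not the Tianjin model; pgmpy names the class BayesianNetwork in recent versions):

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical indicators feeding one consequence node.
model = BayesianNetwork([("Area", "Consequence"), ("Time", "Consequence")])
cpd_area = TabularCPD("Area", 2, [[0.7], [0.3]])   # 0=channel, 1=anchorage
cpd_time = TabularCPD("Time", 2, [[0.6], [0.4]])   # 0=day, 1=night
cpd_cons = TabularCPD(
    "Consequence", 2,
    [[0.95, 0.85, 0.90, 0.70],    # P(minor  | Area, Time)
     [0.05, 0.15, 0.10, 0.30]],   # P(severe | Area, Time)
    evidence=["Area", "Time"], evidence_card=[2, 2])
model.add_cpds(cpd_area, cpd_time, cpd_cons)
assert model.check_model()

# Consequence distribution given a night-time accident in the anchorage.
posterior = VariableElimination(model).query(
    ["Consequence"], evidence={"Area": 1, "Time": 1})
print(posterior)
```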

  20. Impacts of Environmental Heterogeneity on Moss Diversity and Distribution of Didymodon (Pottiaceae) in Tibet, China.

    PubMed

    Song, Shanshan; Liu, Xuehua; Bai, Xueliang; Jiang, Yanbin; Zhang, Xianzhou; Yu, Chengqun; Shao, Xiaoming

    2015-01-01

    Tibet makes up the majority of the Qinghai-Tibet Plateau, often referred to as the roof of the world. Its complex landforms, physiognomy, and climate create a special heterogeneous environment for mosses. Each moss species inhabits its own habitat and ecological niche. This, in combination with its sensitivity to environmental change, makes moss species distribution a useful indicator of vegetation alteration and climate change. This study aimed to characterize the diversity and distribution of Didymodon (Pottiaceae) in Tibet, and model the potential distribution of its species. A total of 221 sample plots, each with a size of 10 × 10 m and located at different altitudes, were investigated across all vegetation types. Of these, the 181 plots in which Didymodon species were found were used to conduct analyses and modeling. Three noteworthy results were obtained. First, a total of 22 species of Didymodon were identified. Among these, Didymodon rigidulus var. subulatus had not previously been recorded in China, and Didymodon constrictus var. constrictus was the dominant species. Second, analysis of the relationships between species distributions and environmental factors using canonical correspondence analysis revealed that vegetation cover and altitude were the main factors affecting the distribution of Didymodon in Tibet. Third, based on the environmental factors of bioclimate, topography and vegetation, the distribution of Didymodon was predicted throughout Tibet at a spatial resolution of 1 km, using the presence-only MaxEnt model. Climatic variables were the key factors in the model. We conclude that the environment plays a significant role in moss diversity and distribution. Based on our research findings, we recommend that future studies should focus on the impacts of climate change on the distribution and conservation of Didymodon.

  1. Impacts of Environmental Heterogeneity on Moss Diversity and Distribution of Didymodon (Pottiaceae) in Tibet, China

    PubMed Central

    Song, Shanshan; Bai, Xueliang; Jiang, Yanbin; Zhang, Xianzhou; Yu, Chengqun

    2015-01-01

    Tibet makes up the majority of the Qinghai-Tibet Plateau, often referred to as the roof of the world. Its complex landforms, physiognomy, and climate create a special heterogeneous environment for mosses. Each moss species inhabits its own habitat and ecological niche. This, in combination with its sensitivity to environmental change, makes moss species distribution a useful indicator of vegetation alteration and climate change. This study aimed to characterize the diversity and distribution of Didymodon (Pottiaceae) in Tibet, and model the potential distribution of its species. A total of 221 sample plots, each with a size of 10 × 10 m and located at different altitudes, were investigated across all vegetation types. Of these, the 181 plots in which Didymodon species were found were used to conduct analyses and modeling. Three noteworthy results were obtained. First, a total of 22 species of Didymodon were identified. Among these, Didymodon rigidulus var. subulatus had not previously been recorded in China, and Didymodon constrictus var. constrictus was the dominant species. Second, analysis of the relationships between species distributions and environmental factors using canonical correspondence analysis revealed that vegetation cover and altitude were the main factors affecting the distribution of Didymodon in Tibet. Third, based on the environmental factors of bioclimate, topography and vegetation, the distribution of Didymodon was predicted throughout Tibet at a spatial resolution of 1 km, using the presence-only MaxEnt model. Climatic variables were the key factors in the model. We conclude that the environment plays a significant role in moss diversity and distribution. Based on our research findings, we recommend that future studies should focus on the impacts of climate change on the distribution and conservation of Didymodon. PMID:26181326

  2. Simulating smoke transport from wildland fires with a regional-scale air quality model: sensitivity to spatiotemporal allocation of fire emissions.

    PubMed

    Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T

    2014-09-15

    Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
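
    A minimal Gibbs sampler for a hierarchical normal model illustrates option 3: dataset-specific parameters theta_d are drawn from a shared N(mu, tau^2) whose hyperparameters are learned jointly across datasets. The data and priors are synthetic stand-ins for the plankton food-web setting.

```python
# Hierarchical normal model: y_dj ~ N(theta_d, sigma^2) with known sigma^2,
# theta_d ~ N(mu, tau^2), mu ~ N(0, 100), tau^2 ~ InvGamma(a0, b0).
import numpy as np

rng = np.random.default_rng(13)
sigma2 = 1.0
data = [rng.normal(th, 1.0, 30) for th in (0.5, 1.0, 1.8)]  # 3 "datasets"
D = len(data)

theta, mu, tau2 = np.zeros(D), 0.0, 1.0
a0, b0 = 2.0, 1.0
draws = []
for it in range(5000):
    for d, y in enumerate(data):                  # theta_d | mu, tau2, y_d
        prec = len(y) / sigma2 + 1.0 / tau2
        mean = (y.sum() / sigma2 + mu / tau2) / prec
        theta[d] = rng.normal(mean, prec ** -0.5)
    prec = D / tau2 + 1.0 / 100.0                 # mu | theta, tau2
    mu = rng.normal(theta.sum() / tau2 / prec, prec ** -0.5)
    # tau2 | theta, mu: inverse-gamma draw as b_n / Gamma(a_n, 1).
    tau2 = (b0 + 0.5 * np.sum((theta - mu) ** 2)) / rng.gamma(a0 + D / 2)
    if it >= 1000:
        draws.append((mu, tau2))

mu_s, tau2_s = np.array(draws).T
print(f"mu ≈ {mu_s.mean():.2f}, tau^2 ≈ {tau2_s.mean():.2f}")
```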

  4. Evaluating the quality of NMR structures by local density of protons.

    PubMed

    Ban, Yih-En Andrew; Rudolph, Johannes; Zhou, Pei; Edelsbrunner, Herbert

    2006-03-01

    Evaluating the quality of experimentally determined protein structural models is an essential step toward identifying potential errors and guiding further structural refinement. Herein, we report the use of proton local density as a sensitive measure to assess the quality of nuclear magnetic resonance (NMR) structures. Using 256 high-resolution crystal structures with protons added and optimized, we show that the local densities of different proton types display distinct distributions. These distributions can be characterized by statistical moments and are used to establish local density Z-scores for evaluating both global and local packing for individual protons. Analysis of 546 crystal structures at various resolutions shows that the local density Z-scores increase as the structural resolution decreases and correlate well with the ClashScore (Word et al. J Mol Biol 1999;285(4):1711-1733) generated by all-atom contact analysis. Local density Z-scores for NMR structures exhibit a significantly wider range of values than for X-ray structures and demonstrate a combination of potentially problematic inflation and compression. Water-refined NMR structures show improved packing quality. Our analysis of a high-quality structural ensemble of ubiquitin refined against order parameters shows proton density distributions that correlate nearly perfectly with our standards derived from crystal structures, further validating our approach. We present an automated analysis and visualization tool for proton packing to evaluate the quality of NMR structures. © 2005 Wiley-Liss, Inc.
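
    The core computation, a per-proton neighbor count standardized into a Z-score, can be sketched as follows; here the reference moments come from the sample itself, whereas the paper derives them from high-resolution crystal structures:

```python
# Local-density Z-scores: count neighbors within a radius for each proton,
# then standardize. Coordinates and the radius are hypothetical.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(17)
coords = rng.uniform(0, 30.0, (500, 3))       # mock proton coordinates [Å]

tree = cKDTree(coords)
radius = 4.0                                  # neighborhood radius [Å] (assumed)
counts = np.array([len(tree.query_ball_point(p, radius)) - 1 for p in coords])

z = (counts - counts.mean()) / counts.std()   # local density Z-scores
print("flagged protons (|Z| > 2.5):", np.where(np.abs(z) > 2.5)[0])
```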

  5. Distribution quantification on dermoscopy images for computer-assisted diagnosis of cutaneous melanomas.

    PubMed

    Liu, Zhao; Sun, Jiuai; Smith, Lyndon; Smith, Melvyn; Warr, Robert

    2012-05-01

    Computerised analysis of skin lesion images has been reported to be helpful in achieving objective and reproducible diagnosis of melanoma. In particular, asymmetry in shape, colour and structure reflects the irregular growth of melanin under the skin and is of great importance for diagnosing the malignancy of skin lesions. This paper proposes a novel asymmetry analysis based on a newly developed pigmentation elevation model and the global point signatures (GPSs). Specifically, the pigmentation elevation model was first constructed by computer-based analysis of dermoscopy images, for the identification of melanin and haemoglobin. Asymmetry of skin lesions was then assessed by quantifying distributions of the pigmentation elevation model using the GPSs, derived from a Laplace-Beltrami operator. This new approach allows quantifying the shape and pigmentation distributions of cutaneous lesions simultaneously. Algorithm performance was tested on 351 dermoscopy images, including 88 malignant melanomas and 263 benign naevi, employing a support vector machine (SVM) with a tenfold cross-validation strategy. Competitive diagnostic results were achieved using the proposed asymmetry descriptor only, presenting 86.36% sensitivity, 82.13% specificity and 83.43% overall accuracy. In addition, the proposed GPS-based asymmetry analysis enables working on dermoscopy images from different databases and proved to be inherently robust to external imaging variations. These advantages suggest that the proposed method has good potential for follow-up treatment.
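
    The evaluation protocol (an SVM with tenfold cross-validation) is standard and easy to sketch; the synthetic features below merely stand in for the GPS-based asymmetry descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(19)
X = np.vstack([rng.normal(0.0, 1, (263, 8)),    # mock "benign naevi" features
               rng.normal(0.8, 1, (88, 8))])    # mock "melanoma" features
y = np.r_[np.zeros(263), np.ones(88)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)      # tenfold cross-validation
print(f"10-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```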

  6. Ultra-short FBG based distributed sensing using shifted optical Gaussian filters and microwave-network analysis.

    PubMed

    Cheng, Rui; Xia, Li; Sima, Chaotan; Ran, Yanli; Rohollahnejad, Jalal; Zhou, Jiaao; Wen, Yongqiang; Yu, Can

    2016-02-08

    Ultrashort fiber Bragg gratings (US-FBGs) have significant potential as weak grating sensors for distributed sensing, but their exploitation has been limited by their inherently broad spectra, which are undesirable for most traditional wavelength measurements. To address this, we have recently introduced a new interrogation concept using shifted optical Gaussian filters (SOGF) which is well suited to US-FBG measurements. Here, we apply it to demonstrate, for the first time, a US-FBG-based self-referencing distributed optical sensing technique, with the advantages of adjustable sensitivity and range, high-speed and wide-range (potentially >14000 με) intensity-based detection, and resistance to disturbance by nonuniform parameter distribution. The entire system is essentially based on a microwave network, which incorporates the SOGF with a fiber delay-line between the two arms. Differential detections of the cascaded US-FBGs are performed individually in the network time-domain response, which can be obtained by analyzing its complex frequency response. Experimental results are presented and discussed using eight cascaded US-FBGs. A comprehensive numerical analysis is also conducted to assess the system performance, which shows that the use of US-FBGs instead of conventional weak FBGs could significantly improve the power budget and capacity of the distributed sensing system while maintaining the crosstalk level and intensity decay rate, providing a promising route for future sensing applications.
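
    The localization principle, inverse-transforming the measured complex frequency response so each grating appears at its round-trip delay, can be sketched with a synthetic three-grating response (delays and amplitudes invented):

```python
# Time-domain response from a complex frequency response: each grating shows
# up as a peak at its round-trip delay after an inverse FFT.
import numpy as np

f = np.arange(4000) * 5e5                     # modulation frequency grid [Hz]
delays = np.array([50e-9, 120e-9, 200e-9])    # round-trip delays of 3 gratings
amps = np.array([1.0, 0.8, 0.6])              # relative reflected intensities

H = (amps[:, None] * np.exp(-2j * np.pi * delays[:, None] * f)).sum(axis=0)

h = np.abs(np.fft.ifft(H))                    # network time-domain response
t = np.fft.fftfreq(f.size, d=5e5)             # reciprocal (time) axis [s]
top = np.argsort(h)[-3:]
print("recovered delays [ns]:", np.sort(t[top] * 1e9).round(1))
```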

  7. A PV view of the zonal mean distribution of temperature and wind in the extratropical troposphere

    NASA Technical Reports Server (NTRS)

    Sun, De-Zheng; Lindzen, Richard S.

    1994-01-01

    The dependence of the temperature and wind distribution of the zonal mean flow in the extratropical troposphere on the gradient of potential vorticity (PV) along isentropes is examined. The extratropics here refer to the region outside the Hadley circulation. Of particular interest is whether the distribution of temperature and wind corresponding to constant PV along isentropes resembles the observed one, and what PV homogenization along isentropes implies for the role of the tropics. With the assumption that PV is homogenized along isentropes, it is found that the temperature distribution in the extratropical troposphere may be determined by a linear, first-order partial differential equation. When the observed surface temperature distribution and tropical lapse rate are used as the boundary conditions, the solution of the equation is close to the observed temperature distribution except in the upper troposphere adjacent to the Hadley circulation, where the troposphere with no PV gradient is considerably colder. Consequently, the jet is also stronger. It is also found that the meridional distribution of the balanced zonal wind is very sensitive to the meridional distribution of the tropopause temperature. The result may suggest that the requirement of global momentum balance has no practical role in determining the extratropical temperature distribution. The authors further investigated the sensitivity of an extratropical troposphere with constant PV along isentropes to changes in conditions at the tropical boundary (the edge of the Hadley circulation). The temperature and wind distributions in the extratropical troposphere are found to be sensitive to the vertical distribution of PV at the tropical boundary. With a surface distribution of temperature that decreases linearly with latitude, the jet maximum occurs at the tropical boundary and moves with it. The overall pattern of the wind distribution is not sensitive to changes in the position of the tropical boundary. Finally, the temperature and wind distributions of an extratropical troposphere with a finite PV gradient are calculated. The larger the isentropic PV gradient, the warmer the troposphere and the weaker the jet.

  8. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
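
    The stratify-then-invert scheme described above is easy to sketch: divide [0, 1) into n equal-probability strata per variable, draw one uniform per stratum, shuffle the pairing between variables, and map each column through its input distribution's inverse CDF. SciPy's qmc module handles the stratification and pairing; the two input distributions below are arbitrary placeholders.

      import numpy as np
      from scipy.stats import qmc, norm, uniform

      sampler = qmc.LatinHypercube(d=2, seed=42)
      u = sampler.random(n=100)                        # stratified uniforms in [0, 1)^2

      # map each column through the inverse CDF of its input distribution
      x1 = norm(loc=0.0, scale=1.0).ppf(u[:, 0])       # e.g., a normal input
      x2 = uniform(loc=10.0, scale=5.0).ppf(u[:, 1])   # e.g., a uniform input on [10, 15]
      samples = np.column_stack([x1, x2])              # ready for repeated simulation runs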

  9. Analysis and Application of the Bi-Directional Scatter Distribution Function of Photonic Crystals

    DTIC Science & Technology

    2009-03-01

    [Abstract not recoverable; the indexed text consists of figure-list fragments describing the CASI instrument: a source box containing the beam path, a semi-reflective beam chopper that monitors variations in laser output and incident power, a scaling photodetector, a half-wave plate, and linear polarizers. The CASI is noted as insensitive to ambient light.]

  10. Tornado damage risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhold, T.A.; Ellingwood, B.

    1982-09-01

    Several proposed models were evaluated for predicting tornado wind speed probabilities at nuclear plant sites as part of a program to develop statistical data on tornadoes needed for probability-based load combination analysis. A unified model was developed which synthesized the desired aspects of tornado occurrence and damage potential. The sensitivity of wind speed probability estimates to various tornado modeling assumptions is examined, and the probability distributions of tornado wind speed that are needed for load combination studies are presented.

  11. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium.
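
    For readers wanting to reproduce this style of STARD accuracy reporting, the sketch below computes sensitivity, specificity, kappa, predictive values and a likelihood ratio from a 2x2 table; the counts are illustrative only, not the study's data.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      tp, fn, fp, tn = 14, 6, 3, 10           # hypothetical counts (reference vs test)

      sens = tp / (tp + fn)
      spec = tn / (tn + fp)
      ppv = tp / (tp + fp)                    # positive predictive value
      npv = tn / (tn + fn)                    # negative predictive value
      lr_pos = sens / (1 - spec)              # positive likelihood ratio

      # kappa from the expanded label vectors
      ref = np.array([1] * tp + [1] * fn + [0] * fp + [0] * tn)
      test = np.array([1] * tp + [0] * fn + [1] * fp + [0] * tn)
      print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f} "
            f"LR+={lr_pos:.1f} kappa={cohen_kappa_score(ref, test):.2f}")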

  12. Stochastic techno-economic analysis of alcohol-to-jet fuel production.

    PubMed

    Yao, Guolin; Staples, Mark D; Malina, Robert; Tyner, Wallace E

    2017-01-01

    Alcohol-to-jet (ATJ) is one of the technically feasible biofuel pathways. It produces jet fuel from sugary, starchy, and lignocellulosic biomass, such as sugarcane, corn grain, and switchgrass, via fermentation of sugars to ethanol or other alcohols. This study assesses the ATJ biofuel production pathway for these three feedstocks and advances existing techno-economic analyses of biofuels in three ways. First, we incorporate technical uncertainty for all by-products and co-products through statistical linkages between conversion efficiencies and input and output levels. Second, future price uncertainty is based on case-by-case time-series estimation, and a local sensitivity analysis is conducted with respect to each uncertain variable. Third, breakeven price distributions are developed to communicate the inherent uncertainty in the breakeven price. The analysis also considers uncertainties in utility input requirements, fuel and by-product outputs, and prices for all major inputs, products, and co-products, all from the perspective of a private firm. The stochastic dominance results for net present value (NPV) and breakeven price distributions show that sugarcane is the lowest-cost feedstock over the entire range of uncertainty with the least risk, followed by corn grain and switchgrass, with mean breakeven jet fuel prices of $0.96/L ($3.65/gal), $1.01/L ($3.84/gal), and $1.38/L ($5.21/gal), respectively. Sensitivity analyses show that technical uncertainty significantly impacts the breakeven price and NPV distributions and is therefore critical in determining the economic performance of the ATJ pathway; it needs to be considered in future economic analyses. The variation of revenues from by-products, particularly in the corn grain pathway, also plays a significant role in profitability. With the distribution of breakeven prices, potential investors can apply whatever risk preferences they like to determine an appropriate bid or breakeven price that matches their risk profile.
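
    A stripped-down version of the stochastic breakeven calculation might look like the following; the cost structure, distributions, discount rate and plant scale are invented for illustration and bear no relation to the study's actual model.

      import numpy as np

      rng = np.random.default_rng(2017)
      n, years, r = 20_000, 20, 0.10                     # draws, project life, discount rate
      annuity = (1 - (1 + r) ** -years) / r              # present value of $1/yr

      capex = rng.triangular(180e6, 200e6, 230e6, n)     # capital cost ($), hypothetical
      opex = rng.normal(45e6, 6e6, n)                    # annual operating cost ($/yr)
      fuel_out = rng.normal(150e6, 12e6, n)              # jet fuel output (L/yr)

      # breakeven price: the fuel price at which NPV = 0 for each draw
      breakeven = (capex / annuity + opex) / fuel_out    # $/L
      lo, hi = np.percentile(breakeven, [5, 95])
      print(f"mean ${breakeven.mean():.2f}/L, 90% interval ${lo:.2f}-${hi:.2f}/L")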

  13. Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.

    2008-12-01

    Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
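
    The ensemble importance-sampling step can be illustrated with a toy Gaussian likelihood: prior parameter draws are weighted by how well their simulated observables match the data, then resampled. The forward model, observation and error level below are all hypothetical stand-ins for the unsaturated-zone model and the moisture/chloride data.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 5000
      theta = rng.uniform(0.05, 0.50, n)       # prior draws of a soil parameter

      def forward(theta):
          return 1.0 / theta                   # toy stand-in for the land-surface model

      obs, sigma = 8.0, 1.0                    # hypothetical observation and error
      w = np.exp(-0.5 * ((forward(theta) - obs) / sigma) ** 2)
      w /= w.sum()                             # normalized importance weights

      posterior = rng.choice(theta, size=n, replace=True, p=w)
      print(f"prior mean {theta.mean():.3f} -> posterior mean {posterior.mean():.3f}")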

  14. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
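
    As a sketch of the fitting problem at the heart of this analysis, the snippet below estimates the parameters of the power-law recession model -dQ/dt = aQ^b for a single event by least squares in log space; the streamflow series and the derivative estimator are placeholders for exactly the methodological choices the paper evaluates.

      import numpy as np

      # hypothetical daily streamflow for one recession event (m^3/s)
      Q = np.array([12.0, 9.1, 7.3, 6.1, 5.2, 4.6, 4.1, 3.7, 3.4])

      dQdt = np.diff(Q)                     # one of several possible dQ/dt estimators
      Qmid = 0.5 * (Q[1:] + Q[:-1])         # flow at the interval midpoint
      keep = dQdt < 0                       # keep strictly receding steps only

      b, log_a = np.polyfit(np.log(Qmid[keep]), np.log(-dQdt[keep]), 1)
      print(f"a = {np.exp(log_a):.3f}, b = {b:.2f}")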

  15. A new approach for computing a flood vulnerability index using cluster analysis

    NASA Astrophysics Data System (ADS)

    Fernandez, Paulo; Mourato, Sandra; Moreira, Madalena; Pereira, Luísa

    2016-08-01

    A Flood Vulnerability Index (FloodVI) was developed using Principal Component Analysis (PCA) and a new aggregation method based on Cluster Analysis (CA). PCA simplifies a large number of variables into a few uncorrelated factors representing the social, economic, physical and environmental dimensions of vulnerability. CA groups areas that have the same vulnerability characteristics into vulnerability classes: the grouping of the areas determines their classification, contrary to other aggregation methods in which the areas' classification determines their grouping. While other aggregation methods distribute the areas into classes in an artificial manner, by imposing a certain probability for an area to belong to a certain class under the assumption that the aggregation measure used is normally distributed, CA does not constrain the distribution of the areas across the classes. FloodVI was designed at the neighbourhood level and was applied to the Portuguese municipality of Vila Nova de Gaia, where several flood events have taken place in the recent past. The FloodVI sensitivity was assessed using three alternative aggregation methods: the sum of component scores, the first component score, and the weighted sum of component scores. The results highlight the sensitivity of the FloodVI to the aggregation method. The sum of component scores and the weighted sum of component scores gave similar results; the first component score classifies almost all areas as having medium vulnerability; and the results obtained using CA show a distinct differentiation of vulnerability in which hot spots can be clearly identified. Records of previous flood events corroborate the CA results, because the inundated areas with greater damage are those identified as high and very high vulnerability areas by CA. This supports the conclusion that CA provides a reliable FloodVI.
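
    A minimal sketch of the PCA-plus-clustering aggregation might read as follows; the indicator matrix is random stand-in data, and four clusters are assumed for the vulnerability classes.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(16)
      X = rng.normal(size=(120, 10))   # 120 neighbourhoods x 10 vulnerability indicators

      scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
      classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
      # each neighbourhood's class is defined by its cluster, not by thresholding
      # an aggregate score into predefined intervals
      print(np.bincount(classes))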

  16. Pinning down the large- x gluon with NNLO top-quark pair differential distributions

    NASA Astrophysics Data System (ADS)

    Czakon, Michał; Hartland, Nathan P.; Mitov, Alexander; Nocera, Emanuele R.; Rojo, Juan

    2017-04-01

    Top-quark pair production at the LHC is directly sensitive to the gluon PDF at large x. While total cross-section data is already included in several PDF determinations, differential distributions are not, because the corresponding NNLO calculations have become available only recently. In this work we study the impact on the large- x gluon of top-quark pair differential distributions measured by ATLAS and CMS at √{s}=8 TeV. Our analysis, performed in the NNPDF3.0 framework at NNLO accuracy, allows us to identify the optimal combination of LHC top-quark pair measurements that maximize the constraints on the gluon, as well as to assess the compatibility between ATLAS and CMS data. We find that differential distributions from top-quark pair production provide significant constraints on the large- x gluon, comparable to those obtained from inclusive jet production data, and thus should become an important ingredient for the next generation of global PDF fits.

  17. A Comparative Study of Distribution System Parameter Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; estimating parameters is therefore much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
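
    State-vector augmentation can be illustrated on the simplest possible distribution-line example: an unknown series resistance treated as a random-walk state, updated from voltage-drop measurements across combined snapshots. The numbers are invented and the model is far simpler than a real feeder.

      import numpy as np

      rng = np.random.default_rng(3)
      r_true = 0.42                              # unknown line resistance (ohm)
      i = rng.uniform(50, 150, 200)              # measured currents over many snapshots (A)
      v = r_true * i + rng.normal(0, 1.0, 200)   # noisy voltage-drop measurements (V)

      # Kalman filter on the augmented (here, parameter-only) state: r_k = r_{k-1} + w_k
      r_hat, P, q, R = 1.0, 1.0, 1e-6, 1.0 ** 2
      for ik, vk in zip(i, v):
          P += q                                 # predict: random-walk parameter model
          K = P * ik / (ik * P * ik + R)         # gain, with time-varying H_k = [i_k]
          r_hat += K * (vk - ik * r_hat)         # update from the measurement residual
          P = (1 - K * ik) * P
      print(f"estimated r = {r_hat:.3f} ohm")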

  18. Spatial analysis of extension fracture systems: A process modeling approach

    USGS Publications Warehouse

    Ferguson, C.C.

    1985-01-01

    Little consensus exists on how best to analyze natural fracture spacings and their sequences. Field measurements and analyses published in the geotechnical literature imply fracture processes radically different from those assumed by theoretical structural geologists. The approach adopted in this paper recognizes that disruption of rock layers by layer-parallel extension results in two spacing distributions, one representing layer-fragment lengths and another the separation distances between fragments. These two distributions and their sequences reflect the mechanics and history of fracture and separation. Such distributions and sequences, represented by a 2 × n matrix of lengths L, can be analyzed using a method that is history-sensitive and also yields a scalar estimate of bulk extension, e(L). The method is illustrated by a series of Monte Carlo experiments representing a variety of fracture-and-separation processes, each with distinct implications for extension history. The resulting distributions of e(L) are process-specific, suggesting that the inverse problem of deducing fracture-and-separation history from final structure may be tractable.
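
    The 2 × n length matrix and the scalar extension estimate can be mimicked with a toy Monte Carlo experiment: e(L) is the total separation divided by the total fragment length, computed over repeated realizations of a fracture-and-separation process. The two fragment/gap distributions below are arbitrary stand-ins for the processes the paper simulates.

      import numpy as np

      rng = np.random.default_rng(1985)

      def extension(frag_sampler, gap_sampler, n_frag=50, n_rep=2000):
          e = np.empty(n_rep)
          for k in range(n_rep):
              frags = frag_sampler(n_frag)      # row 1 of L: layer-fragment lengths
              gaps = gap_sampler(n_frag - 1)    # row 2 of L: separations between fragments
              e[k] = gaps.sum() / frags.sum()   # bulk extension estimate e(L)
          return e

      e_a = extension(lambda n: rng.exponential(1.0, n), lambda n: rng.exponential(0.2, n))
      e_b = extension(lambda n: rng.lognormal(0.0, 0.5, n), lambda n: rng.exponential(0.2, n))
      print(f"process A: e = {e_a.mean():.3f} +/- {e_a.std():.3f}")
      print(f"process B: e = {e_b.mean():.3f} +/- {e_b.std():.3f}")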

  19. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate how the image reconstruction algorithm and the interval of measurement points affect the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
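
    The mapping/reconstruction contrast can be sketched in one dimension: each measurement is the inner product of the absorption change with a spatial sensitivity profile; mapping back-projects the signals onto those profiles, while reconstruction inverts the profile matrix with regularization. The Gaussian profiles below are crude stand-ins for the Monte Carlo profiles used in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      n_meas, n_vox = 12, 60
      x = np.arange(n_vox, dtype=float)                 # 1-D position grid (mm)

      centres = np.linspace(2, 57, n_meas)              # source-detector pair centres
      A = np.exp(-0.5 * ((x[None, :] - centres[:, None]) / 6.0) ** 2)

      true = np.exp(-0.5 * ((x - 30) / 3.0) ** 2)       # localized absorption change
      y = A @ true + rng.normal(0, 0.05, n_meas)        # simulated measurements

      mapped = A.T @ y                                  # mapping: back-projection
      lam = 0.1                                         # reconstruction: Tikhonov inverse
      recon = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)
      print(f"peak: true {true.argmax()}, mapped {mapped.argmax()}, recon {recon.argmax()}")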

  20. Inter-patient image registration algorithms to disentangle regional dose bioeffects.

    PubMed

    Monti, Serena; Pacelli, Roberto; Cella, Laura; Palma, Giuseppe

    2018-03-20

    Radiation therapy (RT) technological advances call for a comprehensive reconsideration of the definition of dose features leading to radiation-induced morbidity (RIM). In this context, the voxel-based approach (VBA) to dose distribution analysis in RT offers a radically new philosophy for evaluating local dose-response patterns, as an alternative to dose-volume histograms for identifying dose-sensitive regions of normal tissue. The VBA relies on mapping patient dose distributions into a single reference-case anatomy, which serves as an anchor for local dosimetric evaluations. The inter-patient elastic image registrations (EIRs) of the planning CTs provide the deformation fields necessary for the actual warp of dose distributions. In this study we assessed the impact of EIR on the VBA results in thoracic patients by identifying two state-of-the-art EIR algorithms (Demons and B-Spline). Our analysis demonstrated that both EIR algorithms may be successfully used to highlight substantially overlapping subregions with dose differences associated with RIM. Furthermore, the inclusion, for the first time, of covariates within a dosimetric statistical model that faces the multiple-comparison problem expands the potential of the VBA, thus paving the way to a reliable voxel-based analysis of RIM in datasets with strong correlation of the outcome with non-dosimetric variables.

  1. Theoretical Study of Monolayer and Double-Layer Waveguide Love Wave Sensors for Achieving High Sensitivity.

    PubMed

    Li, Shuangming; Wan, Ying; Fan, Chunhai; Su, Yan

    2017-03-22

    Love wave sensors have been widely used for sensing applications. In this work, we introduce the theoretical analysis of the monolayer and double-layer waveguide Love wave sensors. The velocity, particle displacement and energy distribution of Love waves were analyzed. Using the variations of the energy repartition, the sensitivity coefficients of Love wave sensors were calculated. To achieve a higher sensitivity coefficient, a thin gold layer was added as the second waveguide on top of the silicon dioxide (SiO₂) waveguide-based, 36 degree-rotated, Y-cut, X-propagating lithium tantalate (36° YX LiTaO₃) Love wave sensor. The Love wave velocity was significantly reduced by the added gold layer, and the flow of wave energy into the waveguide layer from the substrate was enhanced. By using the double-layer structure, almost a 72-fold enhancement in the sensitivity coefficient was achieved compared to the monolayer structure. Additionally, the thickness of the SiO₂ layer was also reduced with the application of the gold layer, resulting in easier device fabrication. This study allows for the possibility of designing and realizing robust Love wave sensors with high sensitivity and a low limit of detection.

  2. Behavioral profiles of feline breeds in Japan.

    PubMed

    Takeuchi, Yukari; Mori, Yuji

    2009-08-01

    To clarify the behavioral profiles of 9 feline purebreds, 2 Persian subbreeds and the Japanese domestic cat, a questionnaire survey was distributed to 67 small-animal veterinarians. We found significant differences among breeds in all behavioral traits examined except for "inappropriate elimination". In addition, sexual differences were observed in certain behaviors, including "aggression toward cats", "general activity", "novelty-seeking", and "excitability". These behaviors were more common in males than females, whereas "nervousness" and "inappropriate elimination" were rated higher in females. When all breeds were categorized into four groups on the basis of a cluster analysis using the scores of two behavioral trait factors called "aggressiveness/sensitivity" and "vivaciousness", the group including Abyssinian, Russian Blue, Somali, Siamese, and Chinchilla breeds showed high aggressiveness/sensitivity and low vivaciousness. In contrast, the group including the American Shorthair and Japanese domestic cat displayed low aggressiveness/sensitivity and high vivaciousness, and the Himalayan and Persian group showed mild aggressiveness/sensitivity and very low vivaciousness. Finally, the group containing Maine Coon, Ragdoll, and Scottish Fold breeds displayed very low aggressiveness/sensitivity and low vivaciousness. The present results demonstrate that some feline behavioral traits vary by breed and/or sex.

  3. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
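
    For a continuous step input, the leading term of the Ogata-Banks solution gives a closed form on which the velocity and dispersion sensitivities discussed above can be approximated by central finite differences; the parameter values below are arbitrary.

      import numpy as np
      from scipy.special import erfc

      def conc(x, t, v, D, c0=1.0):
          # leading term of the Ogata-Banks solution for a step input
          return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

      x, t = 50.0, np.linspace(1.0, 200.0, 400)   # observation point and times
      v, D = 0.5, 0.8                             # velocity, dispersion coefficient

      dv, dD = 1e-4, 1e-4
      sens_v = (conc(x, t, v + dv, D) - conc(x, t, v - dv, D)) / (2 * dv)
      sens_D = (conc(x, t, v, D + dD) - conc(x, t, v, D - dD)) / (2 * dD)
      print(f"max |dC/dv| = {np.abs(sens_v).max():.2f}, "
            f"max |dC/dD| = {np.abs(sens_D).max():.2f}")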

  4. MMASS: an optimized array-based method for assessing CpG island methylation.

    PubMed

    Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D

    2006-01-01

    We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS) which directly compares methylated to unmethylated sequences within a single sample. To improve previous methods we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization and that analysis of digestion of methylated and unmethylated control sequences together with linear fit models of replicate data showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.

  5. Uniwavelength lidar sensitivity to spherical aerosol microphysical properties for the interpretation of Lagrangian stratospheric observations

    NASA Astrophysics Data System (ADS)

    Jumelet, Julien; David, Christine; Bekki, Slimane; Keckhut, Philippe

    2009-01-01

    The determination of stratospheric particle microphysical properties from multiwavelength lidar, including Rayleigh and/or Raman detection, has been widely investigated. However, most lidar systems are uniwavelength, operating at 532 nm. Although the information content of such lidar data is too limited to allow the retrieval of the full size distribution, the coupling of two or more uniwavelength lidar measurements probing the same moving air parcel may provide some meaningful size information. Within the ORACLE-O3 IPY project, the coordination of several ground-based lidars and the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) space-borne lidar is planned during measurement campaigns called MATCH-PSC (Polar Stratospheric Clouds). While probing the same moving air masses, the evolution of the measured backscatter coefficient (BC) should reflect the variation of particle microphysical properties. A sensitivity study of the 532 nm lidar particle backscatter to variations of the particle size distribution parameters is carried out. For simplicity, the particles are assumed to be spherical (liquid) and the size distribution is represented by a unimodal log-normal distribution. Each of the four microphysical parameters (the three log-normal size distribution parameters and the refractive index) is analysed separately, while the other three are held at constant reference values. Overall, the BC behaviour is not affected by the initial values taken as references. The total concentration (N0) is the parameter to which BC is least sensitive, whereas it is most sensitive to the refractive index (m). A 2% variation of m induces a 15% variation of the lidar BC, while the uncertainty on the BC retrieval can also reach 15%. This result underlines the importance of having both an accurate lidar inversion method and a good knowledge of the temperature for size distribution retrieval techniques. The standard deviation (σ) is the second parameter to which BC is most sensitive. Yet the impact of m and σ on BC variations is limited by the realistic range of their variations. The mean radius (rm) of the size distribution is thus the key parameter for BC, as it can vary several-fold. BC is most sensitive to the presence of large particles. The sensitivity of BC to rm and σ variations increases when the initial size distributions are characterized by low rm and large σ. This makes lidar more suitable for detecting particles growing on background aerosols than on volcanic aerosols.
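
    The parameter-perturbation exercise can be mimicked with a toy backscatter proxy; the r**2 kernel below is a crude geometric stand-in for the 532 nm Mie kernel (so the refractive index does not appear), and the reference parameter values are arbitrary.

      import numpy as np

      def lognormal_n(r, N0, rm, sigma):
          # unimodal log-normal number size distribution n(r)
          return (N0 / (np.sqrt(2 * np.pi) * np.log(sigma) * r)
                  * np.exp(-0.5 * (np.log(r / rm) / np.log(sigma)) ** 2))

      def bc_proxy(N0, rm, sigma):
          r = np.logspace(-3, 1, 2000)            # radius grid (micrometres)
          return np.trapz(lognormal_n(r, N0, rm, sigma) * r ** 2, r)

      base = dict(N0=10.0, rm=0.08, sigma=1.6)    # reference values, arbitrary
      for p in base:
          hi = dict(base)
          hi[p] *= 1.02                           # +2% perturbation of one parameter
          dbc = bc_proxy(**hi) / bc_proxy(**base) - 1
          print(f"{p}: +2% -> BC proxy change {100 * dbc:+.1f}%")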

  6. Separating foliar physiology from morphology reveals the relative roles of vertically structured transpiration factors within red maple crowns and limitations of larger scale models

    PubMed Central

    Bauerle, William L.; Bowden, Joseph D.

    2011-01-01

    A spatially explicit mechanistic model, MAESTRA, was used to separate key parameters affecting transpiration to provide insights into the most influential parameters for accurate predictions of within-crown and within-canopy transpiration. Once validated among Acer rubrum L. genotypes, model responses to different parameterization scenarios were scaled up to stand transpiration (expressed per unit leaf area) to assess how transpiration might be affected by the spatial distribution of foliage properties. For example, when physiological differences were accounted for, differences in leaf width among A. rubrum L. genotypes resulted in a 25% difference in transpiration. An in silico within-canopy sensitivity analysis was conducted over the range of genotype parameter variation observed and under different climate forcing conditions. The analysis revealed that seven of 16 leaf traits had a ≥5% impact on transpiration predictions. Under sparse foliage conditions, comparisons of the present findings with previous studies were in agreement that parameters such as the maximum Rubisco-limited rate of photosynthesis can explain ∼20% of the variability in predicted transpiration. However, the spatial analysis shows how such parameters can decrease or change in importance below the uppermost canopy layer. Alternatively, model sensitivity to leaf width and minimum stomatal conductance was continuous along a vertical canopy depth profile. Foremost, transpiration sensitivity to an observed range of morphological and physiological parameters is examined and the spatial sensitivity of transpiration model predictions to vertical variations in microclimate and foliage density is identified to reduce the uncertainty of current transpiration predictions.

  7. Scanning laser densitometry and color perimetry demonstrate reduced photopigment density and sensitivity in two patients with retinal degeneration.

    PubMed

    Tornow, R P; Stilling, R; Zrenner, E

    1999-10-01

    To test the feasibility of scanning laser densitometry with a modified Rodenstock scanning laser ophthalmoscope (SLO) to measure the rod and cone photopigment distribution in patients with retinal diseases. Scanning laser densitometry was performed using a modified Rodenstock scanning laser ophthalmoscope. The distribution of the photopigments was calculated from dark adapted and bleached images taken with the 514 nm laser of the SLO. This wavelength is absorbed by rod and cone photopigments. Discrimination is possible due to their different spatial distribution. Additionally, to measure retinal sensitivity profiles, dark adapted two color static perimetry with a Tübinger manual perimeter was performed along the horizontal meridian with 1 degree spacing. A patient with retinitis pigmentosa had slightly reduced photopigment density within the central +/- 5 degrees but no detectable photopigment for eccentricities beyond 5 degrees. A patient with cone dystrophy had nearly normal pigment density beyond +/- 5 degrees, but considerably reduced photopigment density within the central +/- 5 degrees. Within the central +/- 5 degrees, the patient with retinitis pigmentosa had normal sensitivity for the red stimulus and reduced sensitivity for the green stimulus. There was no measurable function beyond 7 degrees. The patient with cone dystrophy had normal sensitivity for the green stimulus outside the foveal center and reduced sensitivity for the red stimulus at the foveal center. The results of color perimetry for this patient with a central scotoma were probably influenced by eccentric fixation. Scanning laser densitometry with a modified Rodenstock SLO is a useful method to assess the human photopigment distribution. Densitometry results were confirmed by dark adapted two color static perimetry. Photopigment distribution and retinal sensitivity profiles can be measured with high spatial resolution. This may help to measure exactly the temporal development of retinal diseases and to test the success of different therapeutic treatments. Both methods have limitations at the present state of development. However, some of these limitations can be overcome by further improving the instruments.

  8. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using the percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most of the analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of the goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
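
    The proposed remedy, deriving percentage points by Monte Carlo from the fitted model (with the same rounding applied) rather than from continuous-variable tables, can be outlined as follows for the Anderson-Darling statistic; the sample and the rounding step are illustrative.

      import numpy as np
      from scipy.stats import genpareto

      def anderson_darling(x, dist):
          x = np.sort(x)
          n = len(x)
          F = np.clip(dist.cdf(x), 1e-12, 1 - 1e-12)
          i = np.arange(1, n + 1)
          return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

      rng = np.random.default_rng(0)
      data = np.round(genpareto.rvs(0.1, scale=12.0, size=300, random_state=rng))

      c, _, s = genpareto.fit(data, floc=0.0)
      a2_obs = anderson_darling(data, genpareto(c, loc=0.0, scale=s))

      # null distribution of the statistic under the fitted model, same rounding
      a2_sim = []
      for _ in range(500):
          sim = np.round(genpareto.rvs(c, scale=s, size=len(data), random_state=rng))
          cs, _, ss = genpareto.fit(sim, floc=0.0)
          a2_sim.append(anderson_darling(sim, genpareto(cs, loc=0.0, scale=ss)))
      print(f"A2 = {a2_obs:.3f}, Monte Carlo p = {np.mean(np.array(a2_sim) >= a2_obs):.2f}")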

  9. Binaural sensitivity changes between cortical on and off responses

    PubMed Central

    Dahmen, Johannes C.; King, Andrew J.; Schnupp, Jan W. H.

    2011-01-01

    Neurons exhibiting on and off responses with different frequency tuning have previously been described in the primary auditory cortex (A1) of anesthetized and awake animals, but it is unknown whether other tuning properties, including sensitivity to binaural localization cues, also differ between on and off responses. We measured the sensitivity of A1 neurons in anesthetized ferrets to 1) interaural level differences (ILDs), using unmodulated broadband noise with varying ILDs and average binaural levels, and 2) interaural time delays (ITDs), using sinusoidally amplitude-modulated broadband noise with varying envelope ITDs. We also assessed fine-structure ITD sensitivity and frequency tuning, using pure-tone stimuli. Neurons most commonly responded to stimulus onset only, but purely off responses and on-off responses were also recorded. Of the units exhibiting significant binaural sensitivity nearly one-quarter showed binaural sensitivity in both on and off responses, but in almost all (∼97%) of these units the binaural tuning of the on responses differed significantly from that seen in the off responses. Moreover, averaged, normalized ILD and ITD tuning curves calculated from all units showing significant sensitivity to binaural cues indicated that on and off responses displayed different sensitivity patterns across the population. A principal component analysis of ITD response functions suggested a continuous cortical distribution of binaural sensitivity, rather than discrete response classes. Rather than reflecting a release from inhibition without any functional significance, we propose that binaural off responses may be important to cortical encoding of sound-source location.

  10. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
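
    The near-linear speed-up can be sanity-checked with a toy work-splitting harness; the scoring function is a trivial stand-in for a real aligner, and the observed speed-up will depend on core count and chunk sizes.

      import time
      from multiprocessing import Pool

      def align_chunk(reads):
          # toy stand-in for an aligner: score each read against a reference motif
          ref = "ACGTACGTAA"
          return [sum(a == b for a, b in zip(r, ref)) for r in reads]

      if __name__ == "__main__":
          reads = ["ACGTACGTAA"] * 200_000
          chunks = [reads[i::8] for i in range(8)]        # 8 fixed work units
          for n in (1, 2, 4, 8):
              t0 = time.perf_counter()
              with Pool(n) as pool:
                  pool.map(align_chunk, chunks)
              print(f"{n} worker(s): {time.perf_counter() - t0:.2f} s")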

  11. Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weitzel, E.; Hoeschele, M.

    2014-09-01

    A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use, and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk-and-branch system, sees the most improvement in energy consumption from insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk-and-branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time-efficient) distribution models for annual whole-house simulation programs.

  12. [Element distribution analysis of welded fusion zone by laser-induced breakdown spectroscopy].

    PubMed

    Yang, Chun; Zhang, Yong; Jia, Yun-Hai; Wang, Hai-Zhou

    2014-04-01

    Over the past decade there has been intense activity in the study and development of laser-induced breakdown spectroscopy (LIBS). As a new tool for surface microanalysis, it has attracted widespread interest in materials science because of its rapidity and high sensitivity. In the present paper, the distributions of Ni, Mn, C and Si near the weld fusion line were analyzed on two kinds of weld sample. Line-scanning analysis was carried out by three different methods: laser-induced breakdown spectroscopy (LIBS), scanning electron microscopy/energy dispersive spectrometry (SEM/EDS) and electron probe X-ray microanalysis (EPMA). The concentration trends of Ni and Mn acquired by LIBS agree with SEM/EDS and EPMA. The results show that the contents of Ni and Mn differ significantly between the weld seam and the base metal in both samples: both elements are much more abundant in the weld seam than in the base metal, with a sharp concentration gradient across the fusion zone. According to the distributions of Ni and Mn, all three methods gave a similar value for the width of the welded fusion zone. The concentration trends of C and Si acquired by LIBS do not agree with SEM/EDS and EPMA: the concentration difference between weld seam and base metal was detected by LIBS but not by SEM/EDS or EPMA, because of the low concentrations and slight differences involved, and the concentration gradients of C and Si in the fusion zone were shown clearly by LIBS. Owing to its higher sensitivity, LIBS is much better suited than SEM/EDS and EPMA to analyzing low-content elements.

  13. Development, application, and sensitivity analysis of a water quality index for drinking water management in small systems.

    PubMed

    Scheili, A; Rodriguez, Manuel J; Sadiq, R

    2015-11-01

    The aim of this study was to produce a drinking water assessment tool for operators of small distribution systems. A drinking water quality index (DWQI) was developed and applied to small systems based on the water quality index of the Canadian Council of Ministers of Environment. The drinking water quality index was adapted to specific needs by creating four drinking water quality scenarios. First, the temporal and spatial dimensions of drinking water quality variability were taken into account. The DWQI was designed to express global drinking water quality according to different monitoring frequencies. Daily, monthly, and seasonal assessment was also considered. With the data made available, it was possible to use the index as a spatial monitoring tool and express water quality in different points in the distribution system. Moreover, adjustments were made to prioritize the type of contaminant to monitor. For instance, monitoring contaminants with acute health effects led to a scenario based on daily measures, including easily accessible and affordable water quality parameters. On the other hand, contaminants with chronic effects, especially disinfection by-products, were considered in a seasonal monitoring scenario where disinfection by-product reference values were redefined according to their seasonal variability. A sensitivity analysis was also carried out to validate the index. Globally, the DWQI developed is adapted to the needs of small systems. In fact, expressing drinking water quality using the DWQI contributes to the identification of problematic periods and segments in the distribution system. Further work may include this index in the development of a customized decision-making tool for small-system operators and managers.
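
    The CCME water quality index that the DWQI adapts combines three factors (scope F1, frequency F2 and amplitude F3) into a single 0-100 score. A minimal implementation of the published formula is sketched below with made-up measurements and guideline values.

      import numpy as np

      def ccme_wqi(values, objectives):
          # values: tests x variables matrix; objectives: per-variable upper limits
          fail = values > objectives
          f1 = 100.0 * fail.any(axis=0).mean()        # scope: % of variables failing
          f2 = 100.0 * fail.mean()                    # frequency: % of tests failing
          exc = np.where(fail, values / objectives - 1.0, 0.0)
          nse = exc.sum() / values.size               # normalized sum of excursions
          f3 = nse / (0.01 * nse + 0.01)              # amplitude
          return 100.0 - np.sqrt(f1 ** 2 + f2 ** 2 + f3 ** 2) / 1.732

      rng = np.random.default_rng(15)
      values = rng.lognormal(0.0, 0.4, size=(52, 6))  # weekly tests, 6 parameters
      objectives = np.full(6, 2.0)                    # hypothetical guideline values
      print(f"index: {ccme_wqi(values, objectives):.1f}")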

  14. Cell-free DNA fragment-size distribution analysis for non-invasive prenatal CNV prediction.

    PubMed

    Arbabi, Aryan; Rampášek, Ladislav; Brudno, Michael

    2016-06-01

    Non-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and the paucity of SNPs in a genome. We have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with a fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it. Our method had perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%. Our source code is available on GitHub at https://github.com/compbio-UofT/FSDA. Contact: brudno@cs.toronto.edu.
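
    The core signal exploited here (locally shorter fragments where fetal copy number is elevated) can be illustrated by comparing each genomic bin's short-fragment fraction against the genome-wide baseline; the fragment sizes are simulated, and the 150 bp cut-off is a common heuristic rather than necessarily the authors' choice.

      import numpy as np
      from scipy.stats import binomtest

      rng = np.random.default_rng(6)
      n_bins, n_frag = 200, 5000
      sizes = [rng.normal(167, 20, n_frag) for _ in range(n_bins)]  # per-bin sizes (bp)
      sizes[42] = rng.normal(158, 20, n_frag)   # bin enriched in short fetal fragments

      short = np.array([(s < 150).mean() for s in sizes])   # short-fragment fraction
      p0 = float(np.median(short))                          # genome-wide baseline
      for b in range(n_bins):
          k = int(short[b] * n_frag)
          if binomtest(k, n_frag, p0, alternative="greater").pvalue < 1e-6:
              print(f"bin {b}: short fraction {short[b]:.3f} vs baseline {p0:.3f}")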

  15. [Effects of Tillage on Distribution of Heavy Metals and Organic Matter Within Purple Paddy Soil Aggregates].

    PubMed

    Shi, Qiong-bin; Zhao, Xiu-lan; Chang, Tong-ju; Lu, Ji-wen

    2016-05-15

    A long-term experiment was utilized to study the effects of tillage methods on the contents and distribution characteristics of organic matter and heavy metals (Cu, Zn, Pb, Cd, Fe and Mn) in aggregates of different sizes (1-2, 0.25-1, 0.05-0.25 mm and < 0.05 mm) in a purple paddy soil under two tillage methods: flooded paddy field (FPF) and paddy-upland rotation (PR). The relationship between heavy metals and organic matter in soil aggregates was also analyzed. The results showed that the aggregates under the two tillage methods were dominated by the 0.05-0.25 mm and < 0.05 mm particle sizes, respectively. The content of organic matter in each aggregate decreased with decreasing aggregate size; however, compared to PR, FPF significantly increased the contents of organic matter in soils and aggregates. The tillage methods did not significantly affect the contents of heavy metals in soils, but FPF enhanced the accumulation and distribution of aggregates, organic matter and heavy metals in aggregates with diameters of 1-2 mm and 0.25-1 mm. Correlation analysis found a negative correlation between the contents of heavy metals and organic matter in soil aggregates, but a positive correlation between the amounts of heavy metals and organic matter accumulated in soil aggregates. From the slopes of the correlation equations, we found that the sensitivities of heavy metals to changes in soil organic matter followed the order Mn > Zn > Pb > Cu > Fe > Cd under the same tillage; for a given heavy metal, the sensitivity was higher under PR than under FPF.

  16. Geotechnical properties of ash deposits near Hilo, Hawaii

    USGS Publications Warehouse

    Wieczorek, G.F.; Jibson, R.W.; Wilson, R.C.; Buchanan-Banks, J. M.

    1982-01-01

    Two holes were hand augered and sampled in ash deposits near Hilo, Hawaii. Color, water content and sensitivity of the ash were measured in the field. The ash alternated between reddish brown and dark reddish brown in color and had water contents as high as 392%. A downhole vane shear device measured sensitivities as high as 6.9. A series of laboratory tests including grain size distribution, Atterberg limits, X-ray diffraction analysis, total carbon determination, vane shear, direct shear and triaxial tests were performed to determine the composition and geotechnical properties of the ash. The ash is very fine grained, highly plastic and composed mostly of gibbsite and amorphous material presumably allophane. The ash has a high angle of internal friction ranging from 40-43° and is classified as medium to very sensitive. A series of different ash layers was distinguished on the basis of plasticity and other geotechnical properties. Sensitivity may be due to a metastable fabric, cementation, leaching, high organic content, and thixotropy. The sensitivity of the volcanic ash deposits near Hilo is consistent with documented slope instability during earthquakes in Hawaii. The high angles of internal friction and cementation permit very steep slopes under static conditions. However, because of high sensitivity of the ash, these slopes are particularly susceptible to seismically-induced landsliding.

  17. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    PubMed

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa⁻¹ in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.
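
    Sensitivity and linearity figures like those above are typically extracted from the relative-response-versus-pressure curve: the sensitivity is the slope of a linear fit over the working range, and linearity is judged from its R^2. The response data below are synthetic, generated to echo the reported numbers.

      import numpy as np

      P = np.linspace(0.0, 2.6, 27)                      # applied pressure (kPa)
      rng = np.random.default_rng(25)
      resp = 25.1 * P + rng.normal(0.0, 0.5, P.size)     # synthetic relative response

      S, b = np.polyfit(P, resp, 1)                      # sensitivity = slope (kPa^-1)
      r2 = 1.0 - np.var(resp - (S * P + b)) / np.var(resp)
      print(f"sensitivity = {S:.1f} kPa^-1, linearity R^2 = {r2:.4f}")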

  18. Four-dimensional data assimilation (FDDA) impacts on WRF performance in simulating inversion layer structure and distributions of CMAQ-simulated winter ozone concentrations in Uintah Basin

    NASA Astrophysics Data System (ADS)

    Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik

    2018-03-01

    Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A., in winter 2013. Observational nudging substantially improved WRF performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, an overestimated planetary boundary layer height (PBLH), and underestimated inversion strengths produced by the standard WRF model physics without nudging. However, when observational nudging was applied, the combination of poorly simulated low clouds by the WRF physical parameterization schemes and warm, moist biases in the temperature and moisture initialization and subsequent simulation fields likely amplified the overestimation of warm clouds on inversion days, degrading the O3 photochemical formation simulated in the chemistry model. To reduce the impact of this moist bias on warm cloud formation, nudging toward the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, because of poor vertical temperature profiles in the analysis fields, analysis nudging also increased the errors in the modeled inversion layer vertical structure relative to observational nudging. Combining both observational and analysis nudging produced unrealistically extreme stratification that trapped pollutants at the lowest elevations in the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.
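
    For readers unfamiliar with WRF's nudging machinery, the two FDDA modes contrasted above are controlled by switches in the &fdda block of namelist.input. The sketch below is illustrative only; the coefficients are typical values, not the settings used in this study. The if_no_pbl_nudging_q switch implements the strategy of nudging analysis moisture only above the PBL.

        &fdda
         grid_fdda           = 1,         ! analysis (grid) nudging on
         gfdda_inname        = "wrffdda_d<domain>",
         gfdda_interval_m    = 360,       ! nudge toward 6-hourly analyses
         guv                 = 3.e-4,     ! wind nudging coefficient (s-1)
         gt                  = 3.e-4,     ! temperature nudging coefficient (s-1)
         gq                  = 1.e-5,     ! moisture nudging coefficient (s-1)
         if_no_pbl_nudging_q = 1,         ! nudge analysis moisture above the PBL only
         obs_nudge_opt       = 1,         ! observational nudging on
         obs_coef_wind       = 6.e-4,     ! obs nudging strengths (s-1)
         obs_coef_temp       = 6.e-4,
         obs_coef_mois       = 6.e-4,
         obs_rinxy           = 240.,      ! horizontal radius of influence (km)
        /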

  19. A Python Interface for the Dakota Iterative Systems Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E.; Syvitski, J. P.

    2016-12-01

    Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user must not only understand the structure and syntax of the Dakota input file, but also develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface, simplifying the process of configuring and running a Dakota experiment. A user can program to the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed through the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six analysis methods from the much larger Dakota library have been implemented as examples. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.
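
    A sketch of the scripted workflow described above is given below. The class and attribute names are assumptions modeled on the description, not necessarily the CDI's exact API; consult the package distribution for the real interface.

        # Hypothetical sketch of a scripted CDI experiment; names are
        # illustrative, not necessarily the package's exact API.
        from dakotathon import Dakota  # the CSDMS Dakota interface (CDI)

        # Polynomial chaos study of HydroTrend's long-term sediment load (Qs)
        # under uncertain mean annual temperature (T) and precipitation (P).
        d = Dakota(method="polynomial_chaos", component="hydrotrend")
        d.variables.descriptors = ["starting_mean_annual_temperature",
                                   "total_annual_precipitation"]
        d.variables.lower_bounds = [12.8, 1.4]   # degC, m/yr; bounds invented
        d.variables.upper_bounds = [15.8, 1.8]
        d.responses.response_descriptors = ["Qs_median"]

        d.setup()  # writes the Dakota input file and generic analysis driver
        d.run()    # Dakota drives the BMI-wrapped model run by run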

  20. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected at nine sites in the U.S. Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol' total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying that it is overparameterized and/or underconstrained. A case study at a wet site shows that GW has two functional modes: one that mimics STD, and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests that a priori assignment of parameters should also consider climatic differences.
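
    The Sobol' decomposition used in the study can be reproduced in outline with a generic sensitivity analysis library. The sketch below uses SALib on a toy two-parameter stand-in (not the Noah LSM or the authors' code) to show how first-order (S1) and total-order (ST) indices are computed; a large ST - S1 gap is the signature of the parameter interaction discussed above.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy stand-in for an LSM response: two parameters with an interaction.
        problem = {
            "num_vars": 2,
            "names": ["porosity", "min_stomatal_resistance"],
            "bounds": [[0.3, 0.6], [40.0, 300.0]],
        }

        def model(x):
            # The cross term makes total-order indices exceed first-order ones.
            return x[:, 0] + 0.01 * x[:, 1] + 0.05 * x[:, 0] * x[:, 1]

        X = saltelli.sample(problem, 1024)  # Saltelli cross-sampling design
        Y = model(X)
        Si = sobol.analyze(problem, Y)

        # ST - S1 is the share of a parameter's influence due to interactions.
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: S1={s1:.2f}  ST={st:.2f}  interaction={st - s1:.2f}")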
